Staten Island Calculus Tutor
...At first, chemistry can be really tough because it seems like there is always an exception to the rule, but once you begin to think like an atom, things get less electron cloudy. I can help you
forgive your grievances with chemistry and let your pi-bonds be pi-bonds. At first, physics didn't make any sense to me either.
9 Subjects: including calculus, chemistry, physics, biology
...I recently found a job at a global company as an engineer and relocated to the NJ area, and I hope to continue tutoring. I tailor my teaching style to accommodate the student because each individual
learns differently, and I think that it's important for the student and tutor to meet prior to their firs...
15 Subjects: including calculus, English, Chinese, GRE
...I have had outstanding results working with students through both online and traditional lessons. I've been a soccer coach (U9 through U18) at Fort Lee Soccer League for six years. I've
coached both our in-town and travel teams. My teams have won several tournaments and season leagues.
83 Subjects: including calculus, chemistry, physics, statistics
...I finished the remaining two months of my student teaching back in New Jersey in a second grade classroom. Upon graduating, I worked in Piscataway as a long-term (9 months) eighth grade math
teacher. Most recently, I worked in Howell as a long-term (4 months) seventh grade math teacher.
18 Subjects: including calculus, reading, geometry, precalculus
Over the past year, I have tutored more than a dozen students at all academic levels (K-12 & undergrad). I enjoy watching my students learn and grow over time, as they begin to grasp new material
and develop an affinity for learning. I strive to imbed a deeper understanding of most subjects than is...
38 Subjects: including calculus, Spanish, algebra 1, GRE
by W. Todd Maddox
Venue: Journal of the Experimental Analysis of Behavior
Citations: 26 - 12 self
author = {W. Todd Maddox},
title = {Toward a Unified Theory of Decision Criterion Learning in Perceptual Categorization},
year = {2002}
Optimal decision criterion placement maximizes expected reward and requires sensitivity to the category base rates (prior probabilities) and payoffs (costs and benefits of incorrect and correct
responding). When base rates are unequal, human decision criterion placement is nearly optimal, but when payoffs are unequal, suboptimal decision criterion placement is observed, even when the optimal decision
criterion is identical in both cases. A series of studies are reviewed that examine the generality of this finding, and a unified theory of decision criterion learning is described (Maddox & Dodd,
2001). The theory assumes that two critical mechanisms operate in decision criterion learning. One mechanism involves competition between reward and accuracy maximization: The observer attempts to
maximize reward, as instructed, but also places some importance on accuracy maximization. The second mechanism involves a flat-maxima hypothesis that assumes that the observer’s estimate of the
reward-maximizing decision criterion is determined from the steepness of the objective reward function that relates expected reward to decision criterion placement. Experiments used to develop and
test the theory require each observer to complete a large number of trials and to participate in all conditions of the experiment. This provides maximal control over the reinforcement history of the
observer and allows a focus on individual behavioral profiles. The theory is applied to decision criterion learning problems that examine category discriminability, payoff matrix multiplication and
addition effects, the optimal classifier’s independence assumption, and different types of trial-by-trial feedback. In every case the theory provides a good account of the data, and, most important,
provides useful insights into the psychological processes involved in decision criterion learning.
850 Signal Detection Theory and Psychophysics - Green, Swets - 1966
242 Foraging Theory - Stephens, Krebs - 1986
229 A neuropsychological theory of multiple systems in category learning - Ashby, Alfonso-Reese, et al. - 1998
168 Decision rules in the perception and categorization of multidimensional stimuli - Ashby, Gott - 1988
149 Comparing decision bound and exemplar models of categorization - Maddox, Ashby - 1993
113 Varieties of perceptual independence - Ashby, Townsend - 1986
108 Relations between prototype, exemplar, and decision bound models of categorization - Ashby, Maddox - 1993
98 Prototypes in the mist: The early epochs of category learning - Smith, Minda - 1998
97 Relative and absolute strength of response as a function of frequency of reinforcement - Herrnstein - 1961
93 Learning in extensive form games: Experimental data and simple dynamic models in the intermediate term - Roth, Erev - 1995
86 Multidimensional models of categorization - Ashby - 1992
71 The problem of inference from curves based on group data - Estes - 1950
68 Complex decision rules in categorization: Contrasting novice and experienced performance - Ashby, Maddox - 1992
67 On the law of effect - Herrnstein - 1970
64 Models for behavior: Stochastic processes in psychology - Wickens - 1982
62 On the dangers of averaging across subjects when using multidimensional scaling or the similarity–choice model - Ashby, Maddox, et al. - 1994
61 Induction of category distributions: A framework for classification learning - Fried, Holyoak - 1984
59 On the dangers of averaging across observers when comparing decision bound models and generalized context models of categorization - Maddox - 1999
57 An adaptive approach to human decision making: Learning theory, decision theory, and human performance - Busemeyer, Myung - 1992
56 Striatal contributions to category learning: Quantitative modeling of simple linear and complex nonlinear rule learning in patients with Parkinson’s Disease - Maddox, Filoteo - 2001
52 Integrating information from separable psychological dimensions - Ashby, Maddox - 1990
50 Multivariate probability distributions - Ashby - 1992
46 Predicting similarity and categorization from identification - Ashby, Lee - 1991
39 Signal detection by human observers: A cutoff reinforcement learning model of categorization decisions under uncertainty - Erev - 1998
37 A possible role of the striatum in linear and nonlinear category learning: Evidence from patients with Huntington's disease - Filoteo, Maddox, et al. - 2001
36 Stimulus categorization - Ashby, Maddox - 1998
34 Quantitative modeling of category learning in amnesic patients - Filoteo, Maddox, et al. - 2001
34 Selective attention and the formation of linear decision boundaries - Maddox, Ashby - 1998
33 Probability matching and the formation of conservative decision rules in a numerical analog of signal detection - Healy, Kubovy - 1981
31 Base-rate and payoff effects in multidimensional perceptual categorization - Maddox, Bohil - 1998
26 Statement verification: A stochastic model of judgment and response - Wallsten, Gonzalez-Vallejo - 1994
24 The decision rule in probabilistic categorization: What it is and how it is learned - Kubovy, Healy - 1977
21 Categorizing externally distributed stimulus samples for unequal molar probabilities - Lee, Janke - 1965
21 Probability matching as a basis for detection and recognition decisions - Thomas, Legge - 1970
19 The neurobiology of category learning - Ashby, Ell - 2001
18 Costs and payoffs in perceptual research - Winterfeldt, Edwards - 1982
17 Base-rate effects in multidimensional perceptual categorization - Maddox - 1995
17 On the relation between base-rate and cost-benefit learning in simulated medical diagnosis - Maddox, Dodd - 2001
17 Criterion adjustment and probability matching - Thomas - 1975
16 Toward a generalization of signal detection theory to N-person games: The example of two-person safety problem - Erev, Gopher, et al. - 1995
15 A response time theory of perceptual separability and perceptual integrality in speeded classification - Ashby, Maddox - 1994
15 Some evidence on additive learning models - Dusoir - 1980
15 Overestimation of base-rate differences in complex perceptual categories - Maddox, Bohil - 1998
13 The randomization procedure in the study of categorization of multidimensional stimuli by pigeons. J Exp Psychol Anim Behav Process - Herbranson, Fremouw, et al. - 1999
13 Costs and benefits in perceptual categorization - Maddox, Bohil - 2000
12 Psychophysics of remembering - White, Wixted - 1999
10 Category discriminability, baserate, and payoff effects in perceptual categorization - Bohil, Maddox - 2001
10 Stimuli, reinforcers, and behavior: An integration - Davison, Nevin - 1999
10 Factorial effects in the categorization of externally distributed stimulus samples - Lee, Zentall - 1966
10 Why I am not a cognitive psychologist - Skinner - 1977
Floral Park Precalculus Tutor
Find a Floral Park Precalculus Tutor
...Topics include formatting cells, inserting functions, using relative and global addresses, inserting graphs, conditional formatting, sorting rows and columns, and linking cells. I have over 15
years experience in computer support for both Macs and PCs. I have experience with hardware upgrades, installing software, setup of networks and printers/scanners, and software instruction.
22 Subjects: including precalculus, calculus, physics, geometry
...My current students have done and are doing exceptionally well, and most have been moved to advanced programs. Students leave my home only when they have gained a thorough understanding of and
confidence in the topic!! I train students for school tests, state tests, Regents, and the SAT. I assure good scores!! I am an M.S. (Telecommunications Engineer). Math is my primary subject.
9 Subjects: including precalculus, calculus, algebra 1, algebra 2
I am a Chemical Engineering graduate (B.Tech - Honours) from the Indian Institute of Technology with a career spanning over 30 years. I have a strong background in Physics and Mathematics/Statistics,
and I enjoy teaching and tutoring.
11 Subjects: including precalculus, calculus, statistics, algebra 2
...In doing so I graduated Summa Cum Laude in 2004. I have worked as a New York City High School Science teacher for three and a half years, from 2006 to 2010, specializing in Living Environment
and Earth Science. In addition to those two subjects I had also instructed my own Biotechnology elective, as well as Health Education.
20 Subjects: including precalculus, English, chemistry, physics
...I scored a perfect 800 on the SAT math section when I took the test prior to going to Princeton. I find that an approach of repeatedly working through actual SAT problems and identifying the
types of problems that the SAT commonly asks is the best training to do well on the test. I have an MA in philosophy from Virginia Tech and I completed PhD coursework at Brown.
40 Subjects: including precalculus, chemistry, English, physics
Davidson Missouri W/Traveling Salesperson Problem
Although our current project is to develop a bacterial computer that solves Hamiltonian path problems, in the future we would also like to tackle the Traveling Salesman problem using similar methods.
The TSP asks: Given a directed graph where each edge has a cost associated with it, what is the cheapest, or shortest, path to take such that you end at your starting point and visit every node
exactly once?
The graph above shows a modified complete graph with edges leaving the ending node (#4), returning to the start node (#1), and moving from the start to the stop node removed. If we wanted to solve
this weighted and directed graph for the shortest path through all nodes, starting at node #1, ending at node #4, and passing through each node only once, we could use our current HPP E. coli
computer construct with one slight modification.
Instead of putting each half-gene back to back along an edge, we could add in spacers of specified lengths that would allow us to model the various weights in the graph above. These weights would
give edges different lengths (in base pairs). After performing PCR on all of the solved plasmids (with primers binding to the promoter and terminator), we would be able to find the shortest path
through all of the nodes by running the PCR products on a gel. Because the total length of the genes in any Hamiltonian Path through the graph is a constant, the smallest solved fragment will have
the lowest total spacer length and will, therefore, be the solution to the Traveling Salesperson Problem (shown above).
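To make the selection step concrete, here is a small Python sketch of the same search done in software. The node count and edge weights below are hypothetical (the figure's values are not reproduced here); the point is only that the cheapest valid start-to-end ordering wins, just as the shortest PCR fragment wins on the gel.

from itertools import permutations

# A 4-node version of the graph described above: directed and complete,
# except that edges leaving node 4, edges entering node 1, and the edge
# 1 -> 4 are removed. The weights are hypothetical.
weights = {(1, 2): 3, (1, 3): 1, (2, 3): 2, (3, 2): 4, (2, 4): 1, (3, 4): 5}

def cheapest_hamiltonian_path(start=1, end=4, interior=(2, 3)):
    # Enumerate orderings of the interior nodes and keep the cheapest
    # start -> ... -> end path; total weight plays the role that total
    # spacer length plays in the wet-lab version.
    best = None
    for order in permutations(interior):
        path = (start,) + order + (end,)
        try:
            cost = sum(weights[edge] for edge in zip(path, path[1:]))
        except KeyError:  # a required edge does not exist
            continue
        if best is None or cost < best[0]:
            best = (cost, path)
    return best

print(cheapest_hamiltonian_path())  # (6, (1, 3, 2, 4)) with these weights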
False positives can also come into play with this construct. However, certain rules can be put into place when choosing spacer lengths to avoid having false positive PCR products that are shorter
than the true solution.
Ada Programming/Attributes/'Base
Represents the base type of another type or subtype. This attribute is used to access attributes of the base type.
T'Base refers to the "base range" of the type, which defines the range in which intermediate calculations are performed.
The standard states that the range of T'Base:
1. includes the value 0
2. is symmetric about zero, with possibly an extra negative value
3. includes all of the values in the subtype T
So for example, if T is:
type T is range 1 .. 42;
then T'Base is
type T'Base is range -42 .. 42;
Note that built-in operators go through the base type, and T's "+" op for example is implicitly declared as:
function "+" (L, R : T'Base) return T'Base;
There are no constraint checks on T'Base, so for example:
O1 : T := T'(1) + T'(2);
O2 : T'Base := T'(1) + T'(2);
then in the first assignment to O1, there is a constraint check to ensure that the result of 1 + 2 is in the range of T, but in the second assignment to O2, there is no check.
T'Base is useful for generics, when you need to be able to recover the base range of the type, in order to declare an object with value 0; for example, if this is an accumulator.
It's helpful to know something about the base range of the type, so that you have a guarantee that you don't get any overflow during intermediate calculations. For example, given type T above then
procedure Op (O1, O2 : T) is
Sum : T'Base := O1 + O2;
This is a problem, since if the sum of O1 and O2 is large (that is, greater than T'Base'Last), then you'll get overflow. Knowing that you're going to be adding two values together means you should
declare the type this way:
T_Last : constant := 42;
type T_Base is range 0 .. 2 * T_Last;
subtype T is T_Base range 1 .. T_Last;
That way you know that (sub)type T's range is 1 .. 42, but you also have a guarantee that T'Base'Last >= 84, and hence the sum of two values of type T cannot overflow.
Note that a declaration of the form:
type T is range ...
actually declares a subtype, named T, of some anonymous base type. We can refer to the range of this base type as T'Base.
Note also that an enumeration type is its own base type, so given this type:
type ET is (A, B, C);
then the range of ET is the same as the range of ET'Base. If you need some extra literals in your "base" type, then you have to declare them manually, not unlike what we did above:
type ET_Base is (ET_Base_First, A, B, C, ET_Base_Last);
subtype ET is ET_Base range A .. C;
Now you can say ET'Succ (ET'Last) and you'll get a meaningful answer. This is necessary when you do something like:
E : ET'Base := ET'First;
while E <= ET'Last loop
... -- do something
E := ET'Succ (E);
end loop;
If you declare:
type My_Enum is (Enum1, Enum2, Enum3);
subtype Sub_Enum is My_Enum range Enum1 .. Enum2;
then Sub_Enum'Base'Last is Enum3.
some questions on Riemann surface
There are several puzzling questions on Riemann surfaces for me: Q.1 The definition of a Riemann surface can be given in at least two ways: Def.1) it is a complex one-dimensional manifold; Def.2) for each
$a\in \mathbb{C}$, consider the collection of germs at $a$ of analytic functions, and give it a topology. Are these really equivalent definitions, or is Def.2 more general than Def.1?
Q.2 When we say a group $G$ is an automorphism group of a compact Riemann surface, what is the action? (For example, what is a description of the action of PSL(2,7) on a genus-3 Riemann surface? In the
book of Thomas Breuer, I couldn't see any description of the action of a group on a Riemann surface; he has given computational methods to investigate groups.)
Q.3 Can the automorphisms of a compact Riemann surface always be lifted to the universal cover?
Q.4 If a group $G$ acts on a compact Riemann surface $X_g$ of genus $g$, then $X_g/G$ is also a compact Riemann surface of some genus $h$, and $g,h$ are related by the Riemann-Hurwitz formula. Can anyone
suggest a good reference for this relation? (Here, I would like to see the Riemann-Hurwitz relation topologically; many books describe it using algebraic geometry techniques.)
(I went through many books on Riemann surfaces for these questions, but did not understand many things.)
cv.complex-variables riemann-surfaces
2 I can recommend for all those questions the book "Lectures on Riemann surfaces" by Otto Forster. One doesn't have to know any algebraic geometry to understand the book. – Someone Feb 10 '11 at
Regarding Q4: You can find a purely topological derivation of Riemann-Hurwitz in paragraph 21 of Prasolov & Sossinskys "Knots, links, braids and 3-manifolds". – bavajee Feb 10 '11 at 14:13
Your definition 2 does not seem to define anything at all, since you didn't specify additional conditions on the topology. Do you plan to choose some set-theoretic bijection between the points $a
\in \mathbb{C}$ and the points on your Riemann surface? – S. Carnahan♦ Feb 10 '11 at 15:54
The first one is explained pretty well in Weyl's "The Idea of a Riemann Surface" (though the terminology is somewhat old-fashioned). – arsmath Feb 11 '11 at 12:08
2 Answers
Q1. There are two DIFFERENT notions of Riemann surface in the literature.
a) One-dimensional complex analytic manifold (coming from the book of Weyl).
b) Riemann surface "spread over the plane (or over the Riemann sphere)". Your second definition, the set of germs with an appropriate topology on it, formalizes this second notion.
Older books seem to understand Riemann surfaces in the sense of the second definition. Sometimes a) was called an "abstract Riemann surface" in these books.
For most mathematicians with modern training the "Riemann surface of log z" and the "Riemann surface of arccos z" are meaningless expressions because these are the same as the plane, in
the sense of definition a).
The formal relation between a) and b) is the following. "A Riemann surface spread over the plane" is a pair (S,f), where S is an abstract Riemann surface and f is a holomorphic function
from S to C. (If f is meromorphic, we have a Riemann surface spread over the sphere.)
Here is another way to say this. Let S be a Riemann surface in the sense a). It has a set of charts $\phi_j: U_j\to D_j$ from the elements of an open covering U to discs D in the plane.
The correspondence maps $\phi_k\circ\phi_j^{-1}$ on $D_j\cap D_k$ must be conformal.
Now let us require that these correspondence maps be IDENTITY maps of $D_j\cap D_k$. Then we obtain notion b). This is an additional structure on a Riemann surface in the sense a) which
is sometimes called a flat structure.
If you look carefully (say, on the example of arccos) you will see that the two definitions of a Riemann surface in the sense b) that I gave are not exactly equivalent. More about this
in my survey "Geometric theory of meromorphic functions", and in the preprint of Biswas and Perez Marco, Log Riemann Surfaces.
Q1: Use the manifold definition, going back to Weyl. The other definition comes out of the theory of analytic continuation. (And is somewhat puzzling historically - I'm not quite sure how
the Poincaré-Volterra theorem fits in, but these days you'd probably want to read this material in terms of sheaf theory, to which it was one of the inputs.)
Q2: G acts on the field of meromorphic functions, is one way to look at it. These are holomorphic mappings of the surface to itself, described by some algebraic mappings in fact.
Q3: I think so, by "abstract nonsense".
Q4: The quotient is to be treated carefully, since quotients of manifolds are not always manifolds. But in terms of the function field this can be seen as Galois theory, and X is a ramified
(usually) covering of the quotient curve. The topological explanation of the Euler characteristic in the Riemann-Hurwitz formula is intuitively clear: just look at what happens under the k
-th power map on the unit complex disc, in terms of a simple triangulation, to see how ramification affects coverings.
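For reference, since both answers allude to it without stating it, the Riemann-Hurwitz relation for a degree-$n$ branched covering $f: X_g \to Y_h$ reads (a standard statement, not quoted from either answer):

$$2g - 2 = n(2h - 2) + \sum_{p \in X_g} (e_p - 1),$$

where $e_p$ is the ramification index at $p$. The $k$-th power map $z \mapsto z^k$ on the disc is the local model of ramification; it contributes $e_p - 1 = k - 1$ at the origin, which is exactly the Euler-characteristic bookkeeping described above.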
For Q3: If H is the upper half-plane, S the Riemann surface, and pi:H->S, then composing pi with any automorphism gives a holomorphic map from H to S. Since this map takes a
simply-connected surface to S (in particular the image of pi_1 of H is trivial), it must lift to a holomorphic map to H. – Jonah Feb 11 '11 at 16:48
"lifting" the holomorphic map is easy; we apply a topological criteria; but is it easy to show that the lift is "holomorphic"? – Martin David Feb 12 '11 at 3:12
Sure. Say the automorphism of the surface you start with is called f, and the lifted automorphism is called F, and pick x in H. Pick a small enough neighborhood of F(x) so that the
restriction of the (holomorphic) covering map is an analytic isomorphism. Now it is clear, since f compose pi is the same as pi compose F, that F can be written, in a suitably small
neighborhood of x, as the composition of holomorphic maps. – Jonah Feb 12 '11 at 9:05
Design a mod-6 synchronous counter
Design a MOD-6 synchronous counter using J-K Flip-Flops.
Design of the Mod-6 Counter: A Mod-6 synchronous counter has six counter states (that is, 0 to 5). For this counter, the counter design table lists the three flip-flops, their states over the count
sequence, and the six J-K inputs for the three flip-flops. The flip-flop inputs required to step the counter from the present state to the next state are worked out with the help of the excitation
table. The required counter states and the J-K inputs needed for the counter flip-flops are given in the counter design table shown in Table no. 1.
Input pulse     Counter states      Flip-flop inputs
   count          A   B   C       JA KA    JB KB    JC KC
     0            0   0   0       1  X     0  X     0  X
     1            1   0   0       X  1     1  X     0  X
     2            0   1   0       1  X     X  0     0  X
     3            1   1   0       X  1     X  1     1  X
     4            0   0   1       1  X     0  X     X  0
     5            1   0   1       X  1     0  X     X  1
    6(0)          0   0   0
Table no.1: Counters Design Table for Mod-6 Counter
Flip-Flop A:
The initial state is 0. This changes to 1 after the clock pulse. Thus, JA must be 1 and KA may be 0 or 1 (that is, X). In the next state, 1 changes to 0 after the clock pulse. Hence, JA may be 0
or 1 (that is, X) and KA must be 1.
Flip-Flop B:
The initial state is 0 and it remains unchanged after the clock pulse. Hence, JB must be 0 and KB may be 0 or 1 (i.e., X). In the next state, 0 changes to 1 after the clock pulse. Hence, JB
must be 1 and KB may be 0 or 1 (that is, X).
Flip-Flop C:
The initial state is 0 and it remains unchanged after the clock pulse. Hence JC must be 0 and KC may be 0 or 1 (that is, X). In the next state, it again remains unchanged after the clock pulse.
Thus, JC must be 0 and KC may be 0 or 1 (that is, X). The J-K inputs required for this have been found with the help of the excitation table (Table no. 1). The flip-flop input values are entered
into the Karnaugh maps shown in Fig. (a) [(i) through (vi)], a Boolean expression is determined for each input to the three flip-flops, and each expression is simplified. As not all eight states
are used by the counter, Xs (don't cares) are entered for the unused states. The simplified expression for each input is shown under its map. Finally, these minimal expressions for the flip-flop
inputs are used to draw the logic diagram for the counter shown in Fig. (c).
Fig.(b) Karnaugh Maps for JA,KA,JB,KB,JC,KC
Fig.(c) Logic Diagram for MOD-6 Synchronous Counter
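As a quick cross-check on the design, the short Python simulation below steps three J-K flip-flops through the count sequence. The minimal input equations it uses (JA = KA = 1, JB = A AND NOT C, KB = A, JC = A AND B, KC = A) are one reading consistent with Table no. 1; since the Karnaugh-map figures are not reproduced here, treat them as assumptions rather than the exact expressions of Fig. (b).

def jk_next(q, j, k):
    # Next state of a J-K flip-flop: Q+ = J.Q' + K'.Q
    return (j and not q) or (not k and q)

def step(a, b, c):
    # Assumed minimal equations read off the design table (A is the LSB):
    # JA = KA = 1 (A toggles every pulse), JB = A.C', KB = A, JC = A.B, KC = A
    ja, ka = True, True
    jb, kb = (a and not c), a
    jc, kc = (a and b), a
    return jk_next(a, ja, ka), jk_next(b, jb, kb), jk_next(c, jc, kc)

a = b = c = False
for pulse in range(12):
    print(pulse, int(a) + 2 * int(b) + 4 * int(c))
    a, b, c = step(a, b, c)
# Prints the repeating count sequence 0, 1, 2, 3, 4, 5, 0, 1, ...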
Unless otherwise stated, the term ``system'' in this section will mean an autonomous Hamiltonian system. The discussion of symplectic integration for Hamiltonian systems in this section draws heavily
from Sanz-Serna [66], which is a very readable introduction to the subject. Another seminal paper is Channel and Scovel [13].
The question of what kind of integrator to use to integrate N-body systems is one that has had plenty of attention. No discussion of stellar system N-body integrators would be complete without
mention of Aarseth's classic integration scheme for collisional systems [1], which includes regularization. Makino [55] derived formulae for optimal order and timestep criteria for Aarseth-type
integrators. For collisionless N-body systems, and indeed many non-astronomical Hamiltonian systems, symplectic integrators have enjoyed widespread popularity. Recent years have also seen a growing
interest in exactly conserving integrators.
The popularity of symplectic integrators stems from the fact that the time-t flow of any Hamiltonian system is a symplectic or canonical mapping. One consequence of this is that the mapping is volume
preserving in phase space. This is a consequence of Liouville's Theorem (see, for example, [56]), which states that the phase-space density of trajectories inside an ensemble remains constant with
time. A symplectic integrator also preserves phase space volume to within the machine precision. As a result, symplectic integrators tend to preserve qualitative properties of phase space
trajectories: trajectories do not cross, and although energy is not exactly conserved, energy fluctuations are bounded. In fact, it can be shown that the phase space of a linear Hamiltonian problem
simply undergoes slight stretches and/or contractions along various axes, so for example a circular trajectory in 2 dimensions becomes an ellipse. Furthermore, if the Hamiltonian function is
sufficiently smooth, then the discrete solution produced by a symplectic integrator lies on the exact solution of a problem that has a slightly perturbed Hamiltonian. In general, even for non-smooth
Hamiltonians, the discrete solution lies exponentially close (in the step size) to the exact solution of such a perturbed problem [66].
Non-symplectic integrators generally turn a conservative system into a dissipative one, while a symplectic integrator, on average over time, preserves the conservative nature of the system. Earn and
Tremaine [23] point out that roundoff error actually destroys the exact symplecticness of a time- and/or space-continuous system. They demonstrate how one can build a precisely symplectic discrete
map over a lattice discretization of space using integer arithmetic, so that if the modeled problem is Hamiltonian, then the discrete model is also exactly Hamiltonian, but with a slightly different
Hamiltonian function. This builds confidence in the qualitative nature of the solution being similar to that of the continuous system.
Since symplectic integrators preserve the topological structure of trajectories in phase space, they are probably more reliable for long-term integrations than non-symplectic integrators, since a
non-symplectic integrator causes the system to become dissipative, which in turn can badly distort the topological structure of the phase flow over long periods. However, symplectic integrators are
often implicit, and require more function evaluations and smaller timesteps to produce the same computational accuracy as compared to standard non-symplectic integrators [66]. Thus, for short-term
integrations where accuracy is paramount, a standard non-symplectic integrator may be more efficient.
Another drawback of symplectic integration is that any integration scheme which is symplectic for constant timestep does not remain symplectic if the timestep is changed based solely and explicitly
on the state vector [71]. Recall that the time-t flow of a Hamiltonian system is a symplectic map for each fixed t; a step size that varies with the state does not correspond to any single fixed-time flow, and the symplectic property is lost.
Nonetheless, several researchers have begun to experiment with variable timestep symplectic integrators. Hut, Makino and McMillan [40] introduce an implicit method of symmetrization which is intended
to preserve approximate time symmetry. For the Leapfrog integrator, their method reduces to choosing the step implicitly as

dt = [ h(y_old) + h(y_new) ] / 2,

where h(y) is a function used to compute the timestep, based solely on the state vector. They tested their symmetrized leapfrog on a Kepler problem with eccentricity e=0.9, using 1 implicit iteration
per timestep.
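To illustrate the flavor of this scheme, here is a minimal Python sketch (not code from the cited papers) of a kick-drift-kick leapfrog with the symmetrized step criterion, applied to a two-body problem; the particular step function h and the single fixed-point iteration are arbitrary choices.

import numpy as np

def accel(x):
    # Point-mass gravity with GM = 1
    return -x / np.linalg.norm(x)**3

def h(x, eta=0.01):
    # State-dependent step criterion: a small fraction of the local orbital time
    return eta * np.linalg.norm(x)**1.5

def leapfrog(x, v, dt):
    # One kick-drift-kick step
    v = v + 0.5 * dt * accel(x)
    x = x + dt * v
    v = v + 0.5 * dt * accel(x)
    return x, v

def symmetrized_step(x, v, iters=1):
    # Choose dt = [h(y_old) + h(y_new)] / 2 by fixed-point iteration
    dt = h(x)
    for _ in range(iters):
        x1, _ = leapfrog(x, v, dt)
        dt = 0.5 * (h(x) + h(x1))
    return leapfrog(x, v, dt)

# Kepler orbit with eccentricity e = 0.9, started at pericenter
e = 0.9
x = np.array([1.0 - e, 0.0])
v = np.array([0.0, np.sqrt((1.0 + e) / (1.0 - e))])
E0 = 0.5 * (v @ v) - 1.0 / np.linalg.norm(x)
for _ in range(20000):
    x, v = symmetrized_step(x, v)
E1 = 0.5 * (v @ v) - 1.0 / np.linalg.norm(x)
print("relative energy error:", abs((E1 - E0) / E0))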
Skeel and Biesiadecki [72] introduce another method for using different timesteps while preserving the symplectic nature of the integrator. They show that, if it is possible to split the Hamiltonian
into independent additive terms, then each term may be integrated with a different, but constant, timestep, and the integration remains exactly symplectic. Each timestep must be an integer multiple
of the smallest timestep. For example, if there are only two step sizes and the large timestep is a multiple M of the smaller, then the force at timestep n is computed as F_n = F_fast(x_n) + M F_slow(x_n) when n is a multiple of M, and F_n = F_fast(x_n) otherwise; the slowly varying force is applied as an amplified impulse every M small steps.
They show that an artificial splitting of the Hamiltonian into ``quickly'' and ``slowly'' varying components can also utilize their method. They test their method on the Kepler problem, setting the
``quickly'' varying component to be the half of the orbit that is closest to the central mass, and the slowly varying component to be the other half of the orbit. Their results are similar to the
second-order results of Hut et al. [40]. It would be interesting to see if a 4th order version of their method would also give comparable results to Hut et al. [40]. The error of this method is
analyzed more fully in Littell, Skeel, and Zhang [54], where they come to the unsurprising conclusion that the smallest stepsize times the highest frequency of the ``quickly'' changing components
must be small. In other words, their method still needs an adequate sample of the highest frequency changes in order to integrate them. They also prove a corresponding formal error bound.
In addition to symplectic integration, there is an active and quickly blooming body of literature on exactly conserving algorithms. In general, there does not exist an integrator that is both
symplectic and exactly energy conserving for Hamiltonian systems [69, 71]. However, since energy is not the only quantity conserved in most systems, there is still the possibility of building a
symplectic integrator that conserves other quantities, although there are limits to what the integrator can conserve in comparison to the real system [25]. Furthermore, if the system is not
Hamiltonian, then a symplectic integrator provides no added benefit, whereas a conserving integrator may still be of great value. One must be careful, however, how one enforces energy conservation.
It is plausible that a method which naïvely enforces global energy conservation may introduce spurious mechanisms of energy transport within the system [69].
Shadwick, Bowman, and Morrison [69] introduce a non-traditional integration scheme which is explicit, and exactly conserving of all quantities one is willing to program it to conserve. The drawback
is that the method is not general: one needs to derive a complicated set of explicit formulae for each new integrator-problem pair. One applies the integration method to one's problem on paper, and
then derives exactly how the conserved quantities of interest are not conserved. Then one simply adds terms to the integration routine that compensate for the non-conservation. If the standard
version of the integrator is explicit, then so is the modified version. Although it is a very tedious process, it seems to have impressive results. The authors present 3 examples: the ``three-wave''
problem, the Lotka-Volterra predator-prey problem, and the Kepler problem, each demonstrated with both Euler's method and a 2nd order predictor-corrector. In the Kepler problem they force
conservation of energy, angular momentum, and a quantity known as the Runge-Lenz vector. They note that non-conservation of phase-space volume in Hamiltonian systems can be used as a measure of
error, since one can no longer use conservation of other quantities as a measure of error. The authors, however, do not clearly state the limitations of their method. For example, it is not clear
precisely why one couldn't use their method to force conservation of phase space volume as well as energy, which we know to be impossible [25, 71].
We will close our discussion of integrators with a remark by Sanz-Serna [66, p. 278] (d is the dimension of configuration space):
Conservation of energy restricts the dynamics of the numerical solution by forcing the computed points to be on the correct (2d-1)-dimensional manifold H= constant, but otherwise poses no
restriction to the dynamics: within the manifold the points are free to move anywhere and only motions orthogonal to the manifold are forbidden. When d is large this is clearly a rather weak
restriction. On the other hand, symplecticness restricts the dynamics in a more global way: all directions in phase space are taken into account.
Although I think he makes an extremely interesting point, I note that his remark does not address the possibility that Shadwick et al. [69] have demonstrated: that energy is not the only quantity that
can be conserved. Thus, it is this author's opinion that much interesting work remains in the area of integration of conservative systems.
Wayne Hayes, Fri Dec 27 17:41:39 EST 1996
A Primer on Arc Elasticity
When calculating arc elasticities, the basic relationships stay the same. So when we're calculating Price Elasticity of Demand, we still use the basic formula:

PEoD = (% Change in Quantity Demanded)/(% Change in Price)

However, how we calculate the percentage changes differs. Before, when we calculated Price Elasticity of Demand, Price Elasticity of Supply, Income Elasticity of Demand, or Cross-Price Elasticity of
Demand, we'd calculate the percentage change in quantity demanded the following way:
[QDemand(NEW) - QDemand(OLD)] / QDemand(OLD)
To calculate an arc-elasticity, we use the following formula:
[[QDemand(NEW) - QDemand(OLD)] / [QDemand(OLD) + QDemand(NEW)]]*2
This formula takes an average of the old quantity demanded and the new quantity demanded in the denominator. By doing so, we will get the same answer (in absolute terms) whether we choose $9 as old
and $10 as new, or $10 as old and $9 as new. When we use arc elasticities we do not need to worry about which point is the starting point and which point is the ending point. This benefit
comes at the cost of a more difficult calculation.
If we take the example with:

Price(OLD) = $9, Price(NEW) = $10, QDemand(OLD) = 150, QDemand(NEW) = 110

we will get a percentage change of:

[[QDemand(NEW) - QDemand(OLD)] / [QDemand(OLD) + QDemand(NEW)]]*2
[[110 - 150] / [150 + 110]]*2 = [[-40]/[260]]*2 = -0.1538 * 2 = -0.3077

So we get a percentage change of -0.3077 (or about -31% in percentage terms). If we swap the old and new values, the denominator will be the same, but we will get +40 in the numerator
instead, giving us an answer of +0.3077. When we calculate the percentage change in price, we will get the same values except one will be positive and the other negative. When we calculate our
final answer, we will see that the elasticities will be the same and have the same sign. To conclude this piece, I'll include the formulas so you can calculate the arc versions of price elasticity of
demand, price elasticity of supply, income elasticity, and cross-price demand elasticity. I recommend calculating each of the measures in the step-by-step fashion I detail in the previous articles.
New Formulas - Arc Price Elasticity of Demand
To calculate the Arc Price Elasticity of Demand, we use the formulas:

PEoD = (% Change in Quantity Demanded)/(% Change in Price)
(% Change in Quantity Demanded) = [[QDemand(NEW) - QDemand(OLD)] / [QDemand(OLD) + QDemand(NEW)]]*2
(% Change in Price) = [[Price(NEW) - Price(OLD)] / [Price(OLD) + Price(NEW)]]*2
New Formulas - Arc Price Elasticity of Supply
To calculate the Arc Price Elasticity of Supply, we use the formulas:

PEoS = (% Change in Quantity Supplied)/(% Change in Price)
(% Change in Quantity Supplied) = [[QSupply(NEW) - QSupply(OLD)] / [QSupply(OLD) + QSupply(NEW)]]*2
(% Change in Price) = [[Price(NEW) - Price(OLD)] / [Price(OLD) + Price(NEW)]]*2
New Formulas - Arc Income Elasticity of Demand
To calculate the Arc Income Elasticity of Demand, we use the formulas:

IEoD = (% Change in Quantity Demanded)/(% Change in Income)
(% Change in Quantity Demanded) = [[QDemand(NEW) - QDemand(OLD)] / [QDemand(OLD) + QDemand(NEW)]]*2
(% Change in Income) = [[Income(NEW) - Income(OLD)] / [Income(OLD) + Income(NEW)]]*2
New Formulas - Arc Cross-Price Elasticity of Demand of Good X
To calculate the Arc Cross-Price Elasticity of Demand of good X, we use the formulas:

CPEoD = (% Change in Quantity Demanded of X)/(% Change in Price of Y)
(% Change in Quantity Demanded of X) = [[QDemand(NEW) - QDemand(OLD)] / [QDemand(OLD) + QDemand(NEW)]]*2
(% Change in Price of Y) = [[Price(NEW) - Price(OLD)] / [Price(OLD) + Price(NEW)]]*2
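A quick sketch of these formulas in Python (the function names are mine, not from the formulas above):

def pct_change_arc(new, old):
    # Midpoint (arc) percentage change: symmetric in old vs. new
    return (new - old) / ((old + new) / 2.0)

def arc_elasticity(q_new, q_old, p_new, p_old):
    # Works for any of the elasticities above: pass quantity and price,
    # quantity and income, or quantity of X and price of Y.
    return pct_change_arc(q_new, q_old) / pct_change_arc(p_new, p_old)

# The demand example from earlier: 150 units at $9, 110 units at $10.
print(arc_elasticity(110, 150, 10, 9))   # about -2.92
# Swapping which point is "old" and which is "new" gives the same answer:
print(arc_elasticity(150, 110, 9, 10))   # about -2.92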
Notes and Conclusion
Keep in mind that for all of these formulas it doesn't matter what you use as the "old" and as the "new" value, just as long as the "old" price is the one associated with the "old" quantity. You
could call the points A and B or 1 and 2 if you like, but old and new works just as well.
So now you can calculate elasticity using a simple formula as well as using the arc formula. In a future article, we will look at using calculus to compute elasticities.
If you'd like to ask a question about the elasticities, microeconomics, macroeconomics or any other topic or comment on this story, please use the feedback form.
Quarter Circles
Date: 12/20/96 at 22:07:38
From: Jason k Cheung
Subject: Functions
How would I make a function that would graph a quarter of a circle in
the quadrants II and IV?
Date: 12/21/96 at 03:33:24
From: Doctor Pete
Subject: Re: Functions
One possible way to do this is as follows: We know we can draw a
semicircle in quadrants I and II with:
f(x) = +Sqrt[1-x^2]
(i.e., the postive square root of 1 minus x squared). This gives the
correct graph in quadrant II, but we would like to "flip" the graph
about the x-axis for x > 0. That is, if g(x) is our desired function,
then g(x) = -f(x) for x > 0, and g(x) = f(x) for x < 0.
To do this, consider the function h(x) = |x|/x, that is, the absolute
value of x divided by x. For x > 0, this is 1, and for x < 0, this
is -1. (Question: What is h(0)? This is of interest, because your
answer to this will affect what g(0) will be.)
Now, I leave it to you to figure out how to apply h(x) to f(x) to get
-Doctor Pete, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Date: 12/24/96 at 15:10:41
From: Jason k Cheung
Subject: Re: Functions
Is this what you were thinking about?:
This would give me a graph in quadrant II.
This would graph quadrant IV.
Date: 12/28/96 at 20:09:59
From: Doctor Pete
Subject: Re: Functions
Hmm.... while I might agree that the equations you have given are
technically correct, I think I should point out a few things.
First, note that Sqrt[-x]^2 = Sqrt[x]^2 = |x|, the absolute value
of x. Then Sqrt[-x]^4 = Sqrt[x]^4 = |x|^2 = x^2, because the square
of a number is always non-negative. So there is no sense in
differentiating between the two; that is, for all values of x:
Sqrt[1-Sqrt[-x]^4] = Sqrt[1-Sqrt[x]^4] = Sqrt[1-x^2]
Second, the purpose of introducing the function h(x) = |x|/x is to try
to do what you have done by splitting up g(x) into two cases. (I will
call g(x) the function that we wish to obtain, and f(x) = Sqrt[1-x^2]
as I have defined in the previous message.) In order to get the
desired function, we could have simply written:
        /  f(x) =  Sqrt[1-x^2] , x < 0
g(x) = |
        \ -f(x) = -Sqrt[1-x^2] , x > 0
This definition is not always desirable. You can use h(x) to do what
the minus sign is doing in the second case, because h(x) = 1 when
x > 0, and -1 when x < 0. So -h(x) = -1 when x > 0, and -h(x) = 1
when x < 0. This gives you the correct sign, so in order to merge the
two cases into one, we take the product -f(x)h(x), which gives:
g(x) = -f(x)h(x) = -Sqrt[1-x^2] |x|/x
This is the function we're looking for. If x < 0, then |x|/x = -1,
and g(x) = Sqrt[1-x^2]. If x > 0, then |x|/x = 1, and
g(x) = -Sqrt[1-x^2].
Also note that g(x), the function we were looking for, is
discontinuous at 0, and is defined on the domain [-1,0) U (0,1].
f(x) = Sqrt[1-x^2], however, is defined, and continuous, on the entire
interval [-1,1]. So obtaining g(x) from f(x) requires the
introduction of a function h(x) that has a discontinuity at 0.
Indeed, h(x) = |x|/x is not continuous at 0, however you may wish to
define h(0).
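If you want to check the answer numerically, here is a tiny Python sketch (any language with a square root would do):

import numpy as np

def g(x):
    # Quarter circles in quadrants II and IV: g(x) = -Sqrt[1-x^2] * |x|/x
    return -np.sqrt(1.0 - x**2) * np.abs(x) / x

for x in [-1.0, -0.75, -0.5, -0.25, 0.25, 0.5, 0.75, 1.0]:  # skip x = 0
    print("%+.2f  %+.4f" % (x, g(x)))
# For x < 0 the values are positive (quadrant II);
# for x > 0 they are negative (quadrant IV).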
-Doctor Pete, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Learn Math (10 tips)
One of the most common things I hear from my programming buddies is, “I wish I knew more math.” I think I tend to hear this from my friends because when I was an undergrad earning my BS in CS I took
my time to also pick up a BS in math. Most of my friends know this and so they tend to ask me questions about math and learning it.
Well guess what? Even with a four year degree in math, I feel the same way. Oh, perhaps I’d say it differently, “I wish I could remember and pickup more math” but it’s the exact same feeling. As
programmers, math is very important and we know that even when we’re just writing SQL statements or producing XML transformations.
So how do we learn more of this math? Well, it’s going to take time and commitment and I admit I don’t have it all figured out, yet, but I’ve been spending some brain time on it trying to figure ways
to get there.
There are several important ingredients you’ll need, but before I dive into that let me explain some of the ways my learning style changed as I studied math in my undergrad. I consider these changes
you might try to bring about in yourself so that you’ll pick up the math more easily.
1. Learn to learn. Simply put, reading it and thinking about it are not enough. As an aside, if learning the math doesn’t feel difficult, you’re probably not actually learning it. To really learn
something and expand your mind is hard work and you’ll get tired, frustrated and discouraged. Just remember, your mind is similar to a muscle. You have to exercise it and let it rest and recover.
Exercise is hard work, but the benefits are worth it.
2. Definitions are probably the most important thing you’ll ever learn in math. See point 1 for my definition of “learn”. When you really get the definition of something, many theorems just make
sense and after you get point 4, proving them is just a matter of explaining to someone why the theorem makes sense to you. Of course not all theorems are this way, and you should definitely try
to capture any clever reasoning behind named proofs. These patterns of reasoning will resurface in your later proofs.
3. Write it all out, everytime. Primarily I mean definitions and statements of theorems. Also, think about it while you write it. Stop and pause, think about where you could take short cuts in
phrasing the definition of something. Does that change the meaning? Why do you suppose the definition bothers to include unique identity instead of just identity? Are there theorems you’ve
learned about this object that would fail to hold?
4. Work the exercises, and write out all the details of the proof, even if the proof is given in the book and you think the omitted details are "obvious". Learning to phrase the "obvious"
mathematically is immensly important. Additionally, you may discover something by exploring the “obvious”. Entire branches of mathematics have been discovered by people exploring the “obvious”.
See the history of non-Euclidean geometry for background (some info here: http://universe.sonoma.edu/activities/geometry.html).
5. Infinity, sequences and real numbers (aka real analysis), induction, and group theory. When you learn each of these concepts your understanding of programming will actually improve. You’ll see
things in new ways. Even things in nature. There’s probably a million other things that you can learn to expand your mind in the same way, but I learned the above list in that order and it
changed me and my way of seeing the world. I’m confident it will do the same for you because I witnessed similar changes in friends that I studied with.
Now, on to the things you’ll need to in order to be more successfull in your studies.
6. You need good text books. Text books are not created equally. What makes a text book good is hard to define. Basically, the definitions need to be good and progress nicely. The book needs to
start where you are and lead you where you want to go. Some of the theorems and exercises should contain typos and incorrect solutions. This is highly counterintuitive but very important if you
want to learn the subject.
Since you’re trying to learn the material, not just absorb it, you’re going to be hand checking the theorem proofs and working the exercises until the logic makes sense. Here is one possible
outcome I’ve experienced several times with flawed exercises: When you come to one that has a flaw you might at first go with it. Then you might work it a bit more and realize you’re not getting
it. This stuff is confusing. But then you think, “wait, I made it this far, why is this stuff confusing?” Back to the definition you go. Do you really get the definition? Yeah, I guess so but if
that’s the case, then this is really a bad way to put it. Oh wait, it’s not just a bad way, but it should be this way. Oh! I get it, and that theorem over there seems so trivial now… Another
possibility is that you think you have found a typo, but even after “fixing” it, it’s still not quite right. In this case, often you’re wrong and you have to figure it out! Doesn’t this sound
familiar? It’s like this, you read it, interpreted it, wrote it down and now it doesn’t work like you wanted and you have to spot the problem and figure out the solution. Oh, I just described
debugging something you programmed. Learning math can actually improve your programming without teaching you anything specific to the code you’re working on. One last comment on the text book. I
don’t know how to get good text book recommendations. Perhaps amazon reviews are okay, but I want to trust the reviewers and know that they will recognize a good text book before I trust them
here. Perhaps it would work to find a quality researcher and look at their books.
7. People to talk to. You need friends that you can talk to about the ideas you’re learning. Now that I don’t have classrooms full of other math undergrads to talk with I don’t know what to do about
this aspect. Perhaps I need to go back to school or maybe I can find forums online. I should look into this and let you know what I learn.
8. Subjects that you care about. Ever dreamed of cracking some crypto system? Ever wondered why a Turing machine is capable of computation? What is type inference? How does the four color theorem
work? I’m sure you’ve wondered one of these or similar questions. But perhaps you don’t care at all about Fermat’s last theorem. So just to state the obvious, remember to pick something that has
a pay off for you emotionally. It fills a curiousity or dream. You’ll need this as motivation when the learning gets hard.
9. Learn time budgeting. The best time budgeting technique I have picked up is this: Take a calendar that gives your week in tabular form. Each day is a column and the days are split into rows about
30 minutes per row. Whenever you have something that you’d normally stick on your TODO list, block out a chunk of time on your calendar. Let’s say you need to go to work, wash the dishes, cook
dinner, take out the trash and study your math. Well, you start by blocking out the time for work, then fill in 30 min for chores and an hour to cook and eat dinner. Suddenly you see that you
really can’t afford to spend 2 hours with your math book tonight. Maybe you block in 30 min with your book. Even though you want to get through chapter 3 before you crack open your other book you
won’t be able to do it this week. But that’s okay, because you’ll be making lots of these appointments with your book and eventually you’ll get through it.
Now if you do this for each day of your week and try to live out the schedule exactly as it’s written you’ll start to feel like a robot and there is a solution to that feeling. I’ll explain with
an example. On Tuesday you’re supposed to be cleaning the bath tub at 6pm but you really want to watch a movie (scheduled for Wednesday). So here is the rule: You can change activities on your
calendar as long as you follow some law of conservation. In this case, you could swap the days of the activities. Just live up to it. There’s no point in bothering with this sort of time
budgeting if you can’t keep the promises you make with your calendar. In fact, you probably won’t get much done of what you mean to.
10. Change your life gradually. The most lasting ways of changing your life generally happen gradually — not over night. So work up from one math text a year to one every 3 months. Start at something
reasonable and increase the load as your interest increases. Sink or swim works for some things, but I find it unlikely that it will work here.
Cross product and Heron's formula
Cross product and Heron's formula
Why, when I use the cross product to get the area of a triangle, do I get the right answer, but when I use Heron's formula I get an answer that is off by 11%?
The question is found below;
17. Given the points A(1, 2, 0), B(0, 1, 0) and C(1, 0, 2), determine the area of triangle ABC.
The answer is 3^0.5 . With Heron's formula I get 1.93649. Why do they conflict?
EDIT: Never mind, I forgot to find BC. LoL, I had AC, CA, and AB, but no BC. I knew something was up. :P
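For anyone who wants to verify both methods numerically, a quick Python check (they agree once all three sides, including BC, are computed):

import numpy as np

A = np.array([1.0, 2.0, 0.0])
B = np.array([0.0, 1.0, 0.0])
C = np.array([1.0, 0.0, 2.0])

# Cross-product area: half the norm of AB x AC
area_cross = 0.5 * np.linalg.norm(np.cross(B - A, C - A))

# Heron's formula: needs AB, AC, and BC
ab = np.linalg.norm(B - A)
ac = np.linalg.norm(C - A)
bc = np.linalg.norm(C - B)
s = (ab + ac + bc) / 2.0
area_heron = np.sqrt(s * (s - ab) * (s - ac) * (s - bc))

print(area_cross, area_heron, np.sqrt(3.0))  # all about 1.7321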
diyAudio - golden ratio. I dont get it
murphythecat8 4th December 2012 02:58 AM
golden ratio. I dont get it
My dimension requirement for my bass (30 to 300 Hz) enclosure is that the box is no more than 38 cm wide and 82 cm high.
I want to have around 110 litres.
I don't know how to apply the golden ratio; can anyone help?
planet10 4th December 2012 03:10 AM
38 x 61.5 x 99.5 is golden ratio. If 18 mm ply, gross volume would be 191 litres,
If we start with 110 litre add 5 litre for bracing and back of driver then:
115 litre = 115,000 cm^3. Take the cube root = 48.6
48.6 x 1.618 = 78.7
48.6 / 1.618 = 30
So interior dimensions of 30 x 48.6 x 78.7 cm
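(Added illustration, not part of the original thread: planet10's method as a short Python sketch, so any target volume can be plugged in. The 5-litre allowance for bracing and driver displacement is his rule of thumb.)

PHI = 1.618  # golden ratio

def golden_ratio_box(net_litres, allowance_litres=5):
    """Interior dimensions (cm) in golden-ratio proportions for a target volume."""
    volume_cm3 = (net_litres + allowance_litres) * 1000  # litres -> cm^3
    mid = volume_cm3 ** (1 / 3)        # cube root gives the middle dimension
    return mid / PHI, mid, mid * PHI   # width : depth : height = 1 : phi : phi^2

w, d, h = golden_ratio_box(110)
print(f"{w:.1f} x {d:.1f} x {h:.1f} cm")  # ~30.0 x 48.6 x 78.7 cm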
speaker dave 4th December 2012 10:27 AM
I don't get it either.
The golden ratio was originally for visually pleasing proportions. Put proper absorption in the cabinet (fiberglass) and don't worry about the dimension ratios.
DeonC 4th December 2012 10:33 AM
Originally Posted by planet10 (Post 3269455)
38 x 61.5 x 99.5 is golden ratio. If 18 mm ply, gross volume would be 191 litres,
If we start with 110 litre add 5 litre for bracing and back of driver then:
115 litre = 115,000 cm^3. Take the cube root = 48.6
48.6 x 1.618 = 78.7
48.6 / 1.618 = 30
So interior dimensions of 30 x 48.6 x 78.7 cm
Thanks, Dave. I did not start this thread, but find the info very useful. That method is so simple I wish I had thought about it. :D
Mark.Clappers 4th December 2012 10:35 AM
The golden ratio spreads the internal standing waves in the frequency domain (i.e., they don't overlap, neither the fundamentals nor the harmonics). This is useful for mid-range but not so much for bass, as the
wavelengths involved are usually much larger than the enclosure size.
balerit 4th December 2012 10:51 AM
Originally Posted by speaker dave (Post 3269809)
I don't get it either.
The golden ratio was originally for visually pleasing proportions. Put proper absorption in the cabinet (fiberglass) and don't worry about the dimension ratios.
I agree, it's for looks only, just another myth. I use the square root of 2 to proportion my bass reflex boxes. Also remember that when you calculate a box's volume it is the ideal volume, and
everything that goes inside the box takes away from this ideal volume, so you have to compensate for this by making the box larger; my free software does this for you.
Sample - The Book Worm
puppet 4th December 2012 02:10 PM
Originally Posted by speaker dave (Post 3269809)
I don't get it either.
The golden ratio was originally for visually pleasing proportions. Put proper absorption in the cabinet (fiberglass) and don't worry about the dimension ratios.
I've wondered about this, too. If you're just modeling a flat baffle for diffraction purposes (say for an OB), seems like the "golden ratio" doesn't produce an optimal response in sim. Might be fine
to start there though.
speakerdoctor 4th December 2012 02:21 PM
When it comes to room acoustics, use of the GR to position speakers can be helpful.
awkwardbydesign 4th December 2012 02:56 PM
Originally Posted by speaker dave (Post 3269809)
I don't get it either.
The golden ratio was originally for visually pleasing proportions. Put proper absorption in the cabinet (fiberglass) and don't worry about the dimension ratios.
Wasn't it for concert halls? Unlike Victorian structures, such as the Albert Hall.
mondogenerator 4th December 2012 05:23 PM
I always thought it was observed in ancient Egyptian design, adopted by the Greeks, and used visually to proportion windows, paintings etc. I've never tested whether it really works in acoustics. My
favourite adaptation of GR is to halve the longest length, calculating for twice the volume. It's a nice way to get a more cube-like shape. Of course standing waves would be more bunched up, but I
never heard a problem. GR is better than guesswork, or breaking the 'rules', e.g. W equal to a third of H, for example.
Can't follow the math in FRM Handbook!! Am I doomed?
I will be taking the FRM exam for Nov 2010 part 1. I purchased the FRM Handbook (Jorion) and started reading the first chapter on Quantitative Analysis this week. I was only able to get up to
page 7 before being lost in the math. I have no CFA background and haven't used complex math since college 15 years ago, so the math is going over my head. I have a CPA (which doesn't use complex math).
My question is whether my limited math knowledge will be a major hindrance in passing the FRM exam.
The FRM Handbook seems to assume that the reader understands the math symbols and concepts, because it doesn't explain them. Does the exam require strong math skills? Based on the Quant
Analysis chapter of the FRM Handbook, it appears that it does. If so, where do I turn for instruction that will help me understand the equations? Do I need to get a calculus textbook and try
to learn it quickly, or will the BT webinars get basic enough to help me understand the complex equations? Or is there another source that would be helpful for understanding the math
equations? Flipping through the FRM Handbook, it appears that the math will only get worse (since I have only made it up to page 7)!
Any advice would be great.
T.Flockert New Member
I think there is enough time for you to get into the statistical concepts. I would recommend getting a good introductory statistics book.
The concepts used (in Level 1) are in part sophisticated, but you don't have to get deep into those topics. The basic ideas are easy to understand once you get used to the notation, and for
most test questions you just have to know the formulas by heart; you do not need a deep understanding of all the details related to them.
For example: if you know the formula for GARCH(1,1) you should be able to answer most questions related to it, without knowing the application of a Portmanteau test or something like that. Or...
you don't need to get deep into matrix calculus in order to cover the regression-related topics.
Or in a few words: not too hard, enough time --> don't worry
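(For reference, since GARCH(1,1) is mentioned above: the conditional-variance recursion usually tested is

$\sigma_t^2 = \omega + \alpha \epsilon_{t-1}^2 + \beta \sigma_{t-1}^2$

where $\omega$ carries the long-run variance weight, $\alpha$ weights the most recent squared return residual, and $\beta$ weights the previous variance estimate.)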
I totally agree with T.Flockert. The FRM Handbook (IMO) awkwardly presumes some slightly-above-exam-level math. However, the calculus Jorion presumes pays plenty of dividends all the way to the
end (L2); i.e., the first derivative is everywhere. So, for L1, I agree that you might "keep it simple" with a focus on basic math, formulas and the stats in Gujarati 1 to 8. But, thinking ahead
to L2, I do also think that it's worth investing time in intro calculus so that you understand the first partial derivative.
I like your plan to sit L1 in November b/c you can give proper attention to (i) the statistics and (ii) hopefully, just calculus up to derivatives. But T.Flockhert is totally correct about L1: it
is generally the application of math & formulas rather than any advanced math.
So, I would focus on Gujarati & Hull assignments in this respect. Doing Gujarati practice questions gives you most of the stat you'll need; and Hull gives you all the "bond math" practice you'll
need (eg, variations on PV, FV, compounding)
Here are two links:
http://www.bionicturtle.com/learn/article/additional_frm_reading_resources_book/; this contains two of my favorite calculus books but more recently i worked through a Humongous Book of Calculus
Problems and I thought it was fun and very effective: see http://www.bionicturtle.com/forum/viewthread/2463/
Thanks for the reply, T.Flockert and David. I have started going through your Early Bird webinars and have found them very helpful for brushing up on the math and bringing back the concepts I
learned in college. On behalf of everyone who is in a similar situation of being rusty on their math skills, thank you very much for doing the Early Bird webinars!
cpaguy7 -
I am thrilled you find them helpful.
Can I ask a question: do you think math refresher webinar(s) would be helpful, for example, in June/July, as a precursor to Nov 2010 (i.e., after the May 22nd exam)?
... since we didn't get the #2 webinar recorded, I'd be happy to develop an improved quant introduction, if there is demand? Thanks, David
cpaguy - re the math, please see http://www.bionicturtle.com/forum/viewthread/2997/
... I totally forgot about Carol Alexander's Volume I which, IMO, is the best intro to quant finance book. The only caveat is that it is not an especially gentle introduction (e.g., the linear
algebra presumes some prior understanding); it's more like a surprisingly dense intro survey that achieves more than much bigger books ...David
Yes, that would be great if you could do an intro to quant webinar! Since I have found the other Early Birds to be so helpful, I was very disappointed that EB webinar #2 did not get recorded. I'm
sure there are many others who would also benefit greatly from a quant intro. Thanks!
cpaguy - Thank you for your feedback! Suzanne and I just tentatively agreed that we can conduct two (2) quant intro webinars after the L1 exam (e.g., June, July), as this will still allow for two
L1 webinars and two L2 webinars before November .... (I am always interested in making the intros better and more efficient) ... when we lost #2, we were still learning the issues related to going live
(many things can go wrong), but I think we've figured it out now... thanks, David
Great to hear! I also just ordered Carol Alexander's volume 1 that you recommended. Thanks again.
williamhsu New Member
Hi David
I totally forgot the partial derivatives I learned in college. What is the quickest way to relearn those concepts for the FRM exam? Thanks for all the help.
Hi William,
I think YouTube is quickest; these are my two favorite calculus teachers (I have other sources for in-depth treatments ... but YouTube is fastest, imo):
□ http://integralcalc.com/ Krista's youtube channel @ https://www.youtube.com/user/TheIntegralCALC | {"url":"https://www.bionicturtle.com/forum/threads/cant-follow-the-math-in-frm-handbook-am-i-doomed.2584/","timestamp":"2014-04-16T10:53:09Z","content_type":null,"content_length":"54027","record_id":"<urn:uuid:5201d96b-a201-4d52-8a21-ceffac55c1de>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00453-ip-10-147-4-33.ec2.internal.warc.gz"} |
Stats: Counting Techniques
Fundamental Theorems
Every integer greater than one is either prime or can be expressed as a unique product of prime numbers.
Every polynomial function in one variable of degree n > 0 has at least one real or complex zero.
Linear Programming
If there is a solution to a linear programming problem, then it will occur at a corner point or on a boundary between two or more corner points
Fundamental Counting Principle
In a sequence of events, the total possible number of ways all events can performed is the product of the possible number of ways each individual event can be performed.
The Bluman text calls this multiplication principle 2.
If n is a positive integer, then
n! = n (n-1) (n-2) ... (3)(2)(1)
n! = n (n-1)!
A special case is 0!
0! = 1
A permutation is an arrangement of objects without repetition where order is important.
Permutations using all the objects
A permutation of n objects, arranged into one group of size n, without repetition, and order being important is:
[n]P[n] = P(n,n) = n!
Example: Find all permutations of the letters "ABC"
ABC ACB BAC BCA CAB CBA
Permutations of some of the objects
A permutation of n objects, arranged in groups of size r, without repetition, and order being important is:
[n]P[r] = P(n,r) = n! / (n-r)!
Example: Find all two-letter permutations of the letters "ABC"
AB AC BA BC CA CB
Shortcut formula for finding a permutation
Assuming that you start a n and count down to 1 in your factorials ...
P(n,r) = first r factors of n factorial
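(A quick sanity check, added here as an illustration; Python's standard library is enough.)

from itertools import permutations
from math import factorial

def P(n, r):
    """Permutations of n objects taken r at a time: n! / (n - r)!"""
    return factorial(n) // factorial(n - r)

print(P(3, 3), P(3, 2))  # 6 6
print([''.join(p) for p in permutations('ABC', 2)])
# ['AB', 'AC', 'BA', 'BC', 'CA', 'CB'] -- the six two-letter permutations above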
Distinguishable Permutations
Sometimes letters are repeated and all of the permutations aren't distinguishable from each other.
Example: Find all permutations of the letters "BOB"
To help you distinguish, I'll write the second "B" as "b"
BOb BbO OBb ObB bBO bOB
If you just write "B" as "B", however ...
BOB BBO OBB OBB BBO BBO
There are really only three distinguishable permutations here.
BOB BBO OBB
If a word has N letters, k of which are unique, and you let n1, n2, n3, ..., nk be the frequencies of each of the k letters, then the total number of distinguishable permutations is given by:

N! / (n1! n2! n3! ... nk!)
Consider the word "STATISTICS":
Here are the frequency of each letter: S=3, T=3, A=1, I=2, C=1, there are 10 letters total
                      10!           10*9*8*7*6*5*4*3*2*1
Permutations = ----------------- = ---------------------- = 50400
                3! 3! 1! 2! 1!        6 * 6 * 1 * 2 * 1
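(The same computation as an added Python illustration:)

from math import factorial
from collections import Counter

def distinguishable(word):
    """N! divided by the factorial of each letter's frequency."""
    result = factorial(len(word))
    for freq in Counter(word).values():  # STATISTICS -> S:3 T:3 A:1 I:2 C:1
        result //= factorial(freq)
    return result

print(distinguishable('STATISTICS'))  # 50400
print(distinguishable('BOB'))         # 3 -- matches BOB, BBO, OBB above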
You can find distinguishable permutations using the TI-82.
A combination is an arrangement of objects without repetition where order is not important.
Note: The difference between a permutation and a combination is not whether there is repetition or not -- there must not be repetition with either, and if there is repetition, you can not use the
formulas for permutations or combinations. The only difference in the definition of a permutation and a combination is whether order is important.
A combination of n objects, arranged in groups of size r, without repetition, and order not being important is:
[n]C[r] = C(n,r) = n! / ( (n-r)! * r! )
Another way to write a combination of n things, r at a time, is the binomial coefficient notation: C(n,r) may be read as "n choose r" and written with n placed above r inside large parentheses.
Example: Find all two-letter combinations of the letters "ABC"
AB = BA AC = CA BC = CB
There are only three two-letter combinations.
Shortcut formula for finding a combination
Assuming that you start a n and count down to 1 in your factorials ...
C(n,r) = first r factors of n factorial divided by the last r factors of n factorial
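(Again as an added illustration, the matching check for combinations; note how order no longer matters.)

from itertools import combinations
from math import factorial

def C(n, r):
    """Combinations of n objects taken r at a time: n! / ((n - r)! r!)"""
    return factorial(n) // (factorial(n - r) * factorial(r))

print(C(3, 2))  # 3
print([''.join(c) for c in combinations('ABC', 2)])
# ['AB', 'AC', 'BC'] -- BA, CA, CB don't appear because order doesn't matter
print(C(10, 4) == C(10, 6))  # True: the symmetry C(n,r) = C(n,n-r)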
Pascal's Triangle
Combinations are used in the binomial expansion theorem from algebra to give the coefficients of the expansion (a+b)^n. They also form a pattern known as Pascal's Triangle.
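(The triangle itself is shown as an image on the original page; its first eight rows are:)

            1
           1 1
          1 2 1
         1 3 3 1
        1 4 6 4 1
      1 5 10 10 5 1
    1 6 15 20 15 6 1
  1 7 21 35 35 21 7 1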
Each element in the table is the sum of the two elements directly above it. Each element is also a combination: the n value is the number of the row (start counting at zero) and the r value is the
position of the element in the row (start counting at zero). That makes the 20 in the next-to-last row C(6,3): it's in row #6 (the 7th row) and position #3 (the 4th element).
Pascal's Triangle illustrates the symmetric nature of a combination. C(n,r) = C(n,n-r)
Example: C(10,4) = C(10,6) or C(100,99) = C(100,1)
Shortcut formula for finding a combination
Since combinations are symmetric, if n-r is smaller than r, then switch the combination to its alternative form and then use the shortcut given above.
C(n,r) = first r factors of n factorial divided by the last r factors of n factorial
You can use the TI-82 graphing calculator to find factorials, permutations, and combinations.
Tree Diagrams
The first event appears on the left, and then each sequential event is represented as branches off of the first event.
A tree diagram for flipping two coins (shown as an image on the original page) branches heads/tails for the first coin, then heads/tails again for the second. The final outcomes are obtained by following each branch to its conclusion; from top to bottom they are:
HH HT TH TT
Fairview, TX Algebra Tutor
Find a Fairview, TX Algebra Tutor
...The first day of school is rapidly approaching! I taught Algebra I for two years and ESL Algebra I for one year in surrounding school districts. I have also tutored several students over the
past two years in Algebra I, helping them through the school year. I taught Algebra II for 2 years at a su...
10 Subjects: including algebra 1, algebra 2, geometry, GED
...I am a physicist with both a bachelor's and a master's degree in physics. As a physicist, I use math as a tool to model or interpret data, and I tutor students regularly on
algebra (polynomials, functions, graphing, factoring).
25 Subjects: including algebra 1, algebra 2, chemistry, physics
...I taught courses at Richland College and Collin County Community College. My specialties are Physics I and Physics II, both algebra and calculus based. I also have experience with laboratory
experiments and writing lab reports.
8 Subjects: including algebra 1, algebra 2, calculus, physics
...Recently, I completed a contract position in math content development for an educational technology company. Prior to that I managed and tutored at an SAT/ACT company in Plano. Currently, I
tutor and work part time as an instructor in GRE/GMAT Quantitative at UT Dallas while also exploring careers in educational technology.
11 Subjects: including algebra 1, geometry, GRE, SAT math
...I taught Human Anatomy and Physiology here at UNT as a TA for 2 years while I was a Grad student, lecturing for about an hour each class, then going around the room to help students, as groups
and individually, study the models for an additional 2 hours. I made my own quizzes but lectured follow...
14 Subjects: including algebra 2, algebra 1, reading, chemistry
A polyhedron has 10 faces and 24 edges. What is the number of vertices of the polyhedron? 12 16 10 14
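(A worked solution, added here: by Euler's formula for polyhedra, V - E + F = 2, so V = 2 + E - F = 2 + 24 - 10 = 16. The answer is 16.)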
quick ring questions
May 16th 2010, 11:17 PM #1
Hello, this question came up in my exam and I was wondering if I got it right!
TRUE or FALSE:
Every field of 16 elements is commutative.
I said TRUE, since if we take the field minus the 0 element, we have a group of units of order 15, which is cyclic (and hence abelian) by Sylow theory. So if we put the zero element back in,
clearly that commutes with everything too. Is this correct?
I also said that R[X] is a subring of C[X], i.e. real polynomials are a subring of complex polynomials. Is this correct? Thanks for any advice.
Do you mean field, or do you mean division ring? A field is commutative by definition... However, you are correct: the group of units has order 15, and there is only one group of order 15, $C_{15}$.
Yes, $\mathbb{R}[X]$ is a subring of $\mathbb{C}[X]$, as every real number is a complex number.
Ahh... I may have got it wrong: the question actually says any RING with 16 elements is commutative, so the group of units is not necessarily of order 15. Is there a counterexample for this?
yes, the ring of $2 \times 2$ matrices with entries from $\mathbb{Z}/2\mathbb{Z}$ has 16 elements and it's not commutative.
Generation of Fibonacci Sequence using Recursion.
Write a recursive function to obtain the first 25 numbers of a Fibonacci sequence. In a Fibonacci sequence the sum of two successive terms gives the third term. Following are the first few terms of
the Fibonacci sequence: 1 1 2 3 5 8 13 21 34 55 89 ...
#include <stdio.h>

int fibonacci(int prev_number, int number);

int main(void)
{
    printf("Following are the first 25 numbers of the Fibonacci series:\n");
    printf("1");                 /* first term printed directly, to avoid complexity */
    fibonacci(0, 1);
    return 0;
}

int fibonacci(int prev_number, int number)
{
    static int i = 1;            /* static: value survives recursive calls; i is not 0
                                    because the first term is already counted in main */
    int fibo;
    if (i == 25) {
        printf("\ndone");        /* stop after 25 numbers */
        return 0;
    }
    fibo = prev_number + number; /* each term is the sum of the two before it */
    prev_number = number;        /* important steps: shift the pair forward */
    number = fibo;
    printf("\n%d", fibo);
    i++;                         /* increment counter */
    return fibonacci(prev_number, number);  /* recursion */
}
33 comments:
Thanks for this example!
NextDawn C and C++ Tutorials
Thanks. :)
The site you have provided is nice too. :)
it should be more clear
IF THE PROGRAM USED POINTER CONCEPTS THEN IT 'LL BE MOST USEFUL
Thanks a lot!
Thanks a lot ... it seems to be pretty clear, and could use some more examples if possible.
Also, I have some info about the same topic on my blog.
thanks a lot sir for your help
Thanks for the idea
thnx... datz gud 1,
Thank u
many many thanx for example....please continue providing more examples in functions
thanks.. very easy coding for this program..easily understable..
many many thanks for the example
#include <stdio.h>

int fabo(int);

int main()
{
    int result = 0, a = 1, b = 1, c, i;  /* a and b are declared but unused */
    printf("enter upto which term you want to generate the series: ");
    scanf("%d", &c);                     /* read how many terms to print */
    for (i = 1; i <= c; i++) {
        result = fabo(i);
        printf(" %d", result);
    }
    return 0;
}

int fabo(int n)
{
    if (n == 1)
        return 1;
    if (n == 2)
        return 1;
    return (fabo(n - 1) + fabo(n - 2));
}
Rajat its good...
thanks you so much this helped me a lot.. :D
Thanks for the example, very well explained. In fact, this site should be the first one opened when searching Google.
the variable a=1 and b=1????is for???
Realy good job, i found your blog from google search, I impressed, i bookmark this blog for future
thanks , i think c is simple and easy language...
Hi, nice program. Use the concept of recursion instead of loops. Thanks, it really helped me.
Types of Data & Measurement Scales: Nominal, Ordinal, Interval and Ratio
There are four measurement scales (or types of data): nominal, ordinal, interval and ratio. These are simply ways to categorize different types of variables. This topic is usually discussed in the
context of academic teaching and less often in the "real world." If you are brushing up on this concept for a statistics test, thank a psychology researcher named Stanley Stevens for coming up
with these terms. These four measurement scales (nominal, ordinal, interval, and ratio) are best understood with examples, as you'll see below.
Let's start with the easiest one to understand. Nominal scales are used for labeling variables, without any quantitative value. "Nominal" scales could simply be called "labels." Typical
examples are labels like gender, hair color, or where you live. Notice that the categories in such scales are mutually exclusive (no overlap) and none of them have any numerical significance. A good way to remember all of
this is that "nominal" sounds a lot like "name," and nominal scales are kind of like "names" or labels.
Note: a sub-type of nominal scale with only two categories (e.g. male/female) is called “dichotomous.” If you are a student, you can use that to impress your teacher.
Continue reading about types of data and measurement scales: nominal, ordinal, interval, and ratio…
With ordinal scales, it is the order of the values that is important and significant, but the differences between them are not really known. Take a satisfaction rating as an example: we
know that a #4 is better than a #3 or #2, but we don't know, and cannot quantify, how much better it is. For example, is the difference between "OK" and "Unhappy" the same as the difference between
"Very Happy" and "Happy?" We can't say.
Ordinal scales are typically measures of non-numeric concepts like satisfaction, happiness, discomfort, etc.
“Ordinal” is easy to remember because is sounds like “order” and that’s the key to remember with “ordinal scales”–it is the order that matters, but that’s all you really get from these.
Advanced note: The best way to determine central tendency on a set of ordinal data is to use the mode or median; the mean cannot be defined from an ordinal set.
Interval scales are numeric scales in which we know not only the order, but also the exact differences between the values. The classic example of an interval scale is Celsius temperature because the
difference between each value is the same. For example, the difference between 60 and 50 degrees is a measurable 10 degrees, as is the difference between 80 and 70 degrees. Time is another good
example of an interval scale in which the increments are known, consistent, and measurable.
Interval scales are nice because the realm of statistical analysis on these data sets opens up. For example, central tendency can be measured by mode, median, or mean; standard deviation can also be
Like the others, you can remember the key points of an “interval scale” pretty easily. ”Interval” itself means “space in between,” which is the important thing to remember–interval scales not only
tell us about order, but also about the value between each item.
Here’s the problem with interval scales: they don’t have a “true zero.” For example, there is no such thing as “no temperature.” Without a true zero, it is impossible to compute ratios. With
interval data, we can add and subtract, but cannot multiply or divide. Confused? Ok, consider this: 10 degrees + 10 degrees = 20 degrees. No problem there. 20 degrees is not twice as hot as 10
degrees, however, because there is no such thing as “no temperature” when it comes to the Celsius scale. I hope that makes sense. Bottom line, interval scales are great, but we cannot calculate
ratios, which brings us to our last measurement scale…
Ratio scales are the ultimate nirvana when it comes to measurement scales because they tell us about the order, they tell us the exact value between units, AND they also have an absolute zero–which
allows for a wide range of both descriptive and inferential statistics to be applied. At the risk of repeating myself, everything above about interval data applies to ratio scales + ratio scales
have a clear definition of zero. Good examples of ratio variables include height and weight.
Ratio scales provide a wealth of possibilities when it comes to statistical analysis. These variables can be meaningfully added, subtracted, multiplied, divided (ratios). Central tendency can be
measured by mode, median, or mean; measures of dispersion, such as standard deviation and coefficient of variation can also be calculated from ratio scales.
In summary, nominal variables are used to “name,” or label a series of values. Ordinal scales provide good information about the order of choices, such as in a customer satisfaction survey.
Interval scales give us the order of values + the ability to quantify the difference between each one. Finally, Ratio scales give us the ultimate–order, interval values, plus the ability to
calculate ratios since a “true zero” can be defined.
That's it! I hope this explanation is clear and that you now understand the four types of data measurement scales: nominal, ordinal, interval, and ratio!
11 thoughts on “Types of Data & Measurement Scales: Nominal, Ordinal, Interval and Ratio”
1. Time is in fact a ratio scale.
20 seconds is twice as long as 10 seconds. You can multiply and divide time. The absolute 0 doesn't have to be attainable for the scale to be ratio. To borrow from your example: there is no such
thing as "no height", yet you've classified height as ratio.
2. Thanks for this informative text. Now I became clearer between these four terms.
3. Thanks for that brilliantly written info. Really helped clear all of my confusion regarding scales, especially the difference between interval and ratio. Thanks again.
4. Thanks this helped me a lot!
Thank you so so much. I'm doing a BA in Psychotherapy and one of our modules is Psychology, so we only touch on it in one class; I'm not au fait at all. You have explained to me in 10 minutes what
I could not understand from our lecturer in a 2-hour-long lecture. I feel more confident about the exam for this module next Monday!
6. Thanks so much,since now i understand those scales especial to differentiate them.
7. Very formative article, thanks to author for such a great job!
8. Brilliant article though, however I had one doubt regarding oil prices in exact USD figure over a monthly period. On which scale should these values lie. Appreciate your inputs.
9. Difficult things made really simple and easy to understand.
10. very helpful | {"url":"http://www.mymarketresearchmethods.com/types-of-data-nominal-ordinal-interval-ratio/","timestamp":"2014-04-20T10:46:55Z","content_type":null,"content_length":"56186","record_id":"<urn:uuid:3bc458a2-d2b0-4762-9f8e-33ca13d535bb>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00537-ip-10-147-4-33.ec2.internal.warc.gz"} |
The puzzle in last Saturday's edition of Le Monde is made of two parts: given a 10×10 grid, what is the maximum number of nodes one can highlight before creating a parallelogram with one side parallel to one of the axes of the grid? What is the maximum number of nodes one can highlight before
to one of the axes of the grid? What is the maximum number of nodes one can highlight before
New Le Monde puzzle
When I first read Le Monde puzzle this weekend, I thought it was even less exciting than the previous one: find and , such that is a multiple of . The solution is obtained by brute-force checking
through an R program: and then the next solution is (with several values for N). However, while
Robin Ryder started his new blog with his different solutions to Le Monde puzzle of last Saturday (about the algebraic sum of products…), solutions that are much more elegant than my pedestrian
rendering. I particularly like the one based on the Jacobian of a matrix! (Robin is doing a postdoc in Dauphine and CREST—under my | {"url":"http://www.r-bloggers.com/tag/le-monde/page/5/","timestamp":"2014-04-21T07:23:51Z","content_type":null,"content_length":"28561","record_id":"<urn:uuid:8c1cda4d-b970-4080-821c-ae5e31e61114>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00153-ip-10-147-4-33.ec2.internal.warc.gz"} |
In which type of relationship will the graph never pass through the coordinate pair (0,0) 1.linear 2.quadratic 3.inverse 4.direct
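(A worked answer, added here: an inverse relationship, y = k/x with k nonzero, can never pass through (0,0), since x = 0 is not even in its domain. A direct variation y = kx always passes through the origin, and linear and quadratic graphs can. So the answer is 3, inverse.)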
y = -2375x + 18000: at what value of x would y be less than 1400?
Is that 14000, or are you sure about 1400?

Set y to 1400 and solve for x: 1400 = -2375x + 18000, so 2375x = 16600 and x ≈ 7. Since the slope is negative, x has to be greater than this value (if you draw the graph you will see).

When will y be 1400? First of all, let y = 1400: 1400 = -2375x + 18000, so 1400 - 18000 = -2375x and x ≈ 6.99 (approximately 7). So at x ≈ 6.99, y will be 1400, and for larger x, y < 1400.

Maybe I am wrong... satellite73 may correct me.

You could also arbitrarily plug in points greater than 7 and less than 7 to check the solution.

mathslover, the answer would round up anyway, even at 2 significant digits. Anyway, I already posted the solution.

Thanks.

my text book gives x=8
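(A quick check, added outside the thread: x = 16600/2375 ≈ 6.99, and y(7) = 18000 - 2375(7) = 1375 < 1400, so x = 7 already gives y below 1400; the textbook's x = 8 also works but is not the smallest whole-number value.)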
Canvas Network
Precalculus Algebra
Ball State University
May 12, 2013 to Apr 26, 2014
Cost per enrollment: Free
Enrollment Closed
• Provides video lectures
• Some of your work will be assessed by a content expert
• Provides opportunities to interact with the instructor or students
• Uses discussion forums
• Requires the purchase of a textbook or other course materials
• Contains external social networking participation or elements
• Some of your work will be assessed by peers
• You will be expected to work with a group of other students
Full course description
Students often encounter grave difficulty in calculus if their algebraic knowledge is insufficient. This course is designed to provide students with algebraic knowledge needed for success in a
typical calculus course. We explore a suite of functions used in calculus, including polynomials (with special emphasis on linear and quadratic functions), rational functions, exponential functions,
and logarithmic functions. Along the way, basic strategies for solving equations and inequalities are reinforced, as are strategies for interpreting and manipulating a variety of algebraic
expressions. Students enrolling in the course are expected to have good number sense and to have taken an intermediate algebra course.
John Lorch, Ph.D.
Professor of Mathematical Sciences
John Lorch is a professor of mathematics at Ball State University and a graduate of Grant Elementary School, the University of Colorado, and Oklahoma State University. His mathematical interests
include combinatorial designs, the history of mathematics, and the content preparation of pre-service secondary school mathematics teachers. He enjoys sharing mathematical ideas with undergraduate
students, often leading to senior honors theses or other creative projects. When not engaging in mathematics, Dr. Lorch enjoys blues guitar, classic supernatural fiction, and stupid, juvenile,
wife-annoying movies. | {"url":"https://www.canvas.net/courses/precalculus-algebra","timestamp":"2014-04-21T04:34:05Z","content_type":null,"content_length":"14935","record_id":"<urn:uuid:2e60c393-19e7-4b70-96ba-e705acc546e6>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00512-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum: Teacher Support - ESCOT October 1, 2000
"In the Dark with an Elephant" Teacher Support
In the Dark with an Elephant Archived PoW || Student Version
In the Dark with an Elephant is no longer the current ESCOT Problem of the Week. The student version allows teachers to use the problem with their students without giving the students access to the
archived answers. Teachers can use the link to the archived problem to get ideas about student thinking.
In the Dark with an Elephant asks students to explore how the look of a graph of a function can vary, depending on how the domain and range of the graph window are set. The students will investigate
why the same function can sometimes look straight, while at other times it looks curved.
This ESCOT PoW could be used as an introductory exploration on domain or range, or as a part of larger units on graphing and the effects of scale.
If you have something to share with us as you use any of the links or suggestions on this page (something you tried and changed or a new idea), we would love to hear from you. Please email us.
Alignment to the NCTM Standards - Grades 6-8
Algebra
- understand patterns, relations, and functions
Geometry
- analyze characteristics and properties of two-dimensional geometric shapes
- use visualization, spatial reasoning, and geometric modeling to solve problems
Problem Solving
- solve problems that arise in mathematics and in other contexts
Communication
- communicate mathematical thinking coherently and clearly to peers, teachers, and others
- use the language of mathematics to express mathematical ideas precisely
Connections
- recognize and apply mathematics in contexts outside of mathematics
Possible Activities:
- Introduce the idea of the elephant by having the students imagine that they are looking through a rectangular periscope (like the spectator periscopes used at golf matches to see over people).
They should think about pointing the periscope at a certain part of the elephant, and then giving the coordinates that would tell someone else how to aim it.
- Find an appropriate viewing rectangle in order to get a complete graph. For example:
1. Which of the following points lie in the viewing rectangle [-3,5] by [1,8]?
a. (0,0)
b. (0,4)
c. (4,0)
d. (3,1)
e. (1,4)
f. (2,-2)
2. Choose a viewing rectangle that includes all of the indicated points:
a. (-8,9), (1,7) and (4,11)
b. (19, -2), (12, 48) and (-9,3)
- Quadratics: polynomial form by ExploreMath.com. If you use the magnifying glasses at the bottom of the screen, the graph will zoom in and out, helping to illustrate how the change in scale
affects the graph. This can drive home the point of scale change in a very visual way.
Digital Waveguides
A (lossless) digital waveguide is defined as a bidirectional delay line at some wave impedance [430,433]. Figure 2.11 illustrates one digital waveguide.
As before, each delay line contains a sampled acoustic traveling wave. However, since we now have a bidirectional delay line, we have two traveling waves, one to the "left" and one to the
"right", say. It has been known since 1747 [100] that the vibration of an ideal string can be described as the sum of two traveling waves going in opposite directions. (See Appendix C for a
mathematical derivation of this important fact.) Thus, while a single delay line can model an acoustic plane wave, a bidirectional delay line (a digital waveguide) can model any one-dimensional
linear acoustic system such as a violin string, clarinet bore, flute pipe, trumpet-valve pipe, or the like. Of course, in real acoustic strings and bores, the 1D waveguides exhibit some loss and
dispersion, so that we will need some filtering in the waveguide to obtain an accurate physical model of such systems. The wave impedance is needed for connecting digital waveguides to other
physical simulations (such as another digital waveguide or finite-difference model).
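(Not from the book: a minimal Python sketch of this idea, with two delay lines carrying the right- and left-going traveling waves. The inverting reflections at the ends, which model the fixed ends of an ideal string, are an illustrative assumption, and all names are made up for the sketch.)

from collections import deque

class DigitalWaveguide:
    """Lossless bidirectional delay line: two opposite traveling waves."""
    def __init__(self, length):
        self.right = deque([0.0] * length, maxlen=length)  # right-going wave
        self.left = deque([0.0] * length, maxlen=length)   # left-going wave

    def step(self):
        out_right = self.right[-1]  # sample leaving at the right end
        out_left = self.left[0]     # sample leaving at the left end
        # Inverting reflections, as at the fixed ends of an ideal string:
        self.right.appendleft(-out_left)  # left end feeds the right-going line
        self.left.append(-out_right)      # right end feeds the left-going line

    def output(self, tap):
        # The physical variable is the sum of the two traveling-wave components.
        return self.right[tap] + self.left[tap]

wg = DigitalWaveguide(50)
wg.right[10] = 1.0        # inject an impulse into the right-going wave
for n in range(8):
    wg.step()
print(wg.output(18))      # 1.0: the impulse has traveled 8 samples to the right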
[SOLVED] Need help please
October 30th 2006, 09:35 PM
I have a math problem where I need to find the area of a scalene triangle. I don't know how to find the area because I don't have the height; I just have the side lengths: 330, 270 and 240. Help would
be much appreciated.
October 30th 2006, 09:52 PM
I think you can use the cosine rule, then the sine rule for the area of a triangle:
cos A = (330^2 + 270^2 - 240^2) / (2 * 330 * 270)
Then use the angle value in this:
0.5 * 330 * 270 * sin(A) = area of the triangle.
October 31st 2006, 02:39 AM
Use Heron's formula: if a, b, c are the sides of a triangle, let s = (a+b+c)/2;
then the area is: sqrt(s(s-a)(s-b)(s-c)).
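(Worked through for this triangle, as an added check: s = (330 + 270 + 240)/2 = 420, so area = sqrt(420 * 90 * 150 * 180) = sqrt(1,020,600,000) ≈ 31,947 square units. The cosine-rule route in the previous reply gives the same value.)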
Calculus/Further Methods of Integration/Contents
Basic Integration RulesEdit
$\int 0\ du = C$
$\int ku\ du = k\times \int u\ du$
$\int (u + v)\ du = \int u\ du + \int v\ du$
Partial IntegrationEdit
For two functions u and dv of a variable x,
$\int u dv = u v - \int v du$
where u is chosen by precedence according to LIPET (a worked example follows the list):
• Logarithmic
• Inverse Trigonometric
• Polynomial
• Exponential
• Trigonometric
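(A worked example, not on the original page, to illustrate the rule: in $\int x e^x\, dx$ the polynomial $x$ outranks the exponential $e^x$ in LIPET, so take $u = x$ and $dv = e^x\, dx$, giving $du = dx$ and $v = e^x$:)

$\int x e^x\, dx = x e^x - \int e^x\, dx = x e^x - e^x + C$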
Improper IntegralsEdit
For any function f of variable x, continuous on the given infinite domain:
$\int_{a}^{\infin} f(x)\, dx$=$\lim_{b \to \infin}\int_{a}^{b} f(x)\, dx$
$\int_{-\infin}^{b} f(x)\, dx$=$\lim_{a \to -\infin}\int_{a}^{b} f(x)\, dx$
$\int_{-\infin}^{\infin} f(x)\, dx$=$\int_{-\infin}^{c} f(x)\, dx + \int_{c}^{\infin} f(x)\, dx$
For any function f of variable x continuous on the given interval, but with an infinite discontinuity at (1) a, (2) b, or some (3) c in [a,b]:
$\int_{a}^{b} f(x)\, dx$=$\lim_{c \to b^-}\int_{a}^{c} f(x)\, dx$ (1)
$\int_{a}^{b} f(x)\, dx$=$\lim_{c \to a^+}\int_{c}^{b} f(x)\, dx$ (2)
$\int_{a}^{b} f(x)\, dx$=$\int_{a}^{c} f(x)\, dx+\int_{c}^{b} f(x)\, dx$ (3)
Double integral and polar coordinates
May 27th 2008, 10:17 AM #1
I don't understand how to do these transformations as depicted in the picture.
There's 2R = pi/3 and R/2 = 0. How do you get these, pi/3 and 0?
There are a number of typographical errors in your post (at a glance, there's two dx's and a missing term in the transformation Jacobian).
From what I can tell, the region of integration is the upper half of the circle $(x - R)^2 + y^2 = R^2$ between the line x = R/2 and the y-axis. Do you realise this?
A transformation is then made to polar coordinates. Do you realise that? Are you familiar with them?
Do you know how to describe the region of integration using polar coordinates? In particular:
1. Do you see how to express the line x = R/2 and the circle in polar coordinates.
2. Do you see how to get the integral terminals for the angle?
$x = \frac{R}{2} \Rightarrow r \cos \phi = \frac{R}{2} \Rightarrow r = \frac{R}{2 \cos \phi}$.
$y = \sqrt{2Rx - x^2} \Rightarrow y^2 = 2 R x - x^2$
$\Rightarrow r^2 \sin^2 \phi = 2 R r \cos \phi - r^2 \cos^2 \phi \Rightarrow r = 2 R \cos \phi$. (r = 0 is rejected - why?)
At the point (2R, 0) $\phi = 0$.
The line $x = \frac{R}{2}$ cuts the semi-circle at $\left( \frac{R}{2}, ~ \frac{R \sqrt{3}}{2}\right)$. At this point, $\tan \phi = \frac{R \sqrt{3}/2}{R/2} = \sqrt{3} \Rightarrow \phi = \frac{\pi}{3}$.
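(Putting the pieces together, which is where the pi/3 and 0 come from: in polar coordinates the angle $\phi$ sweeps the region from 0 to $\pi/3$, and for each $\phi$ the radius runs from the line to the circle, so the integral becomes

$\int_0^{\pi/3} \int_{R/(2\cos\phi)}^{2R\cos\phi} f(r\cos\phi,\, r\sin\phi)\, r\, dr\, d\phi$

where the extra factor $r$ is the Jacobian of the polar transformation.)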
How to Generate Random Numbers in JavaScript
Random numbers are useful for all kinds of things like umm... never mind. The problem with the Math.random() method that we will be using is that it generates a long, practically useless decimal
number such as 0.77180239201218795. Here you'll learn how to use other methods of the Math object to get a whole number in the range of your choice.
This article assumes you know HTML and JS to a proficient standard in order to follow the article. If not, there are plenty of great sites out there to learn from. With a little effort you'll be back
here in no time at all.
1. 1
Create a basic page with your head, body and the like. Open a <script> tag in the body and simply try alert() on the value of Math.random(); basically, alert(Math.random()). Then save the
page as an .html file for ease of use.
2. 2
Open up the page. Then enjoy the mass of pseudo-random digits you have created! Useful, right? Maybe not, so what's next?
3. 3
Jumping up to a whole number. At this point you need to choose a value for the highest point in your range; the numbers generated will stay below this point. Let's go with 7 for the purpose of
this article. All you do now is multiply your generated random number by your desired high point, so Math.random() * 7, essentially. alert() the output and take a look.
4. 4
So there's still a huge trail of decimal places? If only life were that simple. Fortunately, JavaScript provides another useful method of the Math object called floor(). This method rounds the
number down to the nearest whole number. Notice the emphasis on down. Even if the value is above .5, floor() will still lower it. The line in your script should now look something like Math.floor
(Math.random() * 7). Once again, check your output using alert() or similar. Try it a few times. You will get a range of 0-6; but what if we wanted 1-7?
5. 5
The final stage. Now it's time to set a lower limit to our range. You do this simply by adding onto zero to bring you to your desired lowest point. To show this clearly, we can change our old 0-6
script into the 1-7 script we've always dreamt of: Math.floor(Math.random() * 7 + 1). You don't need any extra brackets because multiplication is always performed before addition. As we've done
before, check how it went. The full line, if you've been following the example, should look like this: alert(Math.floor(Math.random() * 7 + 1));. It looks confusing, but hopefully it isn't so bad.
3-9? As a final example, if you were looking to create a range of 3-9 for a script, you would multiply the result of Math.random() by 7. Once you've given it the old Math.floor() treatment you
can add 3 on (remember, count up from zero) and voila!
• The other Math rounding methods are round() which rounds how you would actually expect and ceil() which rounds up regardless. You could make a script using either of these too but floor() seems
the most intuitive.
• Semi-colons have been left off the ends of most examples for clarity. It's highly recommended you put them in even just as a programming convention.
• Take care to capitalise the 'M' in Math, as it won't work without it!
• Achieving true randomness with an algorithm alone is not possible in any programming language. However, there are algorithms that yield results very close to random. These
numbers are called pseudo-random.
Franconia, VA Geometry Tutor
Find a Franconia, VA Geometry Tutor
...I minored in economics and went on to study it further in graduate school. My graduate work was completed at the University of Maryland College Park, where I specialized in international
development and quantitative analysis. I currently work as a professional economist.
16 Subjects: including geometry, calculus, econometrics, ACT Math
...Being open about your needs, concerns, and things that are going well is very helpful to improving your math skills. My promise is to be supportive and to ensure that you have the best chance
to do well. Yours in math, ~DexterReviewing basic algebraic concepts (e.g. variables, order of operati...
15 Subjects: including geometry, chemistry, calculus, algebra 1
...I tutored a high school student for about 15 hours in AP Computer Science II last year in the spring, in the subject Java. I am able to understand, write, and fix code written in
C, C++, and Java. I have a strong math background from working toward a master's degree in computer science.
...I teach by example and practice. I was an elementary school principal for about 10 years. I am able to tutor young children in most areas of mathematics. I consider myself to be patient.
Hello students and parents! I am a biological physics major at Georgetown University and so I have a lot of interdisciplinary science experience, most especially with mathematics (Geometry,
Algebra, Precalculus, Trigonometry, Calculus I and II). Additionally, I have tutored people in French and Che...
11 Subjects: including geometry, chemistry, calculus, French
What is the equation of a circle with diameter AB; A is (5,4) and B is (-1,-4)?
First calculate the radius of the circle:
The horizontal distance between the points is the difference of the x coordinates of A and B, 5 - (-1) = 6, and the vertical distance is the difference of the y coordinates of A and B, 4 - (-4) = 8.
The diameter is then the direct distance between A and B which is sqrt(6^2 + 8^2) = sqrt(36+64) = sqrt(100) = 10. This makes the radius 5.
We also need to know where the center of the circle is. This has coordinates halfway between the coordinates of A and B. Half the horizontal distance between A and B is 3 and half the vertical
distance is 4, so that the center is at C(x,y) = B(x,y) + (3,4) = (-1,-4) + (3,4) = (2,0) (adding (3,4) again gives the coordinates of A).
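In general, for any diameter with endpoints A and B, the center and radius come straight from the midpoint and distance formulas (the general form of the steps above):

`(x_c, y_c) = ((x_A + x_B)/2, (y_A + y_B)/2)` and `r = (1/2)sqrt((x_B - x_A)^2 + (y_B - y_A)^2)`

With A(5,4) and B(-1,-4) these give (2,0) and 5, matching the work above.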
The equation for a circle written in standard form is given by
`(x-x_c)^2 + (y-y_c)^2 = r^2`
where `r` is the radius and `(x_c,y_c)` are the coordinates of the center C.
Therefore the circle whose diameter is AB can be written in standard form as
`(x-2)^2 + y^2 = 25`
Since we have learned in geometry that the angle in a semicircle is a right angle, we can apply this result to find the equation of the circle.
Let P(x,y) be any point on the circle.Thus A(5,4),B(-1,-4) and P(x,y) on the circle.
Also PA is perpendicular to PB.
slope of PA=(y-4)/(x-5)
slope of PB=(y+4)/(x+1)
PA is perpendicular to PB, therefore the product of the slopes is -1:
`((y-4)/(x-5)) * ((y+4)/(x+1)) = -1`
`y^2 - 16 = -(x-5)(x+1) = -x^2 + 4x + 5`
`x^2 + y^2 - 4x - 21 = 0`
which is the required circle. (Completing the square recovers `(x-2)^2 + y^2 = 25`, matching the first answer.)
Grid'5000 user report
• Solving the SFAS problem on the video card (Other) [achieved]
Description: At STEEP, we work on several computer vision problems, one of which is SFAS (Shape From Ambient Shading). This problem consists in recovering the 3D shape of a surface given only one B&W image of the surface illuminated with "ambient light". We already have a theoretical solution for a particular variation of this problem and we are extending it to other situations. We need to run several tests with different surfaces to evaluate the convergence speed and quality of this new approximation. We also have an implementation of this new solution written in C, using off-screen OpenGL rendering and MPI to recover a single surface across several computers without noticeable performance degradation for up to 16 machines.
Results: After several tests on the adonis cluster we found that the OpenGL approach to computing the visibility cone of a surface has several problems. First, technical issues prevent using the two video cards of the same machine at the same time for off-screen OpenGL rendering (the driver simply does not work after trying several versions and configurations, yielding a non-working pbuffer or an extremely slow rendering time, even worse than using just one video card). Beyond these technical issues, there are also numerical problems, caused by the fact that we are computing an integral as the sum of a great many terms computed by the GPU. Finally, there is a trade-off between the speed and the accuracy of the result, governed by the size of the rendered image. Given all this, the quality of the result and the algorithm's speed were not good enough to test our solution to this problem. We therefore changed the implementation to a completely different approach to computing the VC, using the CPU in this case (see next experiment).
• Solving the SFAS problem on the CPU (Other) [in progress]
Description: At STEEP, we work on several computer vision problems, one of which is SFAS (Shape From Ambient Shading). This problem consists in recovering the 3D shape of a surface given only one B&W image of the surface illuminated with "ambient light". In this case, we developed a new way to compute the visibility cone and we use it to apply the ideas from other SFS problems to the SFAS problem.
Results: Preliminary results show that this implementation performs better than the OpenGL approach to this problem, in terms of both speed and accuracy, with better usage of the resources.
Volume 66, Issue 2, 15 January 1977
The solid state physical properties of an isomorphous series of TTF salts (TTF[11](SCN)[6], TTF[11](SeCN)[6], and TTF[7]I[5]) were examined. While there was no noticeable trend in the conductivity (as a function of temperature and anion), the effective Fermi energy (ε[F]) and the magnetic susceptibility transition temperature exhibited a definite trend as a function of anion. There is a correlation between the above properties and subtle variations in the solid state structure of each of the above salts.
Ro‐vibronic resolved two‐photon excitation spectra of benzene, C[6]H[6], and C[6]D[6] have been measured in the region of the S [1] ←S [0] transition which is parity and symmetry forbidden in
two‐photon absorption but can be induced by suitable ungerade vibrations. Rotational envelopes for the vibronic transitions are calculated for circular and linear polarized light excitation and
are in good agreement with experiment. The polarization behavior is shown to change strongly across even a single band contour of a totally symmetric vibronic two‐photon transition. The
polarizationanalysis even for a randomly rotating gas phase molecule provides a severe constraint on the possible assignments and hence is an important tool for the assignment of new transitions.
About 85% of the observed two‐photon excitation spectrum of benzene can thus be assigned. The appearance of combination bands shows that anharmonic mixing plays an important role in the excited
The solid‐state synthesis of a new phase of (SN)[ x ] is reported. X‐ray diffraction results show that mechanical shear induces a transformation of the known, monoclinic phase (space group P2[1]/
c) to a phase of orthorhombic symmetry (probable space group P2[1]2[1]2[1]). The unit cell parameters of the shear‐induced phase of (SN)[ x ] are a=6.251, b=4.429, and c=4.807 Å. Since the
approximate chain repeat dimension (b) and the chain symmetry (2[1]) are the same before and after the transformation, the chain geometries for the two phases of (SN)[ x ] are predicted to be
quite similar, if not identical. However, important differences are expected for interchain interactions that have been previously shown to be essential in understanding various properties of the
monoclinic phase of (SN)[ x ]. Both phases of (SN)[ x ] are shown to be stable with respect to thermally‐induced interconversion for temperatures up to 175°C. Molecular packing calculations are
used to predict structural aspects of the shear‐induced phase and the relationship between parent and daughter phases. The latter results are consistent with the monoclinic‐to‐orthorhombic
transformation being martensitic in character.
The effects of non‐Maxwellian ion speed distributions on ion–neutral reaction rate constants measured in drift tubes are examined experimentally and are compared to the predictions of recent
theories. The rate constants of strongly kinetic‐energy‐dependent ion–molecule reactions of O^+ with O[2], N[2], and NO are measured separately in helium and argon buffer gases, in which the O^+
speed distributions are expected to be very different. The differences between the helium‐buffered and argon‐buffered rate constants are often substantial. When different, the argon‐buffered
values are generally larger than the helium‐buffered values at the same mean energy, indicating that the O^+‐in‐argon distribution has a larger high‐energy ’’tail’’ than the O^+‐in‐helium
distribution. The differences between the two sets of data are compared to predictions from (a) the Monte Carlo trajectory calculations of Lin and Bardsley, and (b) the moment solution of the
Boltzmann equation of Viehland and Mason, both described in accompanying papers. The excellent agreement demonstrates that the non‐Maxwellian ion speed distributions now pose no problems in the
interpretation and application of the kinetic‐energy aspects of the rate constants of atomic–ion reactions studied in rare‐gas‐buffered drift tubes.
A theory is developed for gas‐phase swarm measurements of ion–molecule reactions in electrostatic fields of arbitrary strength. The theory allows measurements of reaction rate coefficients made
at low temperatures and strong electric fields to be converted directly to equivalent thermal rate coefficients at elevated temperatures inaccessible by direct methods. It is not necessary to
calculate the ion velocity distribution function explicitly, or to unfold the reaction cross section from the rate data. In first approximation the measured rate coefficient is equal to the thermal rate coefficient at an effective temperature calculated directly from the measured ion drift velocity. Higher approximations are obtained from more detailed analysis of the dependence of the rate coefficient and drift velocity on electric field strength. Comparison is made with experimental data reported in an accompanying paper by Albritton et al. In another accompanying paper, Lin and Bardsley compare the present theory with their detailed Monte Carlo calculations.
The motion of a swarm of ions in a uniform electric field is studied by simulating the motion of a single ion through many collisions with neutral atoms in order to obtain the drift velocity, average energy, and velocity distribution for the ions. For K^+ ions in He at low field strengths, the results agree well with the solutions of the Boltzmann equation by Kumar and Robson; and for K^+ in Ar at all field strengths, the computed mobilities demonstrate that the Viehland–Mason moment method can give useful results, especially if carried through to third order. The velocity distributions computed for O^+ ions in He and Ar are used in the accompanying paper by Albritton et al. to analyze drift tube measurements of O^+ reaction rates. Significant deviations from the Maxwell–Boltzmann form have been found and are seen to have important effects in that application. Velocity distributions have also been obtained for Li^+ in He. The sensitivity of ionic mobilities to changes in the ion–atom interaction potential is examined with particular reference to K^+ ions in Ar.
The photoionization efficiency curves of the Kr[2] and Ar[2] van der Waals dimers were obtained with the molecular beam technique in the wavelength ranges 850–965 Å (12.848–14.586 eV) and 750–855
Å (14.501–16.531 eV), respectively. The ionization potential of Kr[2] was found to be 12.87±0.015 eV (963.7±1.2 Å), which agrees with the value obtained by Samson and Cairns. The ionization
potential of Ar[2] was found to be 14.54±0.02 eV (852.7±1.2 Å). Using the known ground state dissociation energies of Kr[2] and Ar[2], the dissociation energy of Kr[2]^+, D[0](Kr[2]^+), is deduced to be 1.15±0.02 eV and that for Ar[2]^+, D[0](Ar[2]^+), is 1.23±0.02 eV. The photoion yield curves of Kr[2] and Ar[2] are compared with that of Xe[2]. Prominent autoionization
structure was observed to correspond to Rydberg molecular states which are derived from the combination of a normal and an excited atom in the 4p ^5 n s (or 4p ^5 n d) configuration for Kr and 3p
^5 n s (or 3p ^5 n d) configuration for Ar.
The results of low‐temperature studies of the magnetic characteristics of Cs[2]CoCl[4] are reported. Single‐crystal principal‐axes magnetic susceptibility measurements between 1.5 and 20 K have
been fit to an exchange‐modified spin‐3/2 single‐ion model. Heat capacity measurements between 1 and 30 K on Cs[2]CoCl[4] and between 4 and 25 K on Cs[2]ZnCl[4] have been used with a corresponding
states procedure to obtain the magnetic heat capacity of Cs[2]CoCl[4]. All the results generally indicate a large zero‐field splitting of 14 to 16 K with the possibility of a rhombic distortion
of the CoCl[4]^2− ion. Both the susceptibility and heat capacity measurements indicate the presence of significant magnetic exchange. At the lower temperatures the heat capacity results appear
be describable by an X Y linear chain model, in agreement with structural considerations in conjunction with the theoretically expected behavior of tetrahedral cobalt(II).
The ’’Principle of Increasing Mixing Character,’’ first described by Ruch and the present author, is presented in a somewhat modified formulation. The theory is then developed in such a way as to
make it applicable to irreversible processes in nonideal systems. In systems described by a large number of parameters for which the ’’exponential assumption’’ (essentially the same as the
well‐known local‐equilibrium assumption) can be made, the entire content is contained in a single, intuitively appealing expression. The result is applied to a simple example: diffusion in the
Landau model of a nonideal solution.
SCF calculations for a hydrogen atom in a spherical box have been performed. The basis functions are products of STO’s and cutoff functions. The variations of the energy and the hyperfine
splitting as functions of r [0], the sphere radius, have been studied.
Cross correlations of the velocity with the total force, the mean force, and the fluctuating force are evaluated for the Langevin equation and for a generalized version which has been suggested
for a Brownian particle in an incompressible fluid of nonzero density. In the latter case all of the cross correlations follow a power law decay with the first ∼t ^−5/2 and the second two ∼t ^−3/
2 for large t so that the t ^−3/2 decays cancel. It is shown that the failure to consider the effects of these cross correlations leads to inconsistencies, in particular, the failure to satisfy
the stationarity condition. The cross correlation of the fluctuating force with the velocity measured at the same time gives the rate at which energy is put into the system by this force and is
shown to be equal to the rate at which energy is lost due to the dissipative effect of the mean force. The inertial response of the fluid to the motion of the Brownian particle is also evaluated.
Comparisons are made between the original general treatment by Kubo and more recent developments in which a specific frequency dependent hydrodynamic drag is used. A relationship between the
velocity autocorrelation function, the fluctuating force autocorrelation function, and the velocity–fluctuating force cross correlation is proven.
Lanczos tridiagonalization is applied to several problems: the one dimensional quartic oscillator, the two dimensional quartic oscillator, and a finite difference version of the latter. In each
case high accuracy upper and lower bounds are established in excellent agreement with previous calculations.
From Knudsen effusion mass spectrometric examination of reactions of the type M(g)+MO(g) = M[2]O(g), 2MO(g) = M[2]O[2](g), and 2MO(g) = MO[2](g)+M(g), the atomization energies ΔH°[0] (kcal/mole) of
the following new species have been estimated: Eu[2]O(g), 174±12; Gd[2]O(g), 236±10; Tb[2]O(g), 243±12; Ho[2]O(g), 216±14; Lu[2]O(g), 266±14; Eu[2]O[2](g), 324±17; Gd[2]O[2](g), 427±17; Tb[2]O[2]
(g), 432±21; Ho[2]O[2](g), 407±26; GdO[2](g), 314±17; and HoO[2](g), 307±25. Atomization energies ΔH°[0] (kcal/mole) revised from literature are presented for the following: Sc[2]O(g), 236±16; Y
[2]O(g), 249±13; La[2]O(g), 265±13; Y[2]O[2](g), 438±28; La[2]O[2](g), 459±28; Ce[2]O[2](g), 472±15; CeO[2](g), 344±5; and NdO[2](g), 318±20. The variation of the atomization energies of the M[2]
O(g), M[2]O[2](g), and MO[2](g) species along the lanthanide series follows a similar pattern observed for the atomization energies of the MO(g) species and the heat of sublimation of the
corresponding metals. Predictions of the atomization energies of the yet unobserved rare earth oxide species of the types above have been made. The standard heats of formation at 0°K of the
gaseous rare earth oxides are also presented.
A permutation‐inversion group theoretical classification scheme for the tunneling–rotational levels of the water dimer molecule is given. Electric dipole selection rules and nuclear spin
statistics are discussed. Application of these results to the microwave spectrum of water dimer, observed by molecular beam techniques, is also presented.
Molecular beams of hydrogen bonded water dimer, generated in a supersonic nozzle, have been studied using electric resonance spectroscopy. Radiofrequency and microwave transitions have been
observed in (H[2] ^16O)[2], (D[2] ^16O)[2], and (H[2] ^18O)[2]. Transitions arising from both pure rotation and rotation–tunneling occur. The pure rotational transitions have been fit to a
rigid rotor model to obtain structural information. Information on the relative orientation of the two monomer units is also contained in the electric dipole moment component along the A inertial
axis μ[a], which is obtained from Stark effect measurements. The resultant structure is that of a "trans‐linear" complex with an oxygen–oxygen distance R[OO] of 2.98(1) Å, the proton accepting water axis is 58(6)° with respect to R[OO], and the proton donating water axis at −51(6)° with respect to R[OO]. This structure is consistent with a linear hydrogen bond and the
proton acceptor tetrahedrally oriented to the hydrogen bond. The limits of uncertainty are wholly model dependent and are believed to cover variations from the zero‐point vibrational structure
observed to the equilibrium structure. μ[a] shows strong dependence on J and K and is about 2.6 D. Centrifugal distortion constants have been interpreted in terms of the monomer–monomer
stretching frequency and give ω=150 cm^−1.
The concept of the angular entropy arises from consideration of the information content of a scattering pattern, i.e., an angular distribution of collision products. It is shown that information
theory (I.T.) provides the framework for evaluation and interpretation of the entropy (and entropy deficiency) of an angular distribution of reactive, inelastic, or elastic scattering. The
differential cross section σ (ϑ) is converted to a normalized probability density function (pdf), P (u) [u= (1/2)(1−cosϑ)], from which the angular surprisal is obtained as −lnP (u). The average
over u of the surprisal yields the angular entropy deficiency. (A histogrammic approximation to the continuous pdf can provide a simple estimate of ΔS). Examples are presented of reactive and
inelastic molecular scattering patterns and of various prototype angular distributions giving insight into the angular entropy. The I.T. method is also applied to elastic scattering of atoms and
molecules. It inherently demands the elimination of the well‐known ’’classical divergencies’’ (the forward infinity and rainbow spike). These problems disappear when quantal (or semiclassical)
differential cross sections are used. Nevertheless, the forward cone makes the dominant contribution to the angular entropy deficiency for elastic scattering at moderate energies. The rainbow
structure introduces some entropy deficiency, but the quantal interferences in σ (ϑ) contain little information (in the strict I.T. sense). However, nuclear symmetry effects are found to be
The metal concentration and matrix conditions which favor the dimerization of vanadium atoms to divanadium molecules are quantitatively assessed using optical spectroscopy. A simple kinetic
theory is presented which enables small metal clusters to be identified in the presence of atomic species. This approach makes use of the fact that a metal atom being deposited is capable of
diffusing either on the matrix surface or within a narrow region (the reaction zone) near the matrix surface before its kinetic energy is dissipated sufficiently to immobilize it. The
surface diffusion pathway is found to predominate over the statistical generation of dimers. The kinetic result, which suggests that V[2] is formed on the matrix surface rather than in the gas
phase, is also borne out by the intriguing observation that for a given metal deposition rate the dimer‐to‐monomer ratio decreases as one increases the atomic weight of the noble gas used to
isolate them, with Ar giving the most V[2] and Xe the least. Careful concentration experiments in Ar, Kr, and Xe matrices permit the uv–visible transitions of V[2] to be identified and the
extinction coefficient ratio ε[V]/ε[V[2]] to be determined. A qualitative molecular orbital description of V[2] is presented in the light of iterative extended Hückel calculations. These computations suggest that high spin divanadium has a strong metal–metal bond which is mainly 4s in character with only small contributions from the degenerate d[xz,yz] π‐bonding set. Visible
absorptions observed in the 600–450 nm region are tentatively assigned to electronic transitions localized mainly between the V–V σ‐bond and the d‐orbital manifold.
Cross sections for rotational excitation of ortho formaldehyde due to collision with helium are computed following the coupled states (CS) formalism and compared with recent coupled channel (CC)
results obtained employing the same ab initio configuration interaction intermolecular potential. The CS equations are integrated at 9 scattering energies between 25 and 95°K using a basis set of 16 ortho H[2]CO states (1⩽j⩽5). The CS procedure with the orbital angular momentum quantum number l set equal to the total angular momentum J yields the correct order of magnitude for
scattering cross sections. Qualitative differences are found, however, in the energy dependence of some inelastic transitions.
Gaseous ion mobilities are mainly dependent on ion–neutral collision energies in the range 0.03–1 eV and, using a recently developed kinetic theory method, can be directly related to ion–neutral
interaction potentials. In this paper, experimental mobilities are used to test recent theoretical calculations based on the electron–gas model of the interaction potentials for the twelve
combinations of Li^+, Na^+, K^+, and Rb^+ with He, Ne, and Ar. The model potentials are quite good, but some systematic discrepancies with experimental mobilities exist. These discrepancies are
analyzed in terms of the relation between the mobility and the ion–atom potential.
A rigorous formal theory of relaxation phenomena within the framework of the averaged j[z]‐conserving coupled states (j[z]CCS) (Aj[z]) approximation is developed. Using the expressions obtained in this paper, the Aj[z] approximation is shown to yield pressure broadening cross sections which are in exact agreement with full close coupling results for a He+HCl model system. Previous j[z]CCS approximations to pressure broadening cross sections, which were in complete disagreement with close coupling for all systems studied, are shown to be the result of incorrect labeling of j[z]CCS T‐matrices by total angular momentum J rather than by the effective orbital angular momentum l̄. The correctly labeled results are not only found to be highly accurate but also considerably simpler than the expressions resulting from incorrectly labeled j[z]CCS T‐matrices. Expressions are also derived for NMR spin–lattice relaxation cross sections within the Aj[z] approximation. Because the decoupled dynamical equations may be solved much more easily than the full exact CC ones, it is now feasible for the Aj[z] pressure broadening and spin–lattice relaxation cross sections to be computed in an iterative procedure. This will enable one to use such experimental data as a tool for determining molecular interactions.
How to Generate Random Numbers in JavaScript
Random numbers are useful for all kinds of things like umm... never mind. The problem with the Math.random() method that we will be using is that it generates a long, practically useless decimal
number such as 0.77180239201218795. Here you'll learn how to use other methods of the Math object to get a whole number in the range of your choice.
This article assumes you know HTML and JS to a proficient standard in order to follow the article. If not, there are plenty of great sites out there to learn from. With a little effort you'll be back
here in no time at all.
1. Create a basic page with your head, body and the like. Open a <script> tag in the body and simply try alert() on the value of Math.random(). Basically, alert(Math.random()), and then save the page as an .html file for ease of use.
page as an .html file for ease of use.
2. Open up the page. Then enjoy the mass of pseudo-random digits you have created! Useful, right? Maybe not, so what's next?
3. Jumping up to a whole number. At this point you need to choose a value for the highest point in your range. The number generated will not go above this point. Let's go with 7 for the purpose of this article. All you do now is multiply your generated random number by your desired high point. So: Math.random() * 7, essentially. alert() the output and take a look.
4. So there's still a huge trail of decimal places? If only life were that simple. Fortunately, JavaScript provides another useful method of the Math object called floor(). This method rounds the number down to the nearest whole number. Notice the emphasis on down. Even if the value is above .5, floor() will still lower it. The line in your script should now look something like Math.floor(Math.random() * 7). Once again, check your output using alert() or similar. Try it a few times. You will get a range of 0-6; but what if we wanted 1-7?
5. The final stage. Now it's time to set a lower limit to our range. You do this simply by adding onto zero to bring you to your desired lowest point. To show this clearly, we can change our old 0-6 script into the 1-7 script we've always dreamt of: Math.floor(Math.random() * 7 + 1). You don't need any extra brackets because multiplication is always performed before addition. As we've done before, check how it went. The full line, if you've been following the example, should look like this: alert(Math.floor(Math.random() * 7 + 1));. It looks confusing but hopefully it isn't so bad.
6. 3-9? As a final example, if you were looking to create a range of 3-9 for a script, you would multiply the result of Math.random() by 7. Once you've given it the old Math.floor() treatment you can add 3 on (remember, count up from zero) and voila!
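To tie the steps together, here is one way to wrap the whole recipe into a reusable helper (the name randomInt is just illustrative, not part of JavaScript itself):

// Returns a pseudo-random whole number between min and max, inclusive.
function randomInt(min, max) {
    return Math.floor(Math.random() * (max - min + 1)) + min;
}

alert(randomInt(1, 7)); // the 1-7 range from step 5
alert(randomInt(3, 9)); // the 3-9 range from step 6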
• The other Math rounding methods are round() which rounds how you would actually expect and ceil() which rounds up regardless. You could make a script using either of these too but floor() seems
the most intuitive.
• Semi-colons have been left off the ends of most examples for clarity. It's highly recommended you put them in even just as a programming convention.
• Take care to capitalise the 'M' in Math, as it won't work without it!
• Achieving true randomness in computers is something that cannot be done by anybody, in any programming language. However, there are algorithms that can yield results very close to random. These
numbers are called pseudo-random.
The Science of Programming/How Low Can You Go?
Recall from the last chapter that the symbol d with respect to Calculus means 'to take tiny pieces of'. Let's practice taking small pieces of a number and, at the same time, learn some programming.
And, because you must, as future Computer Scientists, you must learn to let go and be wild, we will take a small piece of a small piece, and a small piece of that very small piece, and so on and so
on, ad infinitum.
At this point, you should install Sway on your system.
Using the Sway Interpreter
We begin by starting the Sway interpreter, Sway being the Programming Language you will learn. Once started, the interpreter rewards you with a prompt:

sway>

This prompt signals to you to enter some code. When you do so, the interpreter will tell you the result of running (or evaluating or executing) that code.
sway> 3;
INTEGER: 3
To exit the interpreter, type <Ctrl-d> or <Ctrl-c>. These keystrokes are generated by holding the Control key down while tapping the 'd' or 'c' key.^[1] Here, we
have entered a 3 followed by a semicolon (the semicolon tells the interpreter that we are done entering code). Sway (actually the Sway interpreter) responds by saying 3 is an integer with a value of
3. Of course, we already knew all that, so the interpreter doesn't seem to be all that useful. But did you know that 43 times 112 is 4816?
sway> 43 * 112;
INTEGER: 4816
Sway does! Let's use the fact that the interpreter is pretty good at math to figure out what a small part of a big number is. Let's assume that a small part is $\frac{1}{16}$ of the whole.^[2] First,
let's figure out what $\frac{1}{16}$ is as a decimal number, or real number, since that will be easier to type in:
sway> 1/16;
SOURCE CODE ERROR
file stdin,line 1
an expression was expected, found token of type BAD_NUMBER
error occurred prior to: 16;[END OF LINE]
Uh oh. Sway did not like the '1' followed so closely by the '/'. In fact, Sway requires spaces around things like division signs in most instances. Let's try again:
sway> 1 / 16;
INTEGER: 0
Better, at least we got an answer instead of an error. However, it doesn't seem that the interpreter is that good at math after all. If we put on our Sherlock Holmes cap and ponder, we see that the
interpreter said the result of dividing 1 by 16 is an integer, but we know it should be a real number. It turns out that the Sway language, like most programming languages, uses a rule that combining
two things of the same type yields a result of the same type. In this case, zero happens to be the largest integer that is less than the desired result. In other words, the interpreter truncated the
fractional part of the real number and gave us the integer that was left. Let's experiment and see if this is so:
sway> 7 / 2;
INTEGER: 3
Seems to be. So getting back to our original problem, how do we find out what is 1 divided by 16 as a real number? Let's enter the numbers as real numbers instead of integers:
sway> 1.0 / 16.0;
REAL_NUMBER: 0.0625000000
That's much better!
Now let's figure out what a small part of a million is (using our assumption of what is small):
sway> 1000000 * 0.0625;
REAL_NUMBER: 62500.0000000000
Sixty-two and a half thousand. Still big in an absolute sense, but much smaller that a million. Being wild and crazy, let's take a small part of a small part:
sway> 1000000 * .0625 * .0625;
REAL_NUMBER: 3906.2500000000
About 4000. Much smaller.
You should familiarize yourself with Sway primitives and combinations, including Sway's precedence and associativity rules, at this point.
Using Variables
Shall we continue taking ever smaller parts?
Before we do, I must admit that I am, at heart, a lazy person, as are most Computer Scientists.^[3] Typing in all those numbers is just too much work! I am going to use two short symbols to represent
both the million and the fraction:
sway> var x = 1000000;
INTEGER: 1000000
sway> var f = .0625;
REAL_NUMBER: 0.0625000000
What I've done is created a variable to stand in for the million and a variable to stand in for the fraction, x and f respectively.^[4] Now I can use those variables instead of the numbers. Let's
check to see if I did things right:
sway> x * f * f;
REAL_NUMBER: 3906.2500000000
Looks like I did. Let's go a step further:
sway> x * f * f * f;
REAL_NUMBER: 244.1406250000
It seems variables are a nice way to reduce the amount of typing needed. The only drawback is remembering what a variable stands for. This is why it is so important to name your variables in such a
way as to make it easy for you to recall their meanings. Generally, single letter variable names are not a good idea (although there are exceptions to this rule).
To learn more, see Sway variables.
Using functions
We could go further with this, but you have yet to understand the depths of my laziness. Even typing in * f repeatedly is too much for my sensibilities. I am going to define (or write) a function to
do the work for me (you can cut and paste this function into the interpreter if you are lazy like me and don't want to type it in yourself):
function smaller(amount,fraction)
{
    inspect(amount * fraction);
}
Whether you paste it in or type it in, you should get the following response from the interpreter:
sway> function smaller(amount,fraction)
more> {
more> inspect(amount * fraction);
more> }
FUNCTION: <function smaller(amount,fraction)>
The more> prompt indicates that the Sway interpreter is expecting some more code.
There are two things you must do when dealing with functions, (1) define them and (2) call them. Here, we have just defined a function; now we need to call it. We will call it by typing the name of
the function followed by a parenthesized list of arguments. Sometimes we say this is "passing the arguments to the function".
When calling the function smaller^[5], the values of the arguments will be bound to the variables amount and fraction, which are found after the function name in its definition. These variables are
known as the formal parameters of the function. This passing and binding, in essence, defines those variables for us. Note that the arguments and formal parameters do need to be separated by
whitespace; the comma serves as a separator.
After these implicit variable definitions, the code between the curly braces {} will be evaluated as if you had typed them in directly to the interpreter. This is why I said in the previous chapter
that a function does the work for you. If we look at the code (or body) of the function we just defined, we see that a function named inspect is being called. Since we haven't defined inspect, we can
assume that this function already exists within the interpreter. By its name, we can guess that it will tell us something about what happens when we multiply amount by fraction:
sway> smaller(x,f);
amount * fraction is 62500.000000
REAL_NUMBER: 62500.000000
There is a whole lot going on here that needs explanation. The first is that the value of x (1000000) was bound to the formal parameter amount of the function smaller. Likewise, the value of f
(0.0625) was bound to the formal parameter fraction. Then the body of the function smaller was evaluated, triggering a call to the inspect function. What inspect does is print out its literal
argument, followed by the string " is ", followed by the value of its argument. Since the call to inspect is the last thing smaller does, whatever inspect returns is returned by smaller. This return
value appears to be the evaluated argument.
Note that interpreter reported two things. The first is the string produced by inspect. The second is the report of the return value as we have seen before. We call an action that has an effect
outside the called function (in this case, the printing by inspect) a side effect.
To learn more, see Sway Functions.
Now, at this point, you are probably thinking that, not only am I lazy, I must be a rather dim bulb as well, because I spent a lot more effort writing the smaller function and calling it than I would
have simply typing:
sway> x * f;
REAL_NUMBER: 62500.000000
If that was all I was going to do, you would be quite right in your assessment. But I am not finished yet. Now I am going to make smaller much, much more powerful:
function smaller(amount,fraction)
{
    inspect(amount * fraction);
    smaller(amount * fraction,fraction);
}
Notice that I have added a call to the smaller function after the call to the inspect function. When a function calls itself, it is said to be a recursive function, that it exhibits the property of
recursion and that at the point of the recursive call, the function recurs.^[6] Notice further that the first argument to smaller in that internal call will send a (hopefully) smaller number to the
smaller function. In that call, smaller will be called again with yet an even smaller number, and so on. Let's try it:
sway> smaller(1000000,.0625);
amount * fraction is 62500.0000000000
amount * fraction is 3906.2500000000
amount * fraction is 244.1406250000
amount * fraction is 15.2587890625
amount * fraction is 0.9536743164
amount * fraction is 0.0596046448
amount * fraction is 0.0037252903
amount * fraction is 0.0002328306
amount * fraction is 1.4551915228e-05
amount * fraction is 9.0949470177e-07
amount * fraction is 0.0000000000e+00
amount * fraction is 0.0000000000e+00
amount * fraction is 0.0000000000e+00
amount * fraction is 0.0000000000e+00
amount * fraction is 0.0000000000e+00
amount * fraction is 0.0000000000e+00
amount * fraction is 0.0000000000e+00
amount * fraction is 0.0000000000e+00
encountered a fatal error...
stack overflow.
Wow! Unless you have very quick eyes, all you saw is the bottom part of this output. What happened? What we did was to define a function that fell into an infinite loop when we called it. An infinite
loop occurs because we never told our function when to stop calling itself. Thus, it tried to call itself ad infinitum. Of course, a computer has a limited amount of memory, so in this particular
case, the calls could not go on forever.^[7] Let's redefine our function so that it pauses after every inspection so we can slow down the output:
function smaller(amount,fraction)
{
    inspect(amount * fraction);
    smaller(amount * fraction,fraction);
}
You can start up the Sway interpreter again and paste in our revised function definition.
Now when we call our function we see this output (assuming you repeatedly hit the enter key on your keyboard):
sway> smaller(1000000,.0625);
amount * fraction is 62500.0000000000
amount * fraction is 3906.2500000000
amount * fraction is 244.1406250000
amount * fraction is 15.2587890625
amount * fraction is 0.9536743164
amount * fraction is 0.0596046448
amount * fraction is 0.0037252903
amount * fraction is 0.0002328306
You can stop the interpreter by typing in a <Ctrl>-c which is entered from your keyboard by holding the Control key down while tapping the 'c' once.
We can see that amount * fraction gets very small very quickly. If you start again and keep going, you will see that amount * fraction eventually reaches zero. Theoretically, it doesn't, but at some point the quantity gets smaller than can be represented by Sway, so Sway reports zero.
You should read more about recursion and learn about if expressions before continuing.
Let's try a new version of the function; this one calls itself a given number of times before stopping:
function smaller(x,f,n)
{
    if (n == 0)
    {
        :done;
    }
    else
    {
        inspect(x);
        smaller(x * f,f,n - 1);
    }
}
This time, we have a new formal parameter n, which represents the number of times the function will call itself recursively. We've also changed the call to inspect to print out the value of x. Note that the recursive call not only makes x smaller, it makes n smaller as well. When n gets small enough, the function returns the symbol done.
We must remember to add an extra argument to the call. Let's send in 8 as the number of times to recursively call:
sway> smaller(1000000,.0625,8);
x is 1000000
x is 62500.000000
x is 3906.2500000
x is 244.14062500
x is 15.258789062
x is 0.9536743164
x is 0.0596046448
x is 0.0037252903
SYMBOL: :done
Yay! Our program stopped recurring infinitely. Formally, the if inside the function has two cases: the base case, which does not contain a recursive call, and a recursive case, which does. Infinite
recursive loops occur when the base case is never reached, usually due to some misstatement of the base case condition or a misunderstanding of the range of values taken on by the formal parameter
being tested in the base case condition.
Back to Calculus
What did this exercise with finding ever smaller numbers have to do with calculus? Well, the $d$ symbol means 'take a tiny piece of'. How tiny? Infinitesimally small, or in other words, the value
computed by an infinite number of recursive calls to smaller. Of course, this is smaller than we can fathom, but that is not a big deal. We can't fathom how our brains work, but we get along OK, or
at least most of us do.
All formulas are written using Grammar School Precedence.
1. What happens when you combine an integer and a real number?
2. This question and subsequent questions refer to the last version of smaller. Why did the call to smaller return a real number?
3. Redefine smaller so that it prints out nine inspections rather than eight, for the same initial value of 8.
4. What happens when zero is passed as the count to smaller?
5. What happens when -1 is passed as the count to smaller? Why?
6. What happens if the recursive call in smaller is replaced by smaller(x * f,f,n)? Why?
7. Write a Sway function to represent $y = 2 x ^ a$, and evaluate it for $x = 1, 2, 3$, and $a = 2, 4, 6$.
8. Using the Sway function for $y$ in the previous problem, write a new Sway function $z = a y$, and evaluate as before.
9. A series is a sequence of terms that are added together. If as more and more terms are added together, the sum of the terms get closer and closer to a number k, then, k is said to be the limit of
that series. The book explains this with Zeno's paradox. Let's say you start a distance 1 away from a wall and with each step, you travel half the remaining distance to the wall (why is this a
paradox?). Using recursion and Sway, define a function zeno(n) that demonstrates this process. The argument to zeno is the number of steps taken and the return value should be the total distance
travelled. What is the limit of the zeno series?
10. Let $g(x) = x^2$ and $h(x) = (g(1+x) - g(1-x))/(3 x)$. Calculate $h(1)$, $h(2)$, $h(3)$, and $h(0.5)$. Express $h(x)$ in terms of $x$. Draw a graph of h in the range $-2 \leq x \leq 2$.
11. The formula for the chance of an NPC (Non-Player Character) landing a crushing blow in the World of Warcraft (a popular Massively Multiplayer Online Role-Playing Game or MMORPG), is $(\mathit{NPCLevel} * 5 - \mathit{PlayerDefense}) * 2\% - 15\%$. For a level 70 NPC, express this as a function of $\mathit{PlayerDefense}(x)$ and give the value of defense that makes the chance 0.
12. The amount of experience to level a character is $h(x) = g(x) * (1344 - ((69 - x) * ((69 - x) * 4 + 3))) + 155$ where x is the character level and where $g(x) = 5x + 45$ is the basic amount of
experience earned for dispatching a mob of level equal to the character. Calculate $h(61)$, $h(69)$, $h(0.5)$, and $h(1)$. Express $h(x)$ in terms of x. Plot a graph of h for $61 \leq x \leq 69$.
Multiferroic Materials
Multiferroics are a class of materials with coexisting order parameters. There is great interest in multiferroics in which ferroelectric and magnetic order coexist, largely driven by the goal of
“spintronic” applications. It has become increasingly clear that implementation of spintronics requires the ability to manipulate the magnetic state of a material through the application of electric
fields. Multiferroics are a promising route to this goal, because of the coupling between their ferroelectric and magnetic order parameters.
In our lab we are currently exploring the properties of the multiferroic compound, bismuth ferrite (BiFeO3 or just BFO). The reason for this choice is that BFO has the very rare (and potentially
useful) attribute that both its ferroelectric and antiferromagnetic order parameters are well established at room temperature. Despite its obvious potential, research into the properties of BFO has
proceeded relatively slowly because of difficulties in synthesizing large single crystals and high quality thin films. However, in the last few years our collaborator Prof. Ramesh and his group at
Berkeley have made rapid progress in the growth of thin films of BFO.
Control over Variants
One of the challenges that we encounter in trying to use multiferroic materials is the large number of possible domains. The structure of BFO, shown below, is rhombohedral, only slightly distorted from cubic. The polarization P, which is the ferroelectric order parameter, points along any one of the four body diagonals, or 111 directions of the cube. As P can be either parallel or antiparallel with 111, there are 8 possible directions, or variants of P, as shown below.
The antiferromagnet order parameter is the staggered magnetization or L, which is defined as the vector difference between nearest neighbor Fe3+ spins along the 111 direction. Theory predicts that L
lies in the plane perpendicular to 111 (the basal plane) and is parallel to one of the three mirror planes that are perpendicular to the basal plane. Naturally, L can be positive or negative as well,
corresponding to 6 possible variants. So far we have 6 x 8 = 48 possible domains. However, we are not done yet! In BFO the nearest neighbor spins are not precisely antiparallel. They are canted, or
tipped slightly away from antiparallel. The canting produces a weak magnetization, M, that lies in the basal plane and is perpendicular to L. The two possible directions of canting finally yield 96
possible domains!
It is clear that control over these variants is a crucial step in understanding and applying BFO. The Ramesh group has demonstrated that they can grow single crystalline thin films with the growth
direction parallel to either 111, 110, or 001. Moreover, they can control the number of variants of P in each of these structures. So, for example, you can ask for (and receive) a 110 film with
precisely two variants of P.
Electric field controlled second harmonic generation
One goal of our research in this area is to demonstrate reversible switching of L through control of P by application of E. To do this we need a probe of P and L that can respond to rapid changes. A
serious technological contender should be able to switch with a bandwidth of at least 100 GHz.
Second harmonic generation (SHG) is an ideal probe for this purpose. SHG is far more sensitive to the symmetry of a crystal than the linear optical response. For example, in linear response a
cubic crystal and a glass cannot be distinguished; both produce an isotropic response. However, the second harmonic tensor χ(2)_ijk for a glass and a cubic crystal can readily be distinguished.
For single variant films, we use SHG to reveal the crystal symmetry. In the experiment, we set the polarization states of the fundamental and second harmonic beam to be either parallel or
perpendicular. Then we rotate the sample and measure the second harmonic output as a function of angle. The data that is obtained from a single variant, 111 oriented thin film of BFO is shown below
on the left hand side:
The symmetric six fold pattern is a fingerprint that unambiguously defines the crystal's point group as 3m, which means that you can twirl the crystal through 120 and 240 degrees about the 111 axis
without changing the positions of the atoms. Furthermore, there are three mirror planes that contain the 111 axis and are separated by angles of 120 degrees.
The polar plot on the above right shows an angular dependence that is quite intricate (and also is a rather pretty butterfly). This is what we obtain doing the same experiment, this time on a thin
film grown in the 001 direction, also with a single variant of P. Despite the obvious difference, this pattern is also completely consistent with 3m symmetry. Demonstrating this is a fun exercise in
transforming 3rd rank tensors (see technical note).
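Showing this amounts to rotating a rank-3 tensor, χ'_ijk = Σ R_il R_jm R_kn χ_lmn. Here is a minimal sketch of that bookkeeping; chi and R below are placeholders for illustration, not the measured BFO tensor.

// Rotate a rank-3 tensor chi by a 3x3 rotation matrix R:
// out[i][j][k] = sum over l,m,n of R[i][l] * R[j][m] * R[k][n] * chi[l][m][n]
function rotateRank3(chi, R) {
    const out = Array.from({ length: 3 }, () =>
        Array.from({ length: 3 }, () => new Array(3).fill(0)));
    for (let i = 0; i < 3; i++)
        for (let j = 0; j < 3; j++)
            for (let k = 0; k < 3; k++)
                for (let l = 0; l < 3; l++)
                    for (let m = 0; m < 3; m++)
                        for (let n = 0; n < 3; n++)
                            out[i][j][k] += R[i][l] * R[j][m] * R[k][n] * chi[l][m][n];
    return out;
}

// A rotation by 120 degrees about z; for a tensor with 3m symmetry about z,
// rotateRank3(chi, R120) should return chi unchanged.
const c = Math.cos(2 * Math.PI / 3), s = Math.sin(2 * Math.PI / 3);
const R120 = [[c, -s, 0], [s, c, 0], [0, 0, 1]];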
After gaining a thorough understanding of the SHG response in single domain (or variant) films, we are turning to dynamical measurements of domain wall motion in two-variant films. The polar plot
(below left) is obtained from a two-variant 001 film. The fit to the data is achieved with a coherent superposition of the response from equal volume fractions of the two-variants. The middle plot
simulates the SHG response of a sample with equal volume fractions and a sample in which the volume fraction is imbalanced by 5%. Finally, the plot on the right shows the resulting difference
signal. We expect that an electric field, applied in a direction such that the free energy of the two variants becomes inequivalent, will cause one variant to grow at the expense of the other.
Measuring SHG as a function of electric field modulation (varying both frequency and amplitude) has great potential for studying the dynamics of domain wall motion in this model multiferroic system.
Far-infrared spectroscopy of multiferroics
Multiferroics are systems of fundamental interest because in their ordered state both time reversal symmetry and space inversion symmetry are spontaneously broken. This is a fancy way of saying that
the material is simultaneously an antiferromagnet and a ferroelectric. Breaking both symmetries allows new interactions to occur. Perhaps most striking is the magnetoelectric effect, the presence
of cross coupling terms in the susceptibility tensor. Normally we imagine that the constitutive relations of the medium can be expressed in the form P = χ^{e}E and M = χ^{m}H. We can also express this in a tensor form, as

$$\begin{pmatrix} P \\ M \end{pmatrix} = \begin{pmatrix} \chi^{e} & \chi^{em} \\ \chi^{me} & \chi^{m} \end{pmatrix} \begin{pmatrix} E \\ H \end{pmatrix}.$$
Notice that we have included off-diagonal terms that don’t appear in the standard textbooks. But after all, why shouldn’t P depend on H and M depend on E? The reason that these off-diagonal terms
are zero for most materials goes back to time-reversal and inversion symmetry. For example, P changes sign with spatial inversion but is invariant with respect to time reversal. H on the other
hand, has just the opposite properties: invariant under inversion, changing sign on time reversal. Now consider the relation P = χ^{em}H. Perform the operation of spatial inversion on both sides of the equation, sending P → -P and H → H. If the material possesses inversion symmetry, then (together with all material properties) χ^{em} must remain the same, yielding the relation -P = χ^{em}H, which is consistent with the original relation only if χ^{em} = 0. The reasoning is the same for the time-reversal operation. We see that the condition for nonzero off-diagonal terms in the constitutive tensor is that
the material must break both inversion and time-reversal symmetry.
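The same bookkeeping can be written in one line (a compact restatement of the argument just given):

$$\mathcal{I}:\; P \to -P,\; H \to H \;\Longrightarrow\; \left\{\, P = \chi^{em}H \ \text{ and }\ -P = \chi^{em}H \,\right\} \;\Longrightarrow\; \chi^{em} = 0,$$

unless the material breaks inversion symmetry; the time-reversal version runs the same way with $P \to P$ and $H \to -H$.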
The potential for coupling M and E is what makes multiferroics interesting from fundamental and applied points of view. Measuring the coupling with static probes can be difficult. This is
especially true for BFO where the large background conductivity complicates measurements of polarization. One of the most promising ways to probe the coupling of electricity and magnetism in BFO is
through the modes of oscillation of the different order parameters. With the onset of antiferromagnetism a new low-frequency mode, the antiferromagnetic resonance (AFMR), appears. For a perfectly
isotropic antiferromagnet this mode is at zero frequency. In most materials spin-orbit coupling breaks the isotropy and the AFMR acquires a nonzero frequency, even in the absence of an applied
magnetic field. In a typical antiferromagnet, AFMR appears in optical reflectivity or transmission through the coupling of the magnetic dipole moment of the spins to the magnetic field of the
light. The signature of coupling via the magnetic dipole moment is quite pretty and is discussed below.
Let’s remind ourselves of the reflectivity resulting from the typical electric-dipole response (above left), for example an infrared active phonon. The reflectivity has the classic “dispersive”
shape. Starting from high frequency, there is a dip, a rapid rise, then a gradual decrease. Surprisingly enough, the pattern is just the opposite for a mode that couples to the magnetic component of
the light through the magnetic moment (shown above right).
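Both shapes follow from the normal-incidence reflectivity of a medium with both a dielectric and a magnetic response (standard optics, written here for reference rather than as a fit to our data):

$$R(\omega) = \left|\frac{\sqrt{\varepsilon(\omega)} - \sqrt{\mu(\omega)}}{\sqrt{\varepsilon(\omega)} + \sqrt{\mu(\omega)}}\right|^{2},$$

so placing a Lorentz oscillator, $1 + S\,\omega_0^2/(\omega_0^2 - \omega^2 - i\gamma\omega)$, in $\varepsilon(\omega)$ gives the electric-dipole lineshape, while placing the same oscillator in $\mu(\omega)$ flips it, since $\varepsilon$ and $\mu$ enter the formula with opposite signs.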
Recently, we have been studying the far-infrared optical response of BFO crystals and thin films. Working together with the group of Prof. Zack Schlesinger at UC Santa Cruz, we have discovered a
low-frequency mode that had not been reported previously. The frequency of the mode, about 20 cm^-1 or 600 GHz, is squarely in the spectral region where the AFMR modes of materials whose magnetic structure is similar to BFO's, for example the orthoferrites RFeO3, appear. However, this mode shows a frequency-dependent reflectivity that is characteristic of an electric-dipole active mode.
The graph above left illustrates the temperature dependence of the reflectivity. The graph on the right compares the reflectivity at room temperature with a fit to a sum of Lorentzian oscillators.
The lowest energy feature is the new one. Note that it has the shape of an electric-dipole active mode despite being well below the frequency of optically active phonons. We have further evidence of
the strength of this mode from THz spectroscopy. Below left is the THz transmission spectrum of a crystal of BFO. We compare this with the same experiment performed on a typical orthoferrite, TbFeO3. The single sharp peak in TbFeO3 is known to be caused by AFMR. Note that the absorption in BFO takes place in the same frequency range, but is much stronger.
Currently we believe the mode in BFO is a form of “electromagnon,” that is, a mode of the spin system that acquires an electric dipole moment as a result of magnetoelectric coupling. By studying this
mode carefully, we hope to learn the origin and the strength of the magnetoelectric coupling in BFO.
Linear Accelerator (Gauss Rifle)
This is a little bit of a mouthful but any help would be greatly appreciated. [:)]
I am building a linear accelerator similar to the one found on this site:
This is for a physics 12 project with my friends, but only it is on a larger scale. The project involves launching a regular tennis ball as far as possible without using combustion or fuels. We are
constructing it as follows: A copper pipe is cut in half to create the "grooved" barrel where the steel balls will roll. (We are using copper because it has a low coefficient of friction.) At one end
is a compressed spring which will release the first steel ball towards the first magnet. (Magnets are 1.0" x .125" neodymium-iron-boron (NdFeB) rare-earth permanent magnets fixed between two washers
for safety and enhanced strength.) There will be 5-6 magnets with an undetermined spacing between them. We are using 10-12 stainless steel balls with a diameter of 1.0" weighing at approximately
66.4g or 0.0664kg we think. This entire barrel will be placed in an aluminum pipe, also cut in half, so that the copper barrel will allow the last steel ball to perform an inelastic collision against the tennis ball directly (thus an oblique collision is avoided). Because of projectile motion we will be attempting to fire this at an angle (probably 45 degrees, or a little less if wind resistance
occurs) to achieve a maximum distance.
My questions are as follows: 1)How can we find the resultant velocity of the tennis ball(Please supply me with formulas I won't bother you with all the exact measurements, and besides I don't have
all of them.)?
2) How can we calculate the pulling force of our magnets' fields (in newtons) at a distance in meters?
For question 1:
A) I know that we need to find the spring constant (k) in newtons per meter of our spring using Hooke's Law (F=kx), which is easily done.
B) Then we use (k) to find the Elastic Potential Energy for our spring (EPE=1/2kx^2), where (x) is the spring's compression in meters; EPE is measured in joules.
C)Then we have to deal with Static Friction. (I need help here!)
D)Followed by drawing an incline plane problem at 45 degrees, where the Elastic Potential Energy is converted into Kinetic Energy (KE)(Both in joules). The kinetic energy and the attraction force of
the magnet (this is where question 2 comes in) are pulling the steel ball up the slope after the spring system is released. The kinetic friction and acceleration due to gravity are pulling the ball
down the slope and into the slope, resulting in a loss of velocity. [Normal force is mgcos(45), where m is mass (kg) and g is gravitational acceleration (9.8 m/s^2).]
E)When the ball does hit the magnet most of the kinetic energy is transferred to the furthest ball on the other side of the magnet, where it in turn overcomes static friction. (More static friction
help needed!)
F) Another incline plane problem, with KE and the 2nd magnet's attraction force pulling uphill. The acceleration due to gravity, kinetic friction, and the 1st magnet's attraction force slow the ball.
G) Parts E and F repeat until the inelastic collision of the last steel ball against the tennis ball, where we dip into momentum: p = mv, where p is momentum in kg·m/s, m is mass in kg, and v is velocity in m/s. Because we have two objects (one at rest) we use the equation p = (m1)(v1) + (m2)(v2), where mass 2 is at rest (the tennis ball). Using momentum we then find the resulting velocity of the tennis ball!
Other formula: Ffr = (mu)N, where Ffr is the force of friction in newtons, (mu) is the coefficient of friction, and (N) is the normal force.
Ffr=(mu)mg involving force of friction, coefficient of friction, mass and acceleration due to gravity. (But someone told me this is for objects kept horizontal?)
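Here is how I'm trying to chain steps A through G together numerically (every number below is a placeholder I made up, the magnet pulls from question 2 are left out for now, and I'm treating the final hit as perfectly inelastic):

```python
import math

# Placeholder values only, not measured data.
k      = 400.0     # spring constant, N/m (from F = kx)
x      = 0.10      # spring compression, m
m_ball = 0.0664    # steel ball mass, kg
m_tb   = 0.058     # tennis ball mass, kg
mu_k   = 0.05      # guessed kinetic friction coefficient, steel on copper
theta  = math.radians(45)
d      = 0.15      # distance rolled along the barrel, m
g      = 9.8       # m/s^2

# Spring EPE minus friction and gravity losses along the 45-degree incline:
E = 0.5 * k * x**2
E -= (mu_k * m_ball * g * math.cos(theta) + m_ball * g * math.sin(theta)) * d
v_ball = math.sqrt(2 * E / m_ball)           # speed of the last steel ball

# Perfectly inelastic hit on the tennis ball (momentum conserved):
v_tb = m_ball * v_ball / (m_ball + m_tb)

# Ideal no-air-resistance range at launch angle theta:
R = v_tb**2 * math.sin(2 * theta) / g
print(v_ball, v_tb, R)
```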
That really was a mouthful, it covers a lot of different physics topics too! [:)] Please inform me if I'm even approaching this right? Thanks in advance. | {"url":"http://www.physicsforums.com/showthread.php?p=110083","timestamp":"2014-04-16T07:36:02Z","content_type":null,"content_length":"46745","record_id":"<urn:uuid:e406b8bc-235f-47a8-bd3d-c30b32d7969a>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00235-ip-10-147-4-33.ec2.internal.warc.gz"} |
Multistage negotiation for distributed constraint satisfaction
Results 1 - 10 of 70
- IEEE Transactions on Knowledge and Data Engineering , 1998
"... In this paper, we develop a formalism called a distributed constraint satisfaction problem (distributed CSP) and algorithms for solving distributed CSPs. A distributed CSP is a constraint
satisfaction problem in which variables and constraints are distributed among multiple agents. Various applica ..."
Cited by 270 (22 self)
Add to MetaCart
In this paper, we develop a formalism called a distributed constraint satisfaction problem (distributed CSP) and algorithms for solving distributed CSPs. A distributed CSP is a constraint
satisfaction problem in which variables and constraints are distributed among multiple agents. Various application problems in Distributed Artificial Intelligence can be formalized as distributed
CSPs. We present our newly developed technique called asynchronous backtracking that allows agents to act asynchronously and concurrently without any global control, while guaranteeing the
completeness of the algorithm. Furthermore, we describe how the asynchronous backtracking algorithm can be modified into a more efficient algorithm called an asynchronous weak-commitment search,
which can revise a bad decision without exhaustive search by changing the priority order of agents dynamically. The experimental results on various example problems show that the asynchronous
weak-commitment search algorithm ...
- In CP , 2000
"... . When multiple agents are in a shared environment, there usually exist constraints among the possible actions of these agents. A distributed constraint satisfaction problem (distributed CSP) is
a problem to find a consistent combination of actions that satisfies these inter-agent constraints. Vario ..."
Cited by 203 (7 self)
Add to MetaCart
. When multiple agents are in a shared environment, there usually exist constraints among the possible actions of these agents. A distributed constraint satisfaction problem (distributed CSP) is a
problem to find a consistent combination of actions that satisfies these inter-agent constraints. Various application problems in multi-agent systems can be formalized as distributed CSPs. This paper
gives an overview of the existing research on distributed CSPs. First, we briefly describe the problem formalization and algorithms of normal, centralized CSPs. Then, we show the problem
formalization and several MAS application problems of distributed CSPs. Furthermore, we describe a series of algorithms for solving distributed CSPs, i.e., the asynchronous backtracking, the
asynchronous weak-commitment search, the distributed breakout, and distributed consistency algorithms. Finally, we show two extensions of the basic problem formalization of distributed CSPs, i.e., handling multiple local variables, and dealing with over-constrained problems. Keywords: Constraint Satisfaction, Search, distributed AI
, 1991
"... The Functionally-Accurate, Cooperative (FA/C) paradigm provides a model for task decomposition and agent interaction in a distributed problem-solving system. In this model, agents need not have
all the necessary information locally to solve their subproblems, and agents interact through the asynchro ..."
Cited by 96 (25 self)
Add to MetaCart
The Functionally-Accurate, Cooperative (FA/C) paradigm provides a model for task decomposition and agent interaction in a distributed problem-solving system. In this model, agents need not have all
the necessary information locally to solve their subproblems, and agents interact through the asynchronous, co-routine exchange of partial results. This model leads to the possibility that agents may
behave in an uncoordinated manner. This paper traces the development of a series of increasingly sophisticated cooperative control mechanisms for coordinating agents. They include integrating data-
and goal-directed control, using static meta-level information specified by an organizational structure, and using dynamic meta-level information developed in partial global planning. The framework
of distributed search motivates these developments. Major themes of this work are the importance of sophisticated local control, the interplay between local control and cooperative control, and the
use of s...
, 1995
"... Coordination, as the act of managing interdependencies between activities, is one of the central research issues in Distributed Artificial Intelligence. Many researchers have shown that there is
no single best organization or coordination mechanism for all environments. Problems in coordinating the ..."
Cited by 88 (18 self)
Add to MetaCart
Coordination, as the act of managing interdependencies between activities, is one of the central research issues in Distributed Artificial Intelligence. Many researchers have shown that there is no
single best organization or coordination mechanism for all environments. Problems in coordinating the activities of distributed intelligent agents appear in many domains: the control of distributed
sensor networks; multi-agent scheduling of people and/or machines; distributed diagnosis of errors in local-area or telephone networks; concurrent engineering; `software agents' for information
gathering. The design of coordination mechanisms for group...
"... The distributed coordination problem can be described as how should the local scheduling of activities at each agent be affected by non-local concerns and constraints. Partial global planning
(PGP) is a flexible approach to distributed coordination that allows agents to respond dynamically to thei ..."
Cited by 87 (27 self)
Add to MetaCart
The distributed coordination problem can be described as how should the local scheduling of activities at each agent be affected by non-local concerns and constraints. Partial global planning (PGP)
is a flexible approach to distributed coordination that allows agents to respond dynamically to their current situation. It is based on detecting relationships in the computational goal structures of
the distributed agents. However, the detailed PGP mechanisms depend on the existence and availability of certain characteristics and structures ...
, 1996
"... This paper presents a new algorithm for solving distributed constraint satisfaction problems (distributed CSPs) called the distributedbreakout algorithm, which is inspired by the breakout
algorithm for solving centralized CSPs. In this algorithm, each agent tries to optimize its evaluation valu ..."
Cited by 87 (14 self)
Add to MetaCart
This paper presents a new algorithm for solving distributed constraint satisfaction problems (distributed CSPs) called the distributedbreakout algorithm, which is inspired by the breakout algorithm
for solving centralized CSPs. In this algorithm, each agent tries to optimize its evaluation value (the number of constraint violations) by exchanging its current value and the possible amount of its
improvement among neighboring agents. Instead of detecting the fact that agents as a whole are trapped in a local-minimum, each agent detects whether it is in a quasi-local-minimum, which is a weaker
condition than a local-minimum, and changes the weights of constraint violations to escape from the quasi-local-minimum. Experimental evaluations show this algorithm to be much more efficient than
existing algorithms for critically difficult problem instances of distributed graph-coloring problems.
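As a rough illustration of the mechanism this abstract describes (not the authors' code; synchronization is centrally simulated and ties are broken by node id), a toy sketch of weighted breakout on graph coloring might look like:

```python
import random

def distributed_breakout(neighbors, colors, steps=500, seed=0):
    """Toy sketch of the distributed-breakout idea for graph coloring.
    neighbors: dict node -> list of adjacent nodes (nodes must be orderable)."""
    rng = random.Random(seed)
    value = {v: rng.choice(colors) for v in neighbors}
    weight = {frozenset((u, v)): 1 for u in neighbors for v in neighbors[u]}

    def cost(v, c):  # weighted number of violated constraints at v with color c
        return sum(weight[frozenset((v, u))] for u in neighbors[v] if value[u] == c)

    for _ in range(steps):
        if all(cost(v, value[v]) == 0 for v in neighbors):
            return value
        # Each agent announces its best possible local improvement.
        best = {}
        for v in neighbors:
            c_new = min(colors, key=lambda c: cost(v, c))
            best[v] = (cost(v, value[v]) - cost(v, c_new), c_new)
        for v in neighbors:
            gain, c_new = best[v]
            # Only the locally maximal improver moves (ties broken by node id).
            if gain > 0 and all((gain, v) >= (best[u][0], u) for u in neighbors[v]):
                value[v] = c_new
            elif gain == 0 and cost(v, value[v]) > 0:
                # Quasi-local-minimum: raise weights on violated constraints.
                for u in neighbors[v]:
                    if value[u] == value[v]:
                        weight[frozenset((v, u))] += 1
    return None

print(distributed_breakout({0: [1, 2], 1: [0, 2], 2: [0, 1]}, colors=[0, 1, 2]))
```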
, 1999
"... Abstract. Distributed problem solving involves the collective effort of multiple problems solvers to combine their knowledge, information, and capabilities so as to develop solutions to problems
that each could not have solved as well (if at all) alone. The challenge in distributed problem solving i ..."
Cited by 74 (0 self)
Add to MetaCart
Abstract. Distributed problem solving involves the collective effort of multiple problems solvers to combine their knowledge, information, and capabilities so as to develop solutions to problems that
each could not have solved as well (if at all) alone. The challenge in distributed problem solving is thus in marshalling the distributed capabilities in the right ways so that the problem solving
activities of each agent complement the activities of the others, so as to lead efficiently to effective solutions. Thus, while working together leads to distributed problem solving, there is also
the distributed problem of how to work together that must be solved. We consider that problem to be a distributed planning problem, where each agent must formulate plans for what it will do that take
into account (sufficiently well) the plans of other agents. In this paper, we characterize the variations of distributed problem solving and distributed planning, and summarize some of the basic
techniques that have been developed to date. 1
, 1998
"... A distributed constraint satisfaction problem can formalize various application problems in MAS, and several algorithms for solving this problem have been developed. One limitation of these
algorithms is that they assume each agent has only one local variable. Although simple modifications enable th ..."
Cited by 70 (9 self)
Add to MetaCart
A distributed constraint satisfaction problem can formalize various application problems in MAS, and several algorithms for solving this problem have been developed. One limitation of these
algorithms is that they assume each agent has only one local variable. Although simple modifications enable these algorithms to handle multiple local variables, obtained algorithms are neither
efficient nor scalable to larger problems. We develop a new algorithm that can handle multiple local variables efficiently, which is based on the asynchronous weak-commitment search algorithm. In
this algorithm, a bad local solution can be modified without forcing other agents to exhaustively search local problems. Also, the number of interactions among agents can be decreased since agents
communicate only when they find local solutions that satisfy all of the local constraints. Experimental evaluations show that this algorithm is far more efficient than an algorithm that uses the
prioritization among agents. 1
- Autonomous Agents and Multi-Agent Systems , 1998
"... The development of enabling infrastructure for the next generation of multi-agent systems consisting of large numbers of agents and operating in open environments is one of the key challenges
for the multi-agent community. Current infrastructure support does not materially assist in the development ..."
Cited by 63 (11 self)
Add to MetaCart
The development of enabling infrastructure for the next generation of multi-agent systems consisting of large numbers of agents and operating in open environments is one of the key challenges for the
multi-agent community. Current infrastructure support does not materially assist in the development of sophisticated agent coordination strategies. It is the need for and the development of such a
high-level support structure that will be the focus of this paper. A domain-independent (generic) agent architecture is proposed that wraps around an agent’s problem-solving component in order to
make problem-solving responsive to real-time constraints, available network resources and the need to coordinate — both in the large and small, with problem-solving activities of other agents. This
architecture contains five components, local agent scheduling, multi-agent coordination, organizational design, detection and diagnosis and on-line learning, that are designed to interact so that a
range of different situation-specific coordination strategies can be implemented and adapted as the situation evolves. The presentation of this architecture is followed by a more detailed discussion
on the interaction among these components and the
- Principles and Practice of Constraint Programming , 1997
"... Abstract. Many problems in multi-agent systems can be described as distributed Constraint Satisfaction Problems (distributed CSPs), where the goal is to nd a set of assignments to variables that
satis es all constraints among agents. However, when real problems are formalized as distributed CSPs, th ..."
Cited by 62 (13 self)
Add to MetaCart
Abstract. Many problems in multi-agent systems can be described as distributed Constraint Satisfaction Problems (distributed CSPs), where the goal is to find a set of assignments to variables that satisfies all constraints among agents. However, when real problems are formalized as distributed CSPs, they are often over-constrained and have no solution that satisfies all constraints. This paper provides the Distributed Partial Constraint Satisfaction Problem (DPCSP) as a new framework for dealing with over-constrained situations. We also present new algorithms for solving Distributed Maximal Constraint Satisfaction Problems (DM-CSPs), which belong to an important class of DPCSP. The algorithms are called the Synchronous Branch and Bound (SBB) and the Iterative Distributed Breakout (IDB). Both algorithms were tested on hard classes of over-constrained random binary distributed CSPs. The results can be summarized as: SBB is preferable when we are mainly concerned with the optimality of a solution, while IDB is preferable when we want to get a nearly optimal solution quickly. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=39431","timestamp":"2014-04-18T01:51:34Z","content_type":null,"content_length":"39970","record_id":"<urn:uuid:d5b4511b-b0f2-44f5-ab55-16bba69bf0a6>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
The Problem with Risk Assessment
People worry about far-fetched outcomes, like a plane crash, which will likely never happen to you in your lifetime. But they ignore real risks, like a car crash - which you will inevitably be
involved in, within your lifetime.
Risk assessment is a difficult thing for most people. Most of us assess personal risk based on emotion, and not statistics or the laws of probability. Thus, for example, many people are afraid of flying, as they believe there is a real risk of a plane crash. While statistically, they are at greater
risk for getting killed in a car crash on the way to the airport.
Our assessment of risk, based on emotional response, is skewed. We drive cars every day and rarely get involved in a crash (but most people will be in one or more in their lifetimes, I've been in
three so far). And yet airline crashes are very, very rare and given the huge number of flights and miles flown, the odds of you being in a plane crash are pretty infinitesimal.
Familiarization is one aspect of this skewed risk-assessment. Cars are part of our every day life, and they appear harmless, so we discount the risk. And Control is, I think, a second factor. People
believe (wrongly, to some extent) that they have more control over their car, and thus can take actions to avoid an accident. In a plane, someone else is driving.
But the control theory is wrong, of course. Oftentimes the people who are "afraid of flying" are the worst drivers in the world - rolling stop signs, speeding, drinking a latte and texting while on
the Interstate. Yes, they have control, but it would be better if they didn't.
Again, these are emotionally-based assessments of risk, not mathematical ones. And yet most of us use emotions in risk assessment, but rarely use mathematics. To be sure, this is partially due to the
lack of education in this country regarding mathematics and in particular, probability. And to some extent, it is due to the widespread belief in superstition, which often trumps even religion in
this country. Ask the ordinary citizen, and they will tell you they believe in things like "luck" and "premonitions".
But basic mathematics? Only wizards in pointy hats believe in that!
The emotional basis of risk assessment is to blame, in large part, for the recent economic downturn. People did not "do the math" on these crazy mortgages and crazy home prices, but instead were
willing to believe that others would pay them even more for homes later on.
Meanwhile, folks who thought rationally came to the conclusion that when no one could afford to buy a home, people would stop buying them - and bad things would pile up in a hurry. And they did. It
is not like many of us did not see this coming - although we all hoped it would be a "soft" crash and not the hard one it turned out to be.
And this irrational assessment of risk continues on today. People are buying Gold on the premise that the Government will fall apart and we will be in chaos within a few years - a very unlikely
outcome, given the history of this country, which has survived far worse downturns. And yet the very, very likely outcome that the gold bubble will burst (as it did in 1982) is discounted as
far-fetched. Emotional assessment of risk is, well, risky.
Things like the gambler's fallacy also play into people's emotional assessment of risk and probability. I've had gamblers tell me that they have "winning streaks" and that you just have to wait until your "luck" comes around. Slot
machine players will tell you they can figure out when a machine is going to "pay off" simply by watching to see whether a machine hasn't paid off in a while and is "due" to pay off. What these folks
don't understand is that probability doesn't work that way, and unrelated probabilistic events cannot be chained together.
Consider the simplest probabilistic event - a coin toss. I would not have a hard time convincing you that, for a normal coin, the odds of it landing heads up or tails up would be 50%. If you toss a
coin, there is a 50-50 chance it will be heads.
Now, if you toss a coin 100 times, chances are, 50 times it will be heads and 50 times it will be tails. And most people go along with that. But within that 100 tosses, there may be a string of heads
or tails in a row. This is to be expected, and is not some "streak" of headness or tailness that can predict anything. If you flip 10 "heads" in a row, it does not mean you are "due" a tails anytime
soon, or that the likelihood of "tailness" has gone up. Each flip is still a 50/50 proposition, and even if you flip 100 heads in a row, the next flip still has a 50/50 chance of heads or tails.
It is a probabilistic distribution. Suppose you have a classroom of 100 students and have them each flip a coin 100 times. Most will flip about 50 heads and 50 tails. But a few will be skewed toward a
70/30 mix (or 30/70) and if you plotted out the distribution, it would follow a Bell Curve.
A sample Bell Curve distribution. No relation to the author.
So you might have one student who flips mostly heads and another who flips mostly tails. But that does not mean either is "lucky" or has some inherent "headness" or "tailness" in their flipping. And it does not mean that they are "due" for another heads or tails or the opposite. Each successive flip is... a 50/50 chance of either outcome.
You see, if you take all the 100 results from all 100 students and put them together (10,000 flips), chances are, the overall ratio would be very close to 50/50. And if the student who flipped 30/70 kept flipping 100 more times, or 1000 more times, chances are, his ratio would eventually tend toward the 50/50 mark.
But most folks fail to grasp this simple probability model. They assume that the student who flips 10 "heads" in a row is either "lucky" and will continue to flip heads, or the contrary, that he is
"due" for a tails anytime soon. Either way, they will tell you that the "odds" of the next flip being heads or tails is not 50/50, but rather somehow skewed by the previous flips.
But think about it, suppose he hands the coin to another person who is making their first flip? Does this "luck" transfer? Of course not. Each event is independent and unique, and the odds stick at a
stubborn 50/50. Sorry.
So gamblers who think they are on a "winning streak" or having a "lucky run" are just lying to themselves. The cards, the roulette wheel, or the slot machine, are all randomized events with no
relation to previous deals, spins, or pulls. And yet, it is tempting, even for educated people, to believe otherwise.
And that is the point of this posting. Not to say that people are stupid idiots who blithely ignore the laws of probability in daily decision-making, but that most of us do this on a regular basis. We all secretly believe in "luck" or predestination or whatever. And even the most astute mathematician might buy a lottery ticket now and again.
The key, I think, is to understand when you are being superstitious and using emotional assessment of risks, versus mathematical ones. And when you find yourself making more and more important
decisions in your life based on emotional needs, you have to watch out.
As I noted time and time again in this blog, to make money in America, you don't need to be a financial genius, but merely act rationally in an irrational world. If you can be the person who sees through the fog and snags a good bargain, while others are fearfully shying away, you can make out like a bandit.
People who bought real estate in 1995, when it was unpopular, made a lot of money. People who bought it in 2005, lost their shirts. The same is true for gold, stocks, or whatever else. The folks who
rationally look at investments and say "Gee, after doing the math, this seems like an underpriced bargain" do well. The folks who say, "Hey, everyone is making money in Gold, I should buy some too!"
get fleeced. | {"url":"http://livingstingy.blogspot.com/2011/03/problem-with-risk-assessment.html","timestamp":"2014-04-17T12:30:38Z","content_type":null,"content_length":"73701","record_id":"<urn:uuid:4f5fbfb5-062f-48f1-8d2b-03505b0b5849>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00291-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding the N-th digit of Pi
Here is a very interesting formula for pi, discovered by David Bailey, Peter Borwein, and Simon Plouffe in 1995:
Pi = SUM[k=0 to infinity] 16^-k [ 4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6) ].
The reason this pi formula is so interesting is because it can be used to calculate the N-th digit of Pi (in base 16) without having to calculate all of the previous digits!
Moreover, one can even do the calculation in a time that is essentially linear in N, with memory requirements only logarithmic in N. This is far better than previous algorithms for finding the N-th
digit of Pi, which required keeping track of all the previous digits!
Presentation Suggestions:
You might start off by asking students how they might calculate the 100-th digit of pi using one of the other pi formulas they have learned. Then show them this one...
The Math Behind the Fact:
Here's a sketch of how the BBP formula can be used to find the N-th hexadecimal digit of Pi. For simplicity, consider just the first of the sums in the expression, and multiply this by 16^N. We are
interested in the fractional part of this expression. The numerator of a given term in this sum is 16^(N-k), and it can be evaluated very easily mod (8k+1) using a binary algorithm for exponentiation.
Division by (8k+1) is straightforward via floating point arithmetic. Not many more than N terms of this sum need be evaluated, since the numerator decreases very quickly as k gets large so that terms
become negligible. The other sums in the BBP formula are handled similarly. This yields the hexadecimal expansion of Pi starting at the (N+1)-th digit. More details can be found in the
Bailey-Borwein-Plouffe reference.
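Here is a short Python sketch of the digit-extraction idea just described (an illustration, not the authors' implementation; position d = 0 returns the first hex digit after the point):

```python
def pi_hex_digit(d):
    """Hex digit of pi at position d after the point (d = 0 gives '2',
    since pi = 3.243F6A88... in base 16)."""
    def frac_series(j):
        # Fractional part of the sum over k >= 0 of 16^(d-k) / (8k + j).
        s = 0.0
        for k in range(d + 1):
            # 16^(d-k) mod (8k+j) via fast modular exponentiation keeps terms small.
            s = (s + pow(16, d - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = d + 1
        while True:  # tail terms with k > d shrink like 16^(d-k)
            term = 16.0 ** (d - k) / (8 * k + j)
            if term < 1e-17:
                break
            s = (s + term) % 1.0
            k += 1
        return s

    x = (4 * frac_series(1) - 2 * frac_series(4)
         - frac_series(5) - frac_series(6)) % 1.0
    return "0123456789ABCDEF"[int(x * 16)]

print("".join(pi_hex_digit(d) for d in range(10)))  # 243F6A8885
```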
The BBP formula was discovered using the PSLQ Integer Relation Algorithm. However, the Adamchik-Wagon reference shows how similar relations can be discovered in a way that the proof accompanies the
discovery, and gives a 3-term formula for a base 4 analogue of the BBP result.
How to Cite this Page:
Su, Francis E., et al. "Finding the N-th digit of Pi." Math Fun Facts. <http://www.math.hmc.edu/funfacts>. | {"url":"http://www.math.hmc.edu/funfacts/ffiles/20010.5.shtml","timestamp":"2014-04-18T15:40:03Z","content_type":null,"content_length":"21574","record_id":"<urn:uuid:03d8f7b7-b61e-458b-b17d-0cd7f0c16483>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00594-ip-10-147-4-33.ec2.internal.warc.gz"} |
the encyclopedic entry of Constructivist logic
Intuitionistic logic, or constructivist logic, is the symbolic logic system originally developed by Arend Heyting to provide a formal basis for Brouwer's programme of intuitionism. The system preserves justification, rather than truth, across transformations yielding derived propositions. From a practical point of view, there is also a strong motivation for using intuitionistic logic, since it has the existence property, making it also suitable for other forms of mathematical constructivism.
The syntax of formulæ of intuitionistic logic is similar to propositional logic or first-order logic. However, intuitionistic connectives are not interdefinable in the same way as in classical logic,
hence their choice matters. In intuitionistic propositional logic it is customary to use →, ∧, ∨, ⊥ as the basic connectives, treating ¬φ as an abbreviation for (φ → ⊥). In intuitionistic first-order logic both quantifiers ∃, ∀ are needed.
Many tautologies of classical logic can no longer be proven within intuitionistic logic. Examples include not only the law of excluded middle (φ ∨ ¬φ), but also Peirce's law (((φ → χ) → φ) → φ), and even double negation elimination. In classical logic, both φ → ¬¬φ and also ¬¬φ → φ are theorems. In intuitionistic logic, only the former is a theorem: double negation can be introduced, but it cannot be eliminated.
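For instance, the provable direction is a one-line constructive proof (a Lean 4 sketch, not part of the original entry):

```lean
-- Double-negation introduction is constructively valid; the converse,
-- ¬¬φ → φ, has no such proof without the law of excluded middle.
theorem double_negation_intro (φ : Prop) : φ → ¬¬φ :=
  fun h hn => hn h
```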
The observation that many classically valid tautologies are not theorems of intuitionistic logic leads to the idea of weakening the proof theory of classical logic.
Sequent calculus
Gentzen discovered that a simple restriction of his system LK (his sequent calculus for classical logic) results in a system which is sound and complete with respect to intuitionistic logic. He called this system LJ.
Hilbert-style calculus
Intuitionistic logic can be defined using the following
Hilbert-style calculus
. Compare with the deduction system at
Propositional calculus#Alternative calculus
In propositional logic, the inference rule is modus ponens
• MP: from φ and φ → ψ infer ψ
and the axioms are
• THEN-1: φ → (χ → φ)
• THEN-2: (φ → (χ → ψ)) → ((φ → χ) → (φ → ψ))
• AND-1: φ ∧ χ → φ
• AND-2: φ ∧ χ → χ
• AND-3: φ → (χ → (φ ∧ χ))
• OR-1: φ → φ ∨ χ
• OR-2: χ → φ ∨ χ
• OR-3: (φ → ψ) → ((χ → ψ) → (φ ∨ χ → ψ))
• FALSE: ⊥ → φ
To make this a system of first-order predicate logic, the generalization rules
• ∀-GEN: from ψ → φ infer ψ → (∀x φ), if x is not free in ψ
• ∃-GEN: from φ → ψ infer (∃x φ) → ψ, if x is not free in ψ
are added, along with the axioms
• PRED-1: (∀x φ(x)) → φ(t), if no free occurrence of x in φ is bound by a quantifier quantifying a variable occurring in the term t
• PRED-2: φ(t) → (∃x φ(x)), with the same restriction as for PRED-1
Optional connectives
If one wishes to include a connective ¬ for negation rather than consider it an abbreviation for φ → ⊥, it is enough to add:
• NOT-1′: (φ → ⊥) → ¬φ
• NOT-2′: ¬φ → (φ → ⊥)
There are a number of alternatives available if one wishes to omit the connective ⊥ (false). For example, one may replace the three axioms FALSE, NOT-1′, and NOT-2′ with the two axioms
• NOT-1: (φ → χ) → ((φ → ¬χ) → ¬φ)
• NOT-2: φ → (¬φ → χ)
as at Propositional calculus#Axioms. Alternatives to NOT-1 are (φ → ¬χ) → (χ → ¬φ) or (φ → ¬φ) → ¬φ.
The connective ↔ for equivalence may be treated as an abbreviation, with φ ↔ χ standing for (φ → χ) ∧ (χ → φ). Alternatively, one may add the axioms
• IFF-1: (φ ↔ χ) → (φ → χ)
• IFF-2: (φ ↔ χ) → (χ → φ)
• IFF-3: (φ → χ) → ((χ → φ) → (φ ↔ χ))
IFF-1 and IFF-2 can, if desired, be combined into a single axiom (φ ↔ χ) → ((φ → χ) ∧ (χ → φ)) using conjunction.
Relation to classical logic
The system of classical logic is obtained by adding any one of the following axioms:
• φ ∨ ¬φ (Law of the excluded middle. May also be formulated as (φ → χ) → ((¬φ → χ) → χ).)
• ¬¬φ → φ (Double negation elimination)
• ((φ → χ) → φ) → φ (Peirce's law)
In general, one may take as the extra axiom any classical tautology that is not valid in the two-element Kripke frame $\circ \longrightarrow \circ$ (in other words, that is not included in Smetanich's logic).
Another relationship is given by the Gödel–Gentzen negative translation, which provides an embedding of classical first-order logic into intuitionistic logic: a first-order formula is provable in
classical logic if and only if its Gödel–Gentzen translation is provable intuitionistically. Therefore intuitionistic logic can instead be seen as a means of extending classical logic with
constructivist semantics.
Non-interdefinability of operators
In classical propositional logic, it is possible to take one of conjunction, disjunction, or implication as primitive, and define the other two in terms of it together with negation, such as in Łukasiewicz's three axioms of propositional logic. It is even possible to define all four in terms of a sole sufficient operator such as the Peirce arrow (NOR) or Sheffer stroke (NAND). Similarly, in classical first-order logic, one of the quantifiers can be defined in terms of the other and negation.
These are fundamentally consequences of the law of bivalence, which makes all such connectives merely boolean functions. The law of bivalence does not hold in intuitionistic logic, only the law of
non-contradiction. As a result none of the basic connectives can be dispensed with, and the above axioms are all necessary. Most of the classical identities are only theorems of intuitionistic logic
in one direction, although some are theorems in both directions. They are as follows:
Conjunction versus disjunction:
• $(\phi \wedge \psi) \to \neg(\neg\phi \vee \neg\psi)$
• $(\phi \vee \psi) \to \neg(\neg\phi \wedge \neg\psi)$
• $(\neg\phi \vee \neg\psi) \to \neg(\phi \wedge \psi)$
• $(\neg\phi \wedge \neg\psi) \leftrightarrow \neg(\phi \vee \psi)$
Conjunction versus implication:
• $(\phi \wedge \psi) \to \neg(\phi \to \neg\psi)$
• $(\phi \to \psi) \to \neg(\phi \wedge \neg\psi)$
• $(\phi \wedge \neg\psi) \to \neg(\phi \to \psi)$
• $(\phi \to \neg\psi) \leftrightarrow \neg(\phi \wedge \psi)$
Disjunction versus implication:
• $(\phi \vee \psi) \to (\neg\phi \to \psi)$
• $(\neg\phi \vee \psi) \to (\phi \to \psi)$
• $\neg(\phi \to \psi) \to \neg(\neg\phi \vee \psi)$
• $\neg(\phi \vee \psi) \leftrightarrow \neg(\neg\phi \to \psi)$
Universal versus existential quantification:
• $(\forall x\, \phi(x)) \to \neg(\exists x\, \neg\phi(x))$
• $(\exists x\, \phi(x)) \to \neg(\forall x\, \neg\phi(x))$
• $(\exists x\, \neg\phi(x)) \to \neg(\forall x\, \phi(x))$
• $(\forall x\, \neg\phi(x)) \leftrightarrow \neg(\exists x\, \phi(x))$
So, for example, "a or b" is a stronger statement than "if not a, then b", whereas these are classically interchangeable. On the other hand, "neither a nor b" is equivalent to "not a, and also not
If we include equivalence in the list of connectives, some of the connectives become definable from others:
• $(\phi \leftrightarrow \psi) \leftrightarrow ((\phi \to \psi) \land (\psi \to \phi))$
• $(\phi \to \psi) \leftrightarrow ((\phi \lor \psi) \leftrightarrow \psi)$
• $(\phi \to \psi) \leftrightarrow ((\phi \land \psi) \leftrightarrow \phi)$
• $(\phi \land \psi) \leftrightarrow ((\phi \to \psi) \leftrightarrow \phi)$
• $(\phi \land \psi) \leftrightarrow (((\phi \lor \psi) \leftrightarrow \psi) \leftrightarrow \phi)$
In particular, {∨, ↔, ⊥} and {∨, ↔, ¬} are complete bases of intuitionistic connectives.
As shown by Alexander Kuznetsov, either of the following defined connectives can serve the role of a sole sufficient operator for intuitionistic logic:
• $((p \lor q) \land \neg r) \lor (\neg p \land (q \leftrightarrow r)),$
• $p \to (q \land \neg r \land (s \lor t)).$
The semantics are rather more complicated than for the classical case. A model theory can be given by Heyting algebras or, equivalently, by Kripke semantics.
Heyting algebra semantics
In classical logic, we often discuss the truth values that a formula can take. The values are usually chosen as the members of a Boolean algebra. The meet and join operations in the Boolean algebra are identified with the ∧ and ∨ logical connectives, so that the value of a formula of the form A ∧ B is the meet of the value of A and the value of B in the Boolean algebra. Then we have the useful theorem that a formula is a valid sentence of classical logic if and only if its value is 1 for every valuation, that is, for any assignment of values to its variables.
A corresponding theorem is true for intuitionistic logic, but instead of assigning each formula a value from a Boolean algebra, one uses values from a Heyting algebra, of which Boolean algebras are a
special case. A formula is valid in intuitionistic logic if and only if it receives the value of the top element for any valuation on any Heyting algebra.
It can be shown that to recognize valid formulas, it is sufficient to consider a single Heyting algebra whose elements are the open subsets of the real line R. In this algebra, the ∧ and ∨ operations
correspond to set intersection and union, and the value assigned to a formula A → B is int(A^C ∪ B), the interior of the union of the value of B and the complement of the value of A. The bottom
element is the empty set ∅, and the top element is the entire line R. Negation is as usual defined as ¬A = A → ∅, so the value of ¬A reduces to int(A^C), the interior of the complement of the value
of A, also known as the exterior of A. With these assignments, intuitionistically valid formulas are precisely those that are assigned the value of the entire line.
For example, the formula ¬(A ∧ ¬A) is valid, because no matter what set X is chosen as the value of the formula A, the value of ¬(A ∧ ¬A) can be shown to be the entire line:
Value(¬(A ∧ ¬A)) =
int((Value(A ∧ ¬A))^C) =
int((Value(A) ∩ Value(¬A))^C) =
int((X ∩ int((Value(A))^C))^C) =
int((X ∩ int(X^C))^C)
A theorem of topology tells us that int(X^C) is a subset of X^C, so the intersection is empty, leaving:
int(∅^C) = int(R) = R
So the valuation of this formula is true, and indeed the formula is valid.
But the law of the excluded middle, A ∨ ¬A, can be shown to be invalid by letting the value of A be {y : y > 0 }. Then the value of ¬A is the interior of {y : y ≤ 0 }, which is {y : y < 0 }, and the
value of the formula is the union of {y : y > 0 } and {y : y < 0 }, which is {y : y ≠ 0 }, not the entire line.
The interpretation of any intuitionistically valid formula in the infinite Heyting algebra described above results in the top element, representing true, as the valuation of the formula, regardless
of what values from the algebra are assigned to the variables of the formula. Conversely, for every invalid formula, there is an assignment of values to the variables that yields a valuation that
differs from the top element. No finite Heyting algebra has both these properties.
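A finite Heyting algebra can still refute particular formulas, though. The following sketch (an illustration added here, not from the entry) evaluates the two formulas above on the three-element chain 0 < 1/2 < 1:

```python
# Three-element Heyting chain 0 < 1/2 < 1 (a minimal sketch).
TOP, MID, BOT = 1.0, 0.5, 0.0

def imp(a, b):        # Heyting implication on a chain: a -> b = 1 if a <= b else b
    return TOP if a <= b else b

def neg(a):           # not-a is defined as a -> 0
    return imp(a, BOT)

for a in (BOT, MID, TOP):
    print(a, max(a, neg(a)), neg(min(a, neg(a))))
# a = 1/2 gives a OR not-a = 1/2 (excluded middle fails),
# while not(a AND not-a) = 1 for every a.
```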
Kripke semantics
Building upon his work on semantics of modal logic, Saul Kripke created another semantics for intuitionistic logic, known as Kripke semantics or relational semantics.
Relation to other logics
Intuitionistic logic is related by duality to a paraconsistent logic known as Brazilian logic or dual-intuitionistic logic.
References
• Van Dalen, Dirk, 2001, "Intuitionistic Logic," in Goble, Lou, ed., The Blackwell Guide to Philosophical Logic. Blackwell.
• Morten H. Sørensen, Paweł Urzyczyn, 2006, Lectures on the Curry-Howard Isomorphism (chapter 2: "Intuitionistic Logic"). Studies in Logic and the Foundations of Mathematics vol. 149, Elsevier. | {"url":"http://www.reference.com/browse/Constructivist+logic","timestamp":"2014-04-20T15:42:27Z","content_type":null,"content_length":"99711","record_id":"<urn:uuid:6b94f10b-eefd-429b-bc7f-80ff363ca6c8>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
Computing Homology via Sheaf Cohomology
Let $X$ be a finite CW complex. We can compute the cohomology groups of $X$ via sheaf cohomology of the constant sheaf. Are the homology groups of $X$ with $\mathbb{Z}$ coefficients the cohomology
groups of some sheaf of abelian groups $S$ on $X$?
$$H_i(X, \mathbb{Z}) \cong H^i(X, S)$$
(If necessary, feel free to impose additional conditions on $X$, eg that $X$ is a topological manifold, ... )
More specifically, let $\mathbb{Z}_X$ denote the usual constant sheaf on $X$, and $\mathbb{Z}_X^{\vee} = Hom ( \mathbb{Z}_X, \mathbb{Z}_X)$ the sheaf-theoretic Hom. Is there an isomorphism $H_i(X, \
mathbb{Z}) \cong H^i(X, \mathbb{Z}_X^{\vee})$?
3 Your $\mathbb Z_X^\vee$ is often just $\mathbb Z_X$. – Mariano Suárez-Alvarez♦ Mar 13 '13 at 1:06
1 Answer
There is such a sheaf $S$ (or at least a complex of sheaves), but your first guess as for what it might be is not correct (see Mariano's comment).
In the case when $X$ is a compact topological $n$-manifold, then $S$ is given by the orientation sheaf (shifted into cohomological degree $-n$). Then the isomorphism $H_n(X, \mathbb
Z) \cong H^n(X, S)$ is Poincare duality.
For a finite CW complex (or maybe some more general compact topological space), $S$ is something called the dualizing complex $\omega_X$. This is part of the more general story of Verdier duality. | {"url":"http://mathoverflow.net/questions/124364/computing-homology-via-sheaf-cohomology/124378","timestamp":"2014-04-18T14:09:28Z","content_type":null,"content_length":"50882","record_id":"<urn:uuid:8bf9105a-08da-491b-a6fa-b48fcc26dc60>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00131-ip-10-147-4-33.ec2.internal.warc.gz"}
1. A point is an ordered pair of real numbers.
2. The plane is the set of all ordered pairs of real numbers.
3. The midpoint between (x[1], y[1]) and (x[2], y[2]) is ((x[1] + x[2])/2, (y[1] + y[2])/2).
4. The distance between (x[1], y[1]) and (x[2], y[2]) is the square root of (x[2] - x[1])^2 + (y[2] - y[1])^2.
5. The slope between (x[1], y[1]) and (x[2], y[2]) is (y[2] - y[1])/(x[2] - x[1]), provided x[1] ≠ x[2]. (These three formulas are sketched in code after this list.)
6. A line is the set of all points whose coordinates are solutions to a linear equation in two unknowns.
7. Two lines are parallel if their slopes are the same.
8. Two lines are perpendicular if their slopes are negative reciprocals of each other.
9. Given a line and a point, the point where the line through the given point perpendicular to the given line intersects the given line is called the foot of the point in the line.
10. A circle has a center and all of the points on the circle are the same distance from the center. This distance is called the radius of the circle.
11. A chord of a circle is a line segment whose endpoints are on the circle.
12. A line that meets a circle in exactly one point is called a tangent to the circle.
13. If A, B, and C are points on a circle, then angle BAC is called an inscribed angle.
14. If O is the center of a circle and B and C are points on the circle, then angle BOC is called a central angle.
15. A parallelogram is a four sided figure where the opposite sides are parallel.
16. A rectangle is a four sided figure where all four angles are right angles.
17. A rhombus is a four sided figure where all four sides have the same length.
18. A triangle is an isosceles triangle if two of its sides have the same length.
19. A triangle is an equilateral triangle if all three sides have the same length.
20. A median of a triangle is a line segment from one vertex to the midpoint of the opposite side.
21. An altitude of a triangle is a line segment from one vertex perpendicular to the opposite side.
22. The point where the perpendicular bisectors of the three sides of a triangle meet is called the circumcenter of the triangle.
23. The point where the three angle bisectors of a triangle meet is called the incenter of the triangle.
24. The point where the three medians of a triangle meet is called its centroid or center of gravity.
25. The point where the three altitudes of a triangle meet is called the orthocenter of the triangle.
26. A regular polygon is one where all the sides have the same length and all of the angles are the same size.
27. The point where all of the angle bisectors meet in a regular polygon is called the center of the polygon.
28. A line segment from the center of a regular polygon perpendicular to a side is called an apothem.
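The formulas in definitions 3-5 translate directly into code; a quick sanity-check sketch (the function names are illustrative, not part of the course page):

```python
import math

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def distance(p, q):
    return math.hypot(q[0] - p[0], q[1] - p[1])

def slope(p, q):
    return (q[1] - p[1]) / (q[0] - p[0])   # undefined for a vertical line

print(midpoint((0, 0), (4, 2)))   # (2.0, 1.0)
print(distance((0, 0), (3, 4)))   # 5.0
print(slope((0, 0), (2, 1)))      # 0.5
```
| {"url":"http://www.sonoma.edu/users/w/wilsonst/Courses/Math_100/Definitions.html","timestamp":"2014-04-18T11:20:21Z","content_type":null,"content_length":"5096","record_id":"<urn:uuid:3b6c8cf6-9a40-4217-8c96-3754dace1410>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00342-ip-10-147-4-33.ec2.internal.warc.gz"}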
Pacoima Prealgebra Tutor
...I have often been told that I'm good at thinking on my feet, and I use that skill to find a different approach to a certain topic, if the way that I am explaining something does not work for
you. One thing I always practice in my classroom is starting from a level where the students can understa...
11 Subjects: including prealgebra, physics, geometry, algebra 1
...I have many years of teaching experience and I understand that students are unique individuals who learn in different ways. I will work with each student to achieve the best possible results. I
know the material extremely well, and I have many years of experience teaching Algebra I.
16 Subjects: including prealgebra, French, calculus, geometry
...I also show them how to work with the synonym sections, by use of the process of elimination and working on enhancing their vocabularies. For the younger students, I also help with the
quantitative reasoning and mathematics portions, and show them how to approach each problem with strategies tha...
33 Subjects: including prealgebra, English, reading, Spanish
...Though I have not continued with my own education, I have remained firmly planted in academics by assisting both my wife (completing her bachelor's degree in Business Administration) and my
stepson (recently completed the 9th grade). My stepson is diagnosed with Asperger's Syndrome (a form of hig...
14 Subjects: including prealgebra, reading, public speaking, SAT math
...I have taught in the Los Angeles Unified School District (LAUSD) for the last 15 years at one of the top rated middle schools in the district. Our students consistently score well in the state
tests in math, as well as in the other academic subjects. There are two reasons why I tutor--1) I woul...
3 Subjects: including prealgebra, algebra 1, elementary math | {"url":"http://www.purplemath.com/pacoima_prealgebra_tutors.php","timestamp":"2014-04-21T10:48:48Z","content_type":null,"content_length":"23890","record_id":"<urn:uuid:20755735-b1e3-4bb6-88de-205fc325ae49>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00010-ip-10-147-4-33.ec2.internal.warc.gz"} |
Swampscott Statistics Tutor
Find a Swampscott Statistics Tutor
...I am thankful for your time and consideration. I look forward to hearing from you soon. Sincerely, Jerry. I have an educator license from Massachusetts for Mathematics, and have worked in a variety of classroom settings, including special education (study skills) and remedial mathematics.
6 Subjects: including statistics, geometry, prealgebra, algebra 1
...I take the topics in Algebra, Exponentials, Logarithms, Trigonometry, and make them easier to understand. Once my students begin to understand the material, positive results usually follow. I
have taught a course involving statistics and concentrated in several stats courses at the PhD level.
24 Subjects: including statistics, chemistry, calculus, physics
...Finally you learn about the wide variety of real world situations that can be modeled to predict future outcomes from current data. Calculus is the study of rates of change, and has numerous
and varied applications from business, to physics, to medicine. The complexity of the topics involved, however, requires that your grasp of mathematical concepts and function properties is strong.
23 Subjects: including statistics, physics, calculus, geometry
...I have several years part-time experience holding office hours and working in a tutorial office. I have worked with students who are taking the GED specifically. As an undergraduate I read
extensively in philosophy, literature, and sociology.
29 Subjects: including statistics, reading, English, writing
My name is Derek H. and I recently graduated from Cornell University's College of Engineering with a degree in Information Science, Systems, and Technology. I have a strong background in Math,
Science, and Computer Science. I currently work as software developer at IBM.
17 Subjects: including statistics, geometry, algebra 1, economics | {"url":"http://www.purplemath.com/swampscott_ma_statistics_tutors.php","timestamp":"2014-04-21T15:22:20Z","content_type":null,"content_length":"24069","record_id":"<urn:uuid:5a9d3032-3eda-40c6-8018-6fb3fad2e5b1>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00384-ip-10-147-4-33.ec2.internal.warc.gz"} |
90 Degree Clockwise Rotation
Learn about the rules for 90 degree clockwise rotation about the origin.
How do you rotate a figure 90 degrees in clockwise direction on a graph?
When a point M (h, k) is rotated about the origin O through 90° in the clockwise direction, the new position of the point is M' (k, -h).
Worked-out examples on 90 degree clockwise rotation about the origin:
1. Plot the point M (-2, 3) on the graph paper and rotate it through 90° in clockwise direction, about the origin. Find the new position of M.
When the point is rotated through 90° clockwise about the origin, the point M (h, k) takes the image M' (k, -h).
Therefore, the new position of point M (-2, 3) will become M' (3, 2).
2. Find the co-ordinates of the points obtained on rotating the point given below through 90° about the origin in clockwise direction.
(i) P (5, 7)
(ii) Q (-4, -7)
(iii) R (-7, 5)
(iv) S (2, -5)
When rotated through 90° about the origin in clockwise direction, the new position of the above points are;
(i) The new position of point P (5, 7) will become P' (7, -5)
(ii) The new position of point Q (-4, -7) will become Q' (-7, 4)
(iii) The new position of point R (-7, 5) will become R' (5, 7)
(iv) The new position of point S (2, -5) will become S' (-5, -2)
3. Construct the image of the given figure under the rotation of 90° clockwise about the origin O.
We get rectangle PQRS by plotting the points P (-3, 1), Q (3, 1), R (3, -1), S (-3, -1). When rotated through 90° clockwise, the vertices become P' (1, 3), Q' (1, -3), R' (-1, -3) and S' (-1, 3).
Now join P'Q'R'S'.
Therefore, P'Q'R'S' is the new position of PQRS when it is rotated through 90°.
4. Draw a quadrilateral PQRS joining the points P (0, 2), Q (2, -1), R (-1, -2) and S (-2, 1) on the graph paper. Find the new position when the quadrilateral is rotated through 90° clockwise about
the origin.
Plot the point P (0, 2), Q (2, -1), R (-1, -2) and S (-2, 1) on the graph paper. Now join PQ, QR, RS and SP to get a quadrilateral. On rotating it through 90° about the origin in clockwise direction,
the new positions of the points are
The new position of point P (0, 2) will become P' (2, 0)
The new position of point Q (2, -1) will become Q' (-1, -2)
The new position of point R (-1, -2) will become R' (-2, 1)
The new position of point S (-2, 1) will become S' (1, 2)
Thus, the new position of quadrilateral PQRS is P'Q'R'S'.
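The rule (h, k) → (k, -h) is easy to check in code; this quick sketch (not part of the original lesson) reproduces the worked examples:

```python
def rotate90_cw(points):
    """90 degree clockwise rotation about the origin: (h, k) -> (k, -h)."""
    return [(k, -h) for h, k in points]

print(rotate90_cw([(-2, 3)]))                       # [(3, 2)]  (example 1)
print(rotate90_cw([(0, 2), (2, -1), (-1, -2), (-2, 1)]))
# [(2, 0), (-1, -2), (-2, 1), (1, 2)]  (quadrilateral PQRS in example 4)
```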
| {"url":"http://www.math-only-math.com/90-degree-clockwise-rotation.html","timestamp":"2014-04-20T18:27:18Z","content_type":null,"content_length":"39323","record_id":"<urn:uuid:ae67e403-b647-463e-ab1f-650093593272>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
Converting Polar to Rectangular
Hi Guys,
I'm trying to figure out how to convert a polar number to a rectangular number and vice-versa using the Texas Instruments TI-83 Plus calculator.
Can anyone help me here?
under the angle menu, look at #7 and #8 items ...
7. syntax is ... P "arrow" Rx(r, theta)
where r is the ray length and theta is the ray angle relative to the x-axis. output is the x-coordinate.
8. syntax is ... P "arrow" Ry(r, theta)
output is the y-coordinate.
items #5 and #6 under the same menu convert rectangular to polar ... input is R "arrow" Pr(x,y) and R "arrow" P theta (x,y) respectively; output is r for #5 and theta for #6.
don't forget to check the mode switch for the correct form for angles you're using, degrees or radians.
Ok, I found the menu but I'm not sure how to input the numbers.
For example, I have the rectangular number 1 + j1
How would I input that into the caluculator to get me the polar number?
Figured it out - it's not the Angle menu, it's the math menu and you have to use 'i' after the imaginary term.
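For anyone without the calculator handy, the same conversions are a few lines of Python (a sketch of the math, not TI syntax):

```python
import cmath, math

z = 1 + 1j                        # the example above, 1 + j1
r, theta = cmath.polar(z)         # rectangular -> polar
print(r, math.degrees(theta))     # 1.4142135..., 45.0

x, y = r * math.cos(theta), r * math.sin(theta)   # polar -> rectangular
print(x, y)                       # back to (1.0, 1.0) up to rounding
```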
| {"url":"http://mathhelpforum.com/math-topics/49802-converting-polar-rectangular.html","timestamp":"2014-04-18T09:31:38Z","content_type":null,"content_length":"37412","record_id":"<urn:uuid:bbaa50d5-75c5-48f8-8aa8-c3d54438174f>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
DEVELOPMENT AND REALIZATION OF GEOINFORMATIONAL SMALL SCALE MAPPING
A.G. Ivanov, S.A. Krylov
The Moscow State University of the Geodesy and Cartography, Faculty of Cartography, Moscow, Russia
The solution of the small-scale mapping automation problem, based on the creation, transformation and usage of a unified multifunctional cartographical database system, is offered.
At the phase of a cartographical database creation: the initial cartographical and reference-statistical materials are analyzed and determined; the initial contents of the cartographical database in
a volume of the cartographical basis of a geographical map in the scale of 1:2 500 000 is proved; the structure of the digital cartographical information and its creation by a territorial principle
is determined; analytical-synthetic processing of the cartographical information, including classification and coding is realized; presentation formats and processing of the digital cartographical
information are developed; the selection of the cartographical database program-technical implementation is proved; the procedure of the cartographical database creation with simultaneous building
the digital cartographical bases in the scale of 1:2 500 000 for administrative areas of the Russian Federation is developed.
Already at the phase of its creation, the cartographical database is considered not as a mere "storehouse" of digital cartographical information files, but as a complex program-technical system realizing automation of the technological and informational processes.
At the phase of transforming the database contents by means of automated cartographic generalization, an empirical-mathematical approach to the cartographic problems is substantiated. It is based on analyzing traditional maps to detect regularities and content characteristics, on developing an approximating mathematical transformation method, and on developing algorithms and programs for solving the problem. On this basis, a procedure for the automated selection of cartographic objects using data on their density and/or graphic load is developed, and the factors influencing the selection of objects are identified and implemented. As a result, a mathematical method and a program implementing the quantitative aspect of generalization are developed; the qualitative part of the selection, which resists purely mathematical treatment, is carried out through a "cartographer-computer" dialogue. Besides solving the database-transformation problem, this development ensures the coordination and consistency of the contents in the transition from scale to scale through standardization and unification of the initial and derived digital cartographic bases.
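As a hedged illustration of what such a quantitative selection rule can look like, here is a small Python sketch using the classical Töpfer "radical law", a standard cartographic selection formula, and not necessarily the exact method developed in this work:

import math

def topfer_count(n_source, m_source, m_target):
    # Radical law: n_target = n_source * sqrt(M_source / M_target),
    # where M is the scale denominator (e.g., 1:2,500,000 -> M = 2.5e6).
    return round(n_source * math.sqrt(m_source / m_target))

def select_objects(objects, importance, m_source, m_target):
    # Keep the most important objects, up to the radical-law count.
    k = topfer_count(len(objects), m_source, m_target)
    return sorted(objects, key=importance, reverse=True)[:k]

# e.g., generalizing settlements from 1:1,000,000 to the 1:2,500,000 base map
towns = [("A", 120000), ("B", 45000), ("C", 8000), ("D", 300000), ("E", 1500)]
print(select_objects(towns, importance=lambda t: t[1],
                     m_source=1e6, m_target=2.5e6))   # keeps D, A, B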
At the operational phase, base and derived cartographic bases are produced from the cartographic database to automate the technological processes of thematic mapping and the development of GIS projects. In addition, algorithms are worked out for other problems essential to mapping thematic information:
· Determining the mapping scale depending on the density and/or graphic load of the contents;
· Selecting a cartographic projection by means of an interactive "hint" that realizes the logical links between the object of mapping, the projection, and the distortions;
· Constructing the layout mockup on the basis of the cartographic database and the mathematical method for selecting cartographic objects;
· Selecting the graphic method for mapping the thematic information by means of an interactive "hint" that realizes the links between the state and distribution of the phenomenon and the graphic means. Implementing a method includes calculating its form parameters and designing and placing it on the map.
The development of the automated cartographic system from its technological and informational components, and their joint integration into a bank of cartographic data on the basis of a unified multifunctional
cartographical database are considered stage-by-stage. | {"url":"http://icaci.org/files/documents/ICC_proceedings/ICC2007/abstracts/html/25_Oral2_4.htm","timestamp":"2014-04-18T13:15:04Z","content_type":null,"content_length":"17673","record_id":"<urn:uuid:34b97c5d-2bd9-4347-8474-62ed64c35cbd>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00483-ip-10-147-4-33.ec2.internal.warc.gz"} |
Basic Geometry: Third Edition
AMS Chelsea Publishing
1959; 294 pp; hardcover
Volume: 120
ISBN-10: 0-8218-2101-6
ISBN-13: 978-0-8218-2101-5
List Price: US$43
Member Price: US$38.70
Order Code: CHEL/120.H
A highly recommended high-school text by two eminent scholars.
"Offers a sound mathematical development ... and at the same time enables the student to move rapidly into the heart of geometry."
-- The Mathematics Teacher
"Should be required reading for every teacher of geometry."
-- The Mathematical Gazette
• Reasoning. The nature of proof
• The five fundamental principles
• The seven basic theorems
• Parallel lines and networks
• The circle and regular polygons
• Constructions with straightedge and compasses
• Area and length
• Continuous variation
• Loci
• Reasoning. Abstract logical systems
• Laws of number
• Index | {"url":"http://cust-serv@ams.org/bookstore?fn=20&arg1=chelsealist&ikey=CHEL-120-H","timestamp":"2014-04-20T20:02:56Z","content_type":null,"content_length":"14962","record_id":"<urn:uuid:820836be-09d9-4493-be23-36fdd830032e>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00406-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 1 - 10 of 164
, 2002
"... We prove optimal, up to an arbitrary ε > 0, inapproximability results for Max-Ek-Sat for k ≥ 3, maximizing the number of satisfied linear equations in an over-determined system of linear equations modulo a prime p and Set Splitting. As a consequence of these results we get improved lower bounds for ..."
Cited by 648 (8 self)
We prove optimal, up to an arbitrary ε > 0, inapproximability results for Max-Ek-Sat for k ≥ 3, maximizing the number of satisfied linear equations in an over-determined system of linear equations modulo a prime p and Set Splitting. As a consequence of these results we get improved lower bounds for the efficient approximability of many optimization problems studied previously. In particular, for Max-E2-Sat, Max-Cut, Max-di-Cut, and Vertex cover. Warning: Essentially this paper has been published in JACM and is subject to copyright restrictions. In particular it is for personal use only.
- in Proc. 25th Annual ACM Symposium on Theory of Computing, ACM , 1993
"... Abstract. In this paper we study quantum computation from a complexity theoretic viewpoint. Our first result is the existence of an efficient universal quantum Turing machine in Deutsch’s model
of a quantum Turing machine (QTM) [Proc. Roy. Soc. London Ser. A, 400 (1985), pp. 97–117]. This constructi ..."
Cited by 482 (5 self)
Abstract. In this paper we study quantum computation from a complexity theoretic viewpoint. Our first result is the existence of an efficient universal quantum Turing machine in Deutsch’s model of a
quantum Turing machine (QTM) [Proc. Roy. Soc. London Ser. A, 400 (1985), pp. 97–117]. This construction is substantially more complicated than the corresponding construction for classical Turing
machines (TMs); in fact, even simple primitives such as looping, branching, and composition are not straightforward in the context of quantum Turing machines. We establish how these familiar
primitives can be implemented and introduce some new, purely quantum mechanical primitives, such as changing the computational basis and carrying out an arbitrary unitary transformation of
polynomially bounded dimension. We also consider the precision to which the transition amplitudes of a quantum Turing machine need to be specified. We prove that O(log T) bits of precision suffice to
support a T step computation. This justifies the claim that the quantum Turing machine model should be regarded as a discrete model of computation and not an analog one. We give the first formal
evidence that quantum Turing machines violate the modern (complexity theoretic) formulation of the Church–Turing thesis. We show the existence of a problem, relative to an oracle, that can be solved
in polynomial time on a quantum Turing machine, but requires superpolynomial time on a bounded-error probabilistic Turing machine, and thus not in the class BPP. The class BQP of languages that are
efficiently decidable (with small error-probability) on a quantum Turing machine satisfies BPP ⊆ BQP ⊆ P^#P. Therefore, there is no possibility of giving a mathematical proof that quantum Turing
machines are more powerful than classical probabilistic Turing machines (in the unrelativized setting) unless there is a major breakthrough in complexity theory.
"... We study the question of determining whether an unknown function has a particular property or is ε-far from any function with that property. A property testing algorithm is given a sample of
the value of the function on instances drawn according to some distribution, and possibly may query the fun ..."
Cited by 421 (57 self)
We study the question of determining whether an unknown function has a particular property or is ε-far from any function with that property. A property testing algorithm is given a sample of the value of the function on instances drawn according to some distribution, and possibly may query the function on instances of its choice. First, we establish some connections between property testing and problems in learning theory. Next, we focus on testing graph properties, and devise algorithms to test whether a graph has properties such as being k-colorable or having a ρ-clique (clique of density ρ w.r.t. the vertex set). Our graph property testing algorithms are probabilistic and make assertions which are correct with high probability, utilizing only poly(1/ε) edge-queries into the graph, where ε is the distance parameter. Moreover, the property testing algorithms can be used to efficiently (i.e., in time linear in the number of vertices) construct partitions of the graph
which corre...
- SIAM Journal on Computing , 1998
"... We show that a parallel repetition of any two-prover one-round proof system (MIP(2, 1)) decreases the probability of error at an exponential rate. No constructive bound was previously known. The
constant in the exponent (in our analysis) depends only on the original probability of error and on the t ..."
Cited by 324 (11 self)
We show that a parallel repetition of any two-prover one-round proof system (MIP(2, 1)) decreases the probability of error at an exponential rate. No constructive bound was previously known. The
constant in the exponent (in our analysis) depends only on the original probability of error and on the total number of possible answers of the two provers. The dependency on the total number of
possible answers is logarithmic, which was recently proved to be almost the best possible [U. Feige and O. Verbitsky, Proc. 11th Annual IEEE Conference on Computational Complexity, IEEE Computer
Society Press, Los Alamitos, CA, 1996, pp. 70--76].
, 1996
"... The study of self-testing and self-correcting programs leads to the search for robust characterizations of functions. Here we make this notion precise and show such a characterization for
polynomials. From this characterization, we get the following applications. ..."
Cited by 323 (37 self)
The study of self-testing and self-correcting programs leads to the search for robust characterizations of functions. Here we make this notion precise and show such a characterization for
polynomials. From this characterization, we get the following applications.
- In Proceedings of the 37th IEEE Symposium on Foundations of Computer Science (FOCS’96 , 1996
"... Abstract. We present a polynomial time approximation scheme for Euclidean TSP in fixed dimensions. For every fixed c > 1 and given any n nodes in R^2, a randomized version of the scheme finds a (1 + 1/c)-approximation to the optimum traveling salesman tour in O(n(log n)^O(c)) time. When the nodes a ..."
Cited by 320 (3 self)
Abstract. We present a polynomial time approximation scheme for Euclidean TSP in fixed dimensions. For every fixed c > 1 and given any n nodes in R^2, a randomized version of the scheme finds a (1 + 1/c)-approximation to the optimum traveling salesman tour in O(n(log n)^O(c)) time. When the nodes are in R^d, the running time increases to O(n(log n)^((O(sqrt(d)c))^(d-1))). For every fixed c, d the running time is n · poly(log n), that is nearly linear in n. The algorithm can be derandomized, but this increases the running time by a factor O(n^d). The previous best approximation algorithm for the problem (due to Christofides) achieves a 3/2-approximation in polynomial time. We also give similar approximation schemes for some other NP-hard Euclidean problems: Minimum Steiner Tree, k-TSP, and k-MST. (The running times of the algorithm for k-TSP and k-MST involve an additional multiplicative factor k.) The previous best approximation algorithms for all these problems achieved a constant-factor approximation. We also give efficient approximation schemes for Euclidean Min-Cost Matching, a problem that can be solved exactly in polynomial time. All our algorithms also work, with almost no modification, when distance is measured using any geometric norm (such as l_p for p ≥ 1 or other Minkowski norms). They also have simple parallel (i.e., NC) implementations.
- IN PROC. 29TH ACM SYMP. ON THEORY OF COMPUTING, 475-484. EL PASO , 1997
"... We introduce a new low-degree--test, one that uses the restriction of low-degree polynomials to planes (i.e., affine sub-spaces of dimension 2), rather than the restriction to lines (i.e.,
affine sub-spaces of dimension 1). We prove the new test to be of a very small error probability (in particular, ..."
Cited by 281 (22 self)
We introduce a new low-degree test, one that uses the restriction of low-degree polynomials to planes (i.e., affine sub-spaces of dimension 2), rather than the restriction to lines (i.e., affine sub-spaces of dimension 1). We prove the new test to be of a very small error probability (in particular, much smaller than constant). The new test enables us to prove a low-error characterization of NP in terms of PCP. Specifically, our theorem states that, for any given ε > 0, membership in any NP language can be verified with O(1) accesses, each reading a logarithmic number of bits, and such that the error probability is 2^(-log^(1-ε) n). Our results are in fact stronger, as stated below. One application of the new characterization of NP is that approximating SET-COVER to within a logarithmic factor is NP-hard. Previous analyses for low-degree tests, as well as previous characterizations of NP in terms of PCP, have managed to achieve, with constant number of accesses, error...
, 1996
"... This paper continues the investigation of the connection between proof systems and approximation. The emphasis is on proving tight non-approximability results via consideration of measures like
the "free bit complexity" and the "amortized free bit complexity" of proof systems. ..."
Cited by 208 (40 self)
This paper continues the investigation of the connection between proof systems and approximation. The emphasis is on proving tight non-approximability results via consideration of measures like the
"free bit complexity" and the "amortized free bit complexity" of proof systems.
- Journal of Algorithms , 1985
"... This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself
in our book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’ ’ W. H. Freeman & Co ..."
Cited by 188 (0 self)
This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself in our
book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’ ’ W. H. Freeman & Co., New York, 1979 (hereinafter referred to as ‘‘[G&J]’’; previous columns will be referred to by
their dates). A background equivalent to that provided by [G&J] is assumed, and, when appropriate, cross-references will be given to that book and the list of problems (NP-complete and harder)
presented there. Readers who have results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time-solvability, etc.) or open problems they would like publicized, should
- Journal of Computer and System Sciences , 1996
"... We present a new technique, inspired by zero-knowledge proof systems, for proving lower bounds on approximating the chromatic number of a graph. To illustrate this technique we present simple
reductions from max-3-coloring and max-3-sat, showing that it is hard to approximate the chromatic number wi ..."
Cited by 178 (8 self)
We present a new technique, inspired by zero-knowledge proof systems, for proving lower bounds on approximating the chromatic number of a graph. To illustrate this technique we present simple
reductions from max-3-coloring and max-3-sat, showing that it is hard to approximate the chromatic number within Ω(N^δ), for some δ > 0. We then apply our technique in conjunction with the probabilistically checkable proofs of Hastad, and show that it is hard to approximate the chromatic number to within Ω(N^(1-ε)) for any ε > 0, assuming NP ⊈ ZPP. Here, ZPP
denotes the class of languages decidable by a random expected polynomial-time algorithm that makes no errors. Our result matches (up to low order terms) the known gap for approximating the size of
the largest independent set. Previous O(N^δ) gaps for approximating the chromatic number (such as those by Lund and Yannakakis, and by Furer) did not match the gap for independent set, and do not | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=169338","timestamp":"2014-04-21T06:42:58Z","content_type":null,"content_length":"39098","record_id":"<urn:uuid:1767c66e-a7af-4c86-8f18-e62daf8b8179>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
Evesham Twp, NJ Algebra 2 Tutor
Find an Evesham Twp, NJ Algebra 2 Tutor
...One of the things that I make sure whenever I tutor someone is that they have a strong foundation in the content area. I feel that as soon as the foundation is built, I can easily assist the
student in what he/she is having trouble with. Each lesson would probably have a little bit of exercise to help build the foundation, then focus on the troublesome topic.
7 Subjects: including algebra 2, chemistry, physics, calculus
...I graduated from the University of Maryland in 2007 with a degree in physics and I have been teaching ever since. I am very passionate about my profession and about physics in particular. My
tutoring style is centered around getting students to understand concepts instead of blindly plugging and chugging into random equations.
4 Subjects: including algebra 2, physics, geometry, algebra 1
...It sometimes made me feel "dumb" and I did not enjoy it at all. I use this to connect with my students and let them know I understand how they may sometimes feel. I make learning fun by using
iPad techniques, computer games, and enriching activities.
23 Subjects: including algebra 2, reading, English, biology
...Many students feel confident in some areas, while lacking experience in others. Spending a little extra time with an English tutor can help fill in the gaps that don't get addressed in every
classroom. Whether you're trying to go from 350 to 500 or 750 to 800, everyone can benefit from extra help on SAT Math.
36 Subjects: including algebra 2, reading, English, calculus
...I am convinced that my skills in those areas are above average. I have taught those subjects for the Rutgers High School Summer Program. If you have any questions please contact me as soon as
6 Subjects: including algebra 2, geometry, algebra 1, linear algebra
| {"url":"http://www.purplemath.com/evesham_twp_nj_algebra_2_tutors.php","timestamp":"2014-04-17T04:35:50Z","content_type":null,"content_length":"24414","record_id":"<urn:uuid:32cb1996-cd87-4939-bc5f-06d784aae053>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00458-ip-10-147-4-33.ec2.internal.warc.gz"}
Why do we perform the inverse z-transform? What is the use of it?
It allows you to go from the Z-plane complex frequency-domain representation to a time-series, e.g. deriving a time-series signal from a transfer function.
For designing an electronic system, it is important to check its behavior in both domains: time domain and frequency domain. For continuous-time signals there is the Laplace transform; similarly, for discrete signals there is the z-transform. Frequency-domain analysis is also important for checking whether a system is stable: if you are designing an amplifier, it may behave as an oscillator at some frequency if it is not designed correctly. These transforms are used for that.
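To make "frequency domain back to a time series" concrete, here is a small SymPy sketch (an added illustration, not part of the thread): for X(z) = z/(z - a) the inverse z-transform is x[n] = a^n, and expanding X in powers of z^-1 recovers exactly those samples:

import sympy as sp

z, a, w = sp.symbols('z a w', positive=True)

X = z / (z - a)                 # a transfer function with one pole at z = a
# Expand in powers of z**-1 (w stands in for z**-1); the coefficient of
# w**n is the time-domain sample x[n]. Here x[n] = a**n.
series = sp.series(X.subs(z, 1/w), w, 0, 5).removeO()
print(sp.expand(series))        # 1 + a*w + a**2*w**2 + a**3*w**3 + a**4*w**4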
| {"url":"http://openstudy.com/updates/50c9dc63e4b09c5571448552","timestamp":"2014-04-18T03:45:09Z","content_type":null,"content_length":"30513","record_id":"<urn:uuid:2f43abf1-b08d-49ab-bdac-3db5ec8e9b86>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
What do you think?
Real Member
Re: What do you think?
Ye, gAr. See ya.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: What do you think?
Hi gAr;
I completed the problem. I think you will like the solution.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: What do you think?
Hi bobbym,
Yes, I'd like to see the solution!
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
Re: What do you think?
Hi gAr;
Re: What do you think?
Hi bobbym,
Last edited by gAr (2012-02-09 17:36:16)
Re: What do you think?
Hi gAr;
Re: What do you think?
Hi bobbym,
Re: What do you think?
Hi gAr;
Thanks! What edition do you have? I got the matrix method to work!
Re: What do you think?
It's third edition.
That's nice! How's the matrix form, can we get asymptotic form directly?
Re: What do you think?
I do not think so; it just gets a formula for x(n) in a slightly simpler form.
Re: What do you think?
Maybe complex analysis is the only way to deal with asymptotics.
Re: What do you think?
There are methods like Darboux but I never could understand it. I use complex analysis.
Re: What do you think?
Okay, I think g.fs are good enough to deal with it.
Re: What do you think?
Hi gAr;
I forgot to mention. Very good work! You solved the problem.
Re: What do you think?
Hi bobbym,
I found a thesis which deals with asymptotics, looks interesting: http://citeseerx.ist.psu.edu/viewdoc/su … 1.131.2041
Re: What do you think?
Hi gAr;
Thanks. What do you think of Feller's books?
Re: What do you think?
Feller's books are good.
Re: What do you think?
I like them too. There is information in them that you can not find anywhere else.
Re: What do you think?
Hi bobbym,
Yes, there's a lot of information in it.
What you suggest is right, thanks. But the recurrence in that post is also correct, I think. See you later.
Last edited by gAr (2012-02-09 23:28:32)
Re: What do you think?
Hi gAr;
The recurrence in post #1543?
Re: What do you think?
Hi bobbym,
Now I understand, you were writing the recurrence from the g.f. I thought that was a correction!
See you a bit later.
Re: What do you think?
Hi gAr;
Okay, see you later and thanks for working on the problem.
Re: What do you think?
New problem:
A biased coin is to be flipped a certain number of times. The probability of heads is .4. If the coin is flipped repeatedly, what is the probability that 15 heads in a row occur before 15 tails in a row?
A says) (.4)^15 * (.6)^15
B says) That is not right I am getting .003412
C says) All the way with A, baby!
D says) A is the man!
E says) A is incorrect.
What is the correct answer?
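A quick numeric check of the candidate answers via first-step analysis (a sketch added for illustration, not a post from the thread). Writing h1 (resp. t1) for the probability that the heads run wins given the current run is one head (resp. one tail), the recurrences h_k = p*h_(k+1) + q*t1 and t_k = q*t_(k+1) + p*h1 unroll to h1 = a + (1 - a)*t1 and t1 = (1 - b)*h1, with a = p^14 and b = q^14:

from fractions import Fraction

p, q, r = Fraction(2, 5), Fraction(3, 5), 15   # P(H) = .4, target run length 15

a = p ** (r - 1)   # complete a heads run from one head, uninterrupted
b = q ** (r - 1)   # complete a tails run from one tail, uninterrupted
h1 = a / (1 - (1 - a) * (1 - b))
t1 = (1 - b) * h1
print(float(p * h1 + q * t1))   # ~0.0034122

The exact value is about .0034122, which supports B (and E): A's product (.4)^15 * (.6)^15 is many orders of magnitude too small.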
Real Member
Re: What do you think?
Hi bobbym
I think that this isn't correct, but I just thought that I should share what I got just so I could find out what I did wrong.
Re: What do you think?
Hi anonimnystefy;
| {"url":"http://www.mathisfunforum.com/viewtopic.php?id=12968&p=63","timestamp":"2014-04-20T16:52:30Z","content_type":null,"content_length":"44408","record_id":"<urn:uuid:1586601a-3d9e-4791-887c-e4a42e247067>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
Cleanest way to check if a number is an integer.
December 6th, 2011, 03:42 AM #1
Junior Member
Join Date
Dec 2011
Cleanest way to check if a number is an integer.
So I need to find factors of my numbers to speed up my algorithm. I'm dividing by prime factors, and to find them I obviously need to know whether the results are still integers after I divide by each prime. For 2 and 3 this is easy, since I can do simple bit operations before I divide. For any prime p > 3, I want to do a brute-force approach and check if my result is an integer or a fraction.
I thought about this and I could use %, if % > 0, I have a fraction.
I could put the result in an integer and in a float, if float and integer differ, I have a fraction.
I could simply use a float, do a bit operation on the exponent part (bits 2-9 iirc) to see if the float has a decimal.
I could use Math.Truncate(double) from the .NET library, which extracts the integer part for me; if I check this against the double and the difference is not 0, I have a fraction.
Which method would you use?
Last edited by Candystore; December 6th, 2011 at 03:50 AM.
Re: Cleanest way to check if a number is an integer.
What about
bool isNotInteger = (number > (float)((int)number));
You'd be casting the number to integer (removing fractions), then back to float to have the same data type for comparing. If your original number is greater than your truncated one, you have a
Would that work?
Re: Cleanest way to check if a number is an integer.
What about
bool isNotInteger = (number > (float)((int)number));
You'd be casting the number to integer (removing fractions), then back to float to have the same data type for comparing. If your original number is greater than your truncated one, you have a
Would that work?
Converting a float to integer and then back to float is going to be very detrimental to performance. You are going to encounter a significant load-hit-store penalty. You should generally avoid
casting floats <--> ints whenever possible.
You could try one of these two solutions:
bool isInteger(float num)
{
    // No remainder when dividing by 1 means there is no fractional part.
    return num % 1 == 0;
}

bool isInteger(float num)
{
    // A value equal to its own floor is already whole.
    return (float)Math.Floor(num) == num;
}
I suspect the Math.Floor version will perform better, but you should check yourself.
Re: Cleanest way to check if a number is an integer.
Thank you. I will try Nikel.
I used bit checking before you replied and it seems to be working, although someone said it doesn't for some negative number (I don't know why he said that at all), and he recommended %
this is my bit checker for factors of 2 (even)
while ((number & 1) == 0)
number = number / 2;
counterOf2 = counterOf2 + 1;
I'm using a class that reads the next prime number from a text file and does a "%" for all primes > 2 atm.
Seems to work ok in a small test, it's spitting out the right factors.
Re: Cleanest way to check if a number is an integer.
Thank you Chris also, I will check that.
Re: Cleanest way to check if a number is an integer.
Only use floating point numbers if you're ok with the inaccuracy they will give you. You will get false positives and negatives for certain number combinations. If you want accuracy you should
use the modulus operator to see if the number is exactly divisible.
www.monotorrent.com For all your .NET bittorrent needs
NOTE: My code snippets are just snippets. They demonstrate an idea which can be adapted by you to solve your problem. They are not 100% complete and fully functional solutions equipped with error
Re: Cleanest way to check if a number is an integer.
Sorry, but do I understand the problem correctly: you have an integer you wish to factorize?
The best way to avoid all of this (which I think MutantFruit is trying to point out) is to not use floats and just stick with integers:
//Naive method: requires knownPrimes to contain all primes up to sqrt(number) + 1
//For number=12 this method should return a list: { 2, 2, 3 }
List<int> factorize(int number, List<int> knownPrimes)
{
    //The maximum factor we need to trial-divide by is given by the
    //square root (+1 to avoid an off-by-one error)
    int maxFactor = (int)Math.Sqrt(number) + 1;
    List<int> factors = new List<int>();
    for (int i = 0; i < knownPrimes.Count && number != 1; i++)
    {
        if (knownPrimes[i] > maxFactor)
            break;
        //While the prime is a factor, divide it out and record it
        while (number % knownPrimes[i] == 0)
        {
            number /= knownPrimes[i];
            factors.Add(knownPrimes[i]);
        }
    }
    //Add any non-one value left in number (which must be prime)
    if (number != 1)
        factors.Add(number);
    return factors;
}
The only place I used a float (actually a double) was in the sqrt call, but it's only to immediately approximate an integer that slightly overestimates the true square, so no need for any sort of
Last edited by BioPhysEngr; December 6th, 2011 at 11:12 PM.
Best Regards,
All advice is offered in good faith only. You are ultimately responsible for effects of your programs and the integrity of the machines they run on.
| {"url":"http://forums.codeguru.com/showthread.php?518847-Cannot-implicitly-convert-type-quot-string-quot-to-quot-int-quot&goto=nextnewest","timestamp":"2014-04-19T21:41:26Z","content_type":null,"content_length":"103267","record_id":"<urn:uuid:f21aa14c-7eaf-4724-9850-b7149f3db0df>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00613-ip-10-147-4-33.ec2.internal.warc.gz"}
An Economist Who Is Studying The Relationship Between The Money Supply, Interest Rates And The Rate Of Inflation Is Engaged In
The question: an economist who is studying the relationship between the money supply, interest rates, and the rate of inflation is engaged in (a) microeconomic research or (b) macroeconomic research. The answer given (Weegy): macroeconomic research.
Sources excerpted on the page:
The Relationship Between Interest Rate and Inflation (eHow): there is a strong correlation between interest rates and inflation, and between money supply and ...
Can Anybody Help Me With Eco 365 Final Exam: question 1 poses the same choice, option (a) microeconomic research.
Eco 365 Final Exam Answers: the distinction between supply and the ...
Mr Goodhart And The EMU In Word (Barnard College): while, in monetarist fashion, the money supply determines price inflation; the credit relationship between a banker; the central bank can control interest rates.
Estimating Exchange Rate Pass-Through In The Republic Of ...: the relationship between exchange rates and price levels, money supply, and interest rates; literature on the exchange rate-inflation relationship.
| {"url":"http://www.cizwin.com/an-economist-who-is-studying-the-relationship-between-the-money-supply-interest-rates-and-the-rate-of-inflation-is-engaged-in.htm","timestamp":"2014-04-18T11:01:45Z","content_type":null,"content_length":"20690","record_id":"<urn:uuid:65c0269c-0403-4d9e-8380-deaff7019d2b>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00467-ip-10-147-4-33.ec2.internal.warc.gz"}
Book advice
Light [but deep and intuition-building] reading, without the sensationalized speculation often found in pop-sci books:
Feynman's QED, Geroch's General Relativity from A to B
In preparation for college physics, I agree with eep: spend some time learning basic mechanics (and electromagnetism)... and learn some basic mathematics (calculus, vector algebra, complex numbers). | {"url":"http://www.physicsforums.com/showthread.php?t=120262","timestamp":"2014-04-18T08:14:31Z","content_type":null,"content_length":"46048","record_id":"<urn:uuid:fe6add60-0cf6-4855-9d89-a4c85368dacb>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00051-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mixture models: Theory, Geometry and Applications
Results 1 - 10 of 101
- INSTITUTE OF INTERNATIONAL ECONOMICS PROJECT ON INTERNATIONAL COMPETITION POLICY," COM/DAFFE/CLP/TD(94)42 , 1997
"... ..."
- Psychological Methods , 1999
"... A developmental trajectory describes the course of a behavior over age or time. A group-based method for identifying distinctive groups of individual trajectories within the population and for
profiling the characteristics of group members is demonstrated. Such clusters might include groups of " ..."
Cited by 56 (1 self)
A developmental trajectory describes the course of a behavior over age or time. A group-based method for identifying distinctive groups of individual trajectories within the population and for
profiling the characteristics of group members is demonstrated. Such clusters might include groups of "increasers," "decreasers," and "no changers." Suitably defined
probability distributions are used to handle 3 data types—count, binary, and psychometric scale data. Four capabilities are demonstrated: (a) the capability to identify rather than assume distinctive
groups of trajectories, (b) the capability to estimate the proportion of the population following each such trajectory group, (c) the capability to relate group membership probability to individual
characteristics and circumstances, and (d) the capability to use the group membership probabilities for various other purposes such as creating profiles of group members. Over the past decade, major
advances have been made in methodology for analyzing individual-level developmental trajectories. The two main branches of methodology are hierarchical modeling (Bryk &
"... We consider the problem of learning mixtures of distributions via spectral methods and derive a tight characterization of when such methods are useful. Specifically, given a mixture-sample, let
μ_i, C_i, w_i denote the empirical mean, covariance matrix, and mixing weight of the i-th component. We ..."
Cited by 54 (0 self)
We consider the problem of learning mixtures of distributions via spectral methods and derive a tight characterization of when such methods are useful. Specifically, given a mixture-sample, let μ_i, C_i, w_i denote the empirical mean, covariance matrix, and mixing weight of the i-th component. We prove that a very simple algorithm, namely spectral projection followed by single-linkage clustering, properly classifies every point in the sample when each μ_i is separated from all μ_j by 2(1/w_i + 1/w_j) plus a term that depends on the concentration properties of the distributions in the mixture. This second term is very small for many distributions, including Gaussians, log-concave, and many others. As a result, we get the best known bounds for learning mixtures of arbitrary Gaussians in terms of the required mean separation. On the other hand, we prove that given any k means μ_i and mixing weights w_i, there are (many) sets of matrices C_i such that each μ_i is separated from all μ_j by 2(1/w_i + 1/w_j), but applying spectral projection to the corresponding Gaussian mixture causes it to collapse completely, i.e., all means and covariance matrices in the projected mixture are
- Journal of Computer and System Sciences , 2002
"... We show that a simple spectral algorithm for learning a mixture of k spherical Gaussians in R works remarkably well --- it succeeds in identifying the Gaussians assuming essentially the minimum
possible separation between their centers that keeps them unique (solving an open problem of [1]). The ..."
Cited by 43 (5 self)
We show that a simple spectral algorithm for learning a mixture of k spherical Gaussians in R works remarkably well --- it succeeds in identifying the Gaussians assuming essentially the minimum
possible separation between their centers that keeps them unique (solving an open problem of [1]). The sample complexity and running time are polynomial in both n and k. The algorithm also works for
the more general problem of learning a mixture of "weakly isotropic" distributions (e.g. a mixture of uniform distributions on cubes).
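A toy NumPy/SciPy sketch of the recipe these abstracts describe: project the sample onto its top-k singular directions, then single-linkage cluster the projections (purely illustrative; none of the separation conditions above are verified here):

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
k = 2
# A mixture sample: two spherical Gaussians in R^50 with well-separated means
X = np.vstack([rng.normal(0.0, 1.0, (200, 50)),
               rng.normal(6.0, 1.0, (200, 50))])

# Spectral projection: coordinates in the rank-k singular subspace
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Xc @ Vt[:k].T

# Single-linkage clustering of the projected points
labels = fcluster(linkage(P, method='single'), t=k, criterion='maxclust')
print(np.bincount(labels))   # cluster sizes (index 0 unused)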
- Statistics and Computing
"... Normal mixture models are being increasingly used to model the distributions of a wide variety of random phenomena and to cluster sets of continuous multivariate data. However, for a set of data
containing a group or groups of observations with longer than normal tails or atypical observations, the ..."
Cited by 42 (1 self)
Normal mixture models are being increasingly used to model the distributions of a wide variety of random phenomena and to cluster sets of continuous multivariate data. However, for a set of data
containing a group or groups of observations with longer than normal tails or atypical observations, the use of normal components may unduly affect the fit of the mixture model. In this paper, we
consider a more robust approach by modelling the data by a mixture of t distributions. The use of the ECM algorithm to fit this t mixture model is described and examples of its use are given in the
context of clustering multivariate data in the presence of atypical observations in the form of background noise.
- Statistica Sinica , 2002
"... Abstract: The use of a finite dimensional Dirichlet prior in the finite normal mixture model has the effect of acting like a Bayesian method of sieves. Posterior consistency is directly related
to the dimension of the sieve and the choice of the Dirichlet parameters in the prior. We find that naive ..."
Cited by 40 (1 self)
Abstract: The use of a finite dimensional Dirichlet prior in the finite normal mixture model has the effect of acting like a Bayesian method of sieves. Posterior consistency is directly related to
the dimension of the sieve and the choice of the Dirichlet parameters in the prior. We find that naive use of the popular uniform Dirichlet prior leads to an inconsistent posterior. However, a simple
adjustment to the parameters in the prior induces a random probability measure that approximates the Dirichlet process and yields a posterior that is strongly consistent for the density and weakly
consistent for the unknown mixing distribution. The dimension of the resulting sieve can be selected easily in practice and a simple and efficient Gibbs sampler can be used to sample the posterior of
the mixing distribution. Key words and phrases: Bose-Einstein distribution, Dirichlet process, identification, method of sieves, random probability measure, relative entropy, weak convergence.
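For readers new to these objects, a small background sketch of the stick-breaking (Sethuraman) construction of Dirichlet-process mixture weights, the infinite-dimensional prior that the finite Dirichlet sieve above approximates (illustration only, not the paper's construction):

import numpy as np

rng = np.random.default_rng(1)

def stick_breaking(alpha, n_atoms):
    # w_k = beta_k * prod_{j<k} (1 - beta_j), with beta_k ~ Beta(1, alpha)
    betas = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    return betas * remaining

w = stick_breaking(alpha=2.0, n_atoms=20)
print(w.sum())   # close to 1; the truncated tail carries the remainder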
, 2002
"... Clustering is a fundamental problem in unsupervised learning, and has been studied widely both as a problem of learning mixture models and as an optimization problem. In this paper, we study clustering with respect to the k-median objective function, a natural formulation of clustering in which we attempt to minimize the average distance ..."
Cited by 32 (2 self)
Clustering is a fundamental problem in unsupervised learning, and has been studied widely both as a problem of learning mixture models and as an optimization problem. In this paper, we study clustering with respect to the k-median objective function, a natural formulation of clustering in which we attempt to minimize the average distance to cluster centers. One of the main contributions of this paper is a simple but powerful sampling technique that we call successive sampling that could be of independent interest. We show that our sampling procedure can rapidly identify a small set of points (of size just O(k log n/k)) that summarize the input points for the purpose of clustering. Using successive sampling, we develop an algorithm for the k-median problem that runs in O(nk) time for a wide range of values of k and is guaranteed, with high probability, to return a solution with cost at most a constant factor times optimal. We also establish a lower bound of Ω(nk) on any randomized constant-factor approximation algorithm for the k-median problem that succeeds with even a negligible (say
k-median problem that succeeds with even a negligible (say
, 2005
"... This article develops, and describes how to use, results concerning disintegrations of Poisson random measures. These results are fashioned as simple tools that can be tailor-made to address
inferential questions arising in a wide range of Bayesian nonparametric and spatial statistical models. The P ..."
Cited by 32 (10 self)
This article develops, and describes how to use, results concerning disintegrations of Poisson random measures. These results are fashioned as simple tools that can be tailor-made to address
inferential questions arising in a wide range of Bayesian nonparametric and spatial statistical models. The Poisson disintegration method is based on the formal statement of two results concerning a
Laplace functional change of measure and a Poisson Palm/Fubini calculus in terms of random partitions of the integers {1,...,n}. The techniques are analogous to, but much more general than,
techniques for the Dirichlet process and weighted gamma process developed in [Ann. Statist. 12
- Journal of the Royal Statistical Society, Series B , 2002
"... This paper develops mixture models for spatially indexed data. We confine attention to the case of finite, typically irregular, patterns of points or regions with prescribed spatial
relationships, and to problems where it is only the weights in the mixture that vary from one location to another. Our ..."
Cited by 31 (2 self)
This paper develops mixture models for spatially indexed data. We confine attention to the case of finite, typically irregular, patterns of points or regions with prescribed spatial relationships,
and to problems where it is only the weights in the mixture that vary from one location to another. Our specific focus is on Poisson distributed data, and applications in disease mapping. We work in
a Bayesian framework, with the Poisson parameters drawn from gamma priors, and an unknown number of components. We propose two alternative models for spatially-dependent weights, based on
transformations of autoregressive gaussian processes: in one (the Logistic normal model), the mixture component labels are exchangeable, in the other (the Grouped continuous model), they are ordered.
Reversible jump Markov chain Monte Carlo algorithms for posterior inference are developed. Finally, the performance of both of these formulations is examined on synthetic data and real data on
mortality from rare disease.
- Applied Stochastic Models in Business and Industry , 2001
"... We model a call center as a queueing model with Poisson arrivals having an unknown varying arrival rate. We show how to compute prediction intervals for the arrival rate, and use the Erlang
formula for the waiting time to compute the consequences for the occupancy level of the call center. We compar ..."
Cited by 29 (4 self)
We model a call center as a queueing model with Poisson arrivals having an unknown varying arrival rate. We show how to compute prediction intervals for the arrival rate, and use the Erlang formula
for the waiting time to compute the consequences for the occupancy level of the call center. We compare it to the current practice of using a point estimate of the arrival rate (assumed constant) as | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=431805","timestamp":"2014-04-19T17:53:00Z","content_type":null,"content_length":"38038","record_id":"<urn:uuid:bbd7dd20-f3a2-4ab0-98bd-d18f25dda6be>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00202-ip-10-147-4-33.ec2.internal.warc.gz"} |
| {"url":"http://openstudy.com/users/needsgoodanswer/asked/1","timestamp":"2014-04-20T03:18:02Z","content_type":null,"content_length":"98328","record_id":"<urn:uuid:a8e8c348-e8c2-4742-a574-f2ff68ee3a43>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00126-ip-10-147-4-33.ec2.internal.warc.gz"}
/Water Nanofluid as Coolant in a Double-Tube Heat Exchanger Flowing under a Turbulent Flow Regime
Advances in Mechanical Engineering
Volume 2012 (2012), Article ID 891382, 8 pages
Research Article
Performance Evaluation of /Water Nanofluid as Coolant in a Double-Tube Heat Exchanger Flowing under a Turbulent Flow Regime
^1Department of Mechanical Engineering, Islamic Azad University, Abadan Branch, Abadan, Iran
^2Department of Mechanical Engineering, Imam Khomeini International University, Qazvin, Iran
Received 21 June 2012; Revised 21 August 2012; Accepted 24 August 2012
Academic Editor: Hakan F. Oztop
Copyright © 2012 Navid Bozorgan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Nanofluids are expected to be a promising coolant candidate in chemical processes for wastewater remediation and for reducing the size of heat transfer systems. This paper focuses on the potential reduction of coolant mass flowrate in an exchanger with a given heat exchange capacity when nanofluids are used. Al[2]O[3] nanoparticles with diameters of 7 nm dispersed in water at volume concentrations up to 2% are selected as the coolant, and their performance in a horizontal double-tube counterflow heat exchanger under turbulent flow conditions is numerically studied. The results show that, for a given heat exchange capacity, the required flowrate of the nanofluid coolant decreases as the nanoparticle concentration increases: at a volume concentration of 2 vol.%, the mass flowrate of the nanofluid is approximately 24.5% lower than that of pure water (the base fluid) under the given conditions. The pressure drop of the nanofluid is slightly higher than that of water and increases with volume concentration. In addition, the reductions in wall temperature and heat transfer area are estimated.
1. Introduction
Cooling is one of the top technical challenges in obtaining the best thermal performance from heat exchange devices. In chemical processes, one of the most important devices related to energy and heat transfer is the heat exchanger; heat exchangers play an important role in energy conservation, conversion, and recovery. With the rapid development of modern technology, heat exchangers used in various industries require high-heat-flux cooling at levels of tens of MW/m^2. At this level, cooling with conventional fluids such as water and ethylene glycol is challenging because of their poor thermal conductivity. It is therefore necessary to increase the heat transfer capability of the working fluids in heat transfer devices. A recent advancement in nanotechnology has been the introduction of nanofluids, that is, colloidal suspensions of nanometer-sized solid particles in common working fluids. Nanofluids were first proposed by Choi and Eastman [1] in 1995 at the Argonne National Laboratory, USA. Compared with traditional solid-liquid suspensions containing millimeter- or micrometer-sized particles, nanofluids used as coolants in heat exchangers have shown better heat transfer performance because the suspended particles are so small that the fluid behaves much like the base liquid itself.
Nanofluids have attracted attention as a new generation of heat transfer fluids in building heating, in heat exchangers, in chemical plants, and in automotive cooling applications, because of their
excellent thermal performance. Recently, there have been considerable research findings highlighting superior heat transfer performances of nanofluids. Demir et al. [2] investigated numerically
laminar and turbulent forced convection flows of Al[2]O[3]/water nanofluid as working fluid in a horizontal smooth tube with constant wall temperature and reported an enhancement in heat transfer
coefficient. Gherasim et al. [3] presented numerical simulations for a radial flow cooling system with an Al[2]O[3]/water nanofluid flow. The results indicate that the addition of nanoparticles to
the base fluid enhances heat transfer performance. Also the numerical results show that the average Nusselt number and pumping power of nanofluid increase with increasing the particle volume
concentration. Mohammed et al. [4] numerically studied the effects of using nanofluid on the performance of a square shaped microchannel heat exchanger (MCHE). Their results demonstrated that Al[2]O
[3] and Ag nanoparticles have the highest heat transfer coefficient and lowest pressure drop among all nanoparticles tested, respectively. They concluded that the benefits of nanofluids such as
enhancement in heat transfer coefficient are dominant over the shortcomings such as increasing in pressure drop. Ollivier et al. [5] investigated the use of nanofluids as a jacket water coolant in a
gas spark ignition engine. They numerically simulated the unsteady heat transfer through the cylinder and inside the coolant flow. Authors reported that because of higher thermal diffusivity of
nanofluids, the thermal signal variations for knock detection increased by 15% over the predicted using water alone. Vajjha et al. [6] numerically investigated the heat transfer augmentation by
application of two different nanofluids consisting Al[2]O[3] and CuO nanoparticles in an ethylene glycol and water mixture circulating through the flat tubes of an automobile radiator. Their results
showed that at a Reynolds number of 2000, the percentage increase in the average heat transfer coefficient over the base fluid for a 10% Al[2]O[3] nanofluid is 94% and that for a 6% CuO nanofluid is
89%. They found that the average heat transfer coefficient increases with the Reynolds number and also with the particle volumetric concentration. Leong et al. [7] have studied the application of
nanofluids as working fluids in shell and tube heat recovery exchangers in a biomass heating plant and showed that about 7.8% of the heat transfer enhancement could be achieved with the addition of
1% copper nanoparticles in ethylene glycol-based fluid at 26.3kg/s and 111.6kg/s mass flow rate for flue gas and coolant, respectively. Ijam and Saidur [8] theoretically analyzed a minichannel heat
sink with a 20 × 20cm bottom for SiC/water nanofluid and TiO[2]/water nanofluid turbulent flow as coolants through hydraulic diameters. Their results showed that enhancment in thermal conductivity
by dispersed SiC in water at 4% volume fraction was 12.44% and by dispersed TiO[2] in water was 9.99% for the same volume fraction. Also, it was found that by using SiC-water nanofluid as a coolant
instead of water, an improvement of approximately 7.25%–12.43% could be achieved and by using TiO[2]-water 7.63%–12.77%. Saeedinia et al. [9] applied CuO-based oil particles varying in the range of
0.2%–2% inside a circular tube. Their results showed that the CuO nanoparticles suspended in base oil increase the heat transfer coefficient even for a very low particle concentration of 0.2% volume
concentration. Moreover, a maximum heat transfer coefficient enhancement of 12.7% is obtained for a 2% CuO nanofluid. Shafahi et al. [10] used a two-dimensional analysis to study the thermal
performance of a cylindrical heat pipe utilizing Al[2]O[3], CuO, and TiO[2] nanofluids. Their results confirmed that the thermal performance of a heat pipe is improved and temperature gradient along
the heat pipe and thermal resistance across the heat pipe are reduced and maximum capillary heat transfer of the heat pipe is observed when nanofluids are utilized as the working fluid.
From the above, it is evident that nanofluids have valuable applications in heat exchangers in all types of industries. The previous research works have reported the effect of nanofluids as coolants
on the thermal performance of heat exchangers. Their results show that using nanofluid can effectively improve the heat transfer performance, but will also increase the pressure drop and pumping
The previous research works do not concentrate on the effect of nanofluids in the reduction of coolant mass flowrate in exchanger with a given heat exchange capacity. The objective of this paper is
to provide improvements through nanofluids in place of pure working fluid in heat exchangers with a view of decreasing the mass flowrate for providing the same heat exchange capacity.
In this study, 7nm-Al[2]O[3] nanoparticle with concentration up 2 vol.% has been selected as a coolant in a typical horizontal double-tube heat exchanger because of its good thermal properties and
easy availability. Water has been chosen as heat transfer base fluid.
Al[2]O[3] nanoparticles are generally considered as safe material for human being and animals that are actually used in the cosmetic products and water treatment. In addition, Al[2]O[3] nanoparticles
are stabilized in the various ranges of PH. It shall be noted that metal oxides such as Al[2]O[3] nanoparticles are chemically more stable than their metallic counterparts.
In this investigation, first the thermophysical properties of Al[2]O[3]/water nanofluid are calculated by using the well-known correlations developed from experiments. Then, the effects of volume
fraction of the Al[2]O[3] nanoparticles dispersed in water on the thermal performance and potential reduction in mass flowrate are evaluated.
The applicability of nanotechnology towards wastewater remediation and reduction in the presently high initial and operating costs should be considered as future work.
2. Methodology
2.1. Prediction of Thermophysical Properties of Nanofluid
In order to investigate the heat transfer performance of nanofluids and use them in practical applications, it is necessary first to study their thermophysical properties such as density, specific
heat, viscosity, and thermal conductivity. In this study, to validate the numerical results, thermal properties of Al[2]O[3]/water nanofluid are determined by employing well-known empirical
Some properties of hydrophilic rod-like Al[2]O[3] nanoparticles (AF-alumina type) and base fluid (water) which have been used for assessing the nanofluid properties are tabulated in Table 1. The AF
alumina type nanoparticle is rod-like and because of its cylindrical shape and elongation, it has a better heat conduction through the fluid rather than spherical nanoparticles. However the spherical
nanoparticles are often most readily available at the best prices.
The density of Al[2]O[3]/water nanofluid can be calculated using mass balance as [11] where and are the densities of the nanoparticles and base fluid, respectively, and is volume concentration of
According to the concept of solid-liquid mixture, the specific heat of nanofluids is given by following [12]: where and , are the heat specifics of the nanoparticles and base fluid, respectively.
The viscosity of nanofluid can be calculated from the following equation: where is the slope of the relative viscosity to the particle volume fraction. Value of is a constant and calculated from the
experimental results of Chun et al. [13]. In this work, it is equal to 15.4150.
One well-known formula for computing the thermal conductivity of nanofluid is the Kang model which is expressed in the following form [14]:
In the present paper, this model is employed for calculating the thermal conductivity of Al[2]O[3]/water nanofluid.
2.2. Mathematical Modeling
This research attempts to investigate numerically the heat transfer characteristics of a double-tube exchanger with a given heat exchange capacity by water-based Al[2]O[3] nanofluid as a coolant and
pumping power.
Figure 1 represents the dimensions of horizontal double-tube heat exchanger and conditions of hot solvent and nanofluid coolant streams that have been taken into consideration in this work. However,
the following assumptions are made(i)The flow is incompressible, steady-state, and turbulent.(ii)The effect of body force is neglected.(iii)The thermophysical properties of nanofluids are constant.
Mathematical correlations shown in Sections 2.2.1–2.2.3 are taken from references [15–17]. It highlighted not only the influence of nanofluids but also the volume fraction of Al[2]O[3] nanoparticles
to the heat transfer, mass flow rate, and pumping power of a double-tube exchanger. Calculations have been done on hot solvent and coolant sides.
2.2.1. Hot Solvent Side Caculation
(a) The rate of heat transferred to the hot solvent in a double-tube heat exchanger can be written as follows: where and denote the relevant parameters of hot solvent and nanofluid coolant, and are
the inlet and outlet temperatures of hot solvent, and and are the inlet and outlet temperatures of nanofluid coolant.
In this study, the heat exchange capacity of exchanger is equal to 15.376 kW, the inlet and outlet temperatures of hot solvent stream are equal to 40°C and 30°C, respectively, the flowrate of hot
solvent stream is 0.8kgs^−1, and its specific heat capacity is equal to 1922Jkg^−1K^−1.
(b) The heat transfer coefficient of the hot solvent flowing inside the tube under a turbulent regime (Re > 10000) can be calculated as follows [15]: where is the internal diameter of the internal
tube is the viscosity correction factor. In the previous equation the Reynolds and Prandtl numbers are calculated considering the hot solvent properties as follows:
Consequently, heat transfer coefficient of hot solvent that referred to the external area, , is defined as: where is the external diameter of the internal tube.
2.2.2. Nanofluids Side Calculation
In this research, the inlet temperature of nanofluid coolant is equal to 5°C and other important thermal and hydrolic properties of nanofluid coolant such as outlet temperature and mass flowrate are
calcualted from empirical correlations obtined in this section regarding the constant heat exchange capacity of exchanger.
(a) The heat transfer coefficient of the nanofluid as coolant flowing in the annular can be calculated considering the turbulent Nusselt number presented by Li and Xuan [16] as follows: where is the
nanofluid Peclet number and is defined in the following form: where is the diameter of the nanoparticles and is the nanofluids thermal diffusivity which is defined as follows:
The Reynolds and Prandtl numbers in (9) are calculated considering the nanofluid properties as follows: where is the equivalent diameter which is expressed in the following form: where is the
internal diameter of the external tube.
It shall be noted that all physical properties of both hot solvent and nanofluid coolant that appeared in previous equations, except the viscosity correction factor, shall be evaluated at the mean
temperature between inlet and outlet conditions.
The viscosity correction factor is defined as the ratio of viscosity of the nanofluid at the mean temperature of inlet and outlet conditions to that one at the mean temperature of wall tube. The mean
temperature of wall tube, , cannot be calculated explicitly. Therefore, as a first approximation, it is assumed to be equal to 1 and the first values of heat transfer coefficients ( and ) are
calculated using (6)–(13). Then, is calculated by equating the heat transfer rates at both sides of the tube wall as follows:
By having , the exact value of viscosity correction factor is calculated and the previous values for and are modified.
(b) The friction factor of Al[2]O[3]/water nanofluid can be calculated using the formula presented as follows [17]: where
(c) The pressure drop and pumping power () for Al[2]O[3]/water nanofluid used as a coolant in a double-tube heat exchanger are calculated as follows [17]: where is the length of the tube, is the
equivalent diameter of an annulus given by , and is the annular flow area.
2.2.3. Total Heat Transfer Area and Coefficient Calculation
(a) The total heat transfer coefficient can be calculated as follows: where is the fouling resistance, is heat transfer coefficient of hot solvent that referred to the external area, and is the heat
transfer coefficient of the nanofluid coolant. In this work, the fouling resistance is assumed to be .
(b) The total heat transfer area of a double-tube heat exchanger, , is computed from the following equation: where is the total heat transfer coefficient and is the temperature correction factor,
which in the case of the countercurrent flow can be taken equal to 1.
3. Results and Discussion
As mentioned previously, the Kang model has been applied to predict the thermal conductivity of the Al[2]O[3]/water nanofluid. As shown in Figure 2, the thermal conductivity of Al[2]O[3]/water
nanofluid with different concentrations (0–2% volume fraction) has been calculated using Kang model. These results are important for evaluating the heat transfer performance and flowrate of the
coolant. As can be seen, thermal conductivity increases with increasing the nanoparticles volume concentration.
Figure 3 shows the effect of nanoparticles concentration on the heat transfer coefficient and Nusselt number. Results show that the heat transfer coefficient and Nusselt number can be enhanced by
adding nanoparticles to the base fluid.
Increasing the particles concentration raises the fluid viscosity and decreases the Reynolds number and consequently decreases the heat transfer coefficient (Figure 4). But the results shown in
Figure 3 indicate that increasing in particles concentration raises the heat transfer coefficient. Therefore, it can be concluded that the change in the coolant heat transfer coefficient is more than
the change in the fluid viscosity with increasing nanoparticles loading in the base fluid.
A further inspection of Figures 2 and 3 shows that for a volume concentration of 2%, the heat transfer coefficient increases about 64.65%, while the increase of thermal conductivity is below 40%.
In this study, the ratio of thermal conductivity and heat transfer coefficient of nanofluid in comparison with the base fluid is defined by following equations:
Figure 5 shows these defined parameters for the Al[2]O[3]/water nanofluid at various concentrations. This figure reveals that as the concentration increases, the effect of increasing nanoparticles
concentration on changing the thermal conductivity is lower than changing the heat transfer coefficient.
Enhancement of heat transfer by the nanofluid may be resulted from the following two aspects: first is the suspended particles that increase the thermal conductivity of the mixture; the other one is
that chaotic movement of ultrafine particles accelerates energy exchange process between the fluid and the wall.
Figure 6 shows the total heat transfer coefficient for Al[2]O[3]/water nanofluid coolant in a double-tube heat exchanger that has been calculated by (18). As shown in this figure, the total heat
transfer coefficient is high when the probability of collision between nanoparticles and the wall of the heat exchanger has increased under higher concentration conditions. It confirms that
nanofluids have considerable potential to use in cooling systems. A further inspection of Figure 6 shows that the total heat transfer coefficient of the Al[2]O[3]/water nanofluid for volume
concentrations in the range of 0.1% to 2% increases by 0.55%–3.5%.
As mentioned previously, the wall temperature, , shall be calculated to obtain the exact values for viscosity correction factor, , and (discussed previously). Figure 7 shows the reduction percent of
wall temperature and heat transfer area in a double-tube heat exchanger that utilizes Al[2]O[3]/water nanofluid as a coolant under turbulent flow conditions. As it can be seen from Figure 7, the wall
temperature and total heat transfer area decrease with the increasing of volume concentration of nanoparticles. For example, the reduction percent of wall temperature at 0.5%, 1%, 1.5%, and 2% volume
concentrations is about 5.35%, 9.32%, 11.74%, and 13.72%, respectively. Moreover, the reduction of the total heat transfer area at 2% volume concentration is about 3.35%.
Figure 8 shows the required flowrate of Al[2]O[3]/water nanofluid as a coolant at various volume concentrations for providing the same heat exchange capacity (discussed previously). This figure
reveals that at the same heat exchange capacity, the flowrate of nanofluid coolant decreases with the increasing concentration of nanoparticles. For a volume concentration range of 0.1% to 2%, the
mass flowrate decreases by 4.73% to 24.5%.
In order to apply the nanofluids for practical application, in addition to the heat transfer performance it is necessary to study their flow features. Therefore, the effects of Al[2]O[3]/water
nanofluid on the friction factor, pressure drop, and pumping power have been studied in this paper. Figures 9 and 10 show the effects of different concentrations of Al[2]O[3]/water nanofluid on the
friction factor, pressure drop, and pumping power in a double-tube heat exchanger. The results show that nanofluid friction factor and pressure drop increase with increasing nanoparticles loading in
the base fluid. For a concentration of 2 vol.%, the friction factor and pressure drop increase by 13.64% and 15.66%, respectively. Therefore, the pressure drop of tube must be considered when the Al
[2]O[3]/water nanofluid is applied to heat exchange.
4. Conclusions
A numerical study has been carried out on the characteristics of 7nm Al[2]O[3]/water nanofluid with volume concentrations up to 2% in a horizontal double tube counter-flow heat exchanger under
turbulent flow conditions. Thermal conductivity and viscosity for Al[2]O[3]/water nanofluid have been calculated from the experimental results of Chun et al. [13]. The results confirm that nanofluid
offers higher heat performance than water and therefore can reduce the total heat transfer area and also coolant flowrate for providing the same heat exchange capacity.
In order to determine the feasibility of Al[2]O[3]/water nanofluid as a coolant in a double-tube heat exchanger, the effects of nanoparticles on the friction factor, pressure drop, and pumping power
have been evaluated. The results show that using the Al[2]O[3]/water nanofluid at higher particle volume fraction creates a small penalty in pressure drop.
: Total heat transfer area,
: Annular flow area,
: Specific heat, J/kgK
: Internal diameter of the internal tube, m
: External diameter of the internal tube, m
: Internal diameter of the external tube, m
: Equivalent diameter for h calculations, m
: Equivalent diameter for pressure drop calculations, m
: Nanoparticle diameter, m
: Fiction factor
: Heat transfer coefficient,
: Thermal conductivity, W/mK
: Length of the tube, m
LMTD: Logarithm mean temperature difference
: Mass flow rate, kg/s
: Pressure drop, Pa
: Peclet number
: Prandtl number
: Pumping power, W
: Heat exchange capacity of exchanger, kW
: Reynolds number
: Fouling resistance,
: Inlet and outlet temperatures of hot solvent,
: Mean temperature of wall tube,
: Inlet and outlet temperatures of nanofluid coolant,
: Total heat transfer coefficient,
: Velocity, m/s
Greek Letters
: Density, kg/
: Volume concentration
: Viscosity, kg/ms
: Thermal diffusivity, .
ave: Average
: Base fluid
: Hot solvent
: Nanofluid
: Particles
: Wall tube.
The authors would like to express their appreciation to the Islamic Azad University of Abadan Branch for providing financial support.
1. S. U. S. Choi and J. A. Eastman, “Enhancing thermal conductivity of fluids with nanoparticles,” in ASME International Mechanical Engineering Congress and Exhibition, San Francisco, Calif, USA,
2. H. Demir, A. S. Dalkilic, N. A. Kurekci, B. Kelesoglu, and S. Wongwises, “A numerical investigation of nanofluids forced convection flow in a horizontal smooth tube,” in Proceedings of the 14th
International Heat Transfer Conference, vol. 6, ASME, Washington, DC, USA, August 2010. View at Publisher · View at Google Scholar
3. I. Gherasim, G. Roy, C. T. Nguyen, and D. Vo-Ngoc, “Heat transfer enhancement and pumping power in confined radial flows using nanoparticle suspensions (nanofluids),” International Journal of
Thermal Sciences, vol. 50, no. 3, pp. 369–377, 2011. View at Publisher · View at Google Scholar · View at Scopus
4. H. A. Mohammed, G. Bhaskaran, N. H. Shuaib, and H. I. Abu-Mulaweh, “Influence of nanofluids on parallel flow square microchannel heat exchanger performance,” International Communications in Heat
and Mass Transfer, vol. 38, no. 1, pp. 1–9, 2011. View at Publisher · View at Google Scholar · View at Scopus
5. E. Ollivier, J. Bellettre, M. Tazerout, and G. C. Roy, “Detection of knock occurrence in a gas SI engine from a heat transfer analysis,” Energy Conversion and Management, vol. 47, no. 7-8, pp.
879–893, 2006. View at Publisher · View at Google Scholar · View at Scopus
6. R. S. Vajjha, D. K. Das, and P. K. Namburu, “Numerical study of fluid dynamic and heat transfer performance of Al[2]O[3] and CuO nanofluids in the flat tubes of a radiator,” International Journal
of Heat and Fluid Flow, vol. 31, no. 4, pp. 613–621, 2010. View at Publisher · View at Google Scholar · View at Scopus
7. K. Y. Leong, R. Saidur, T. M. I. Mahlia, and Y. H. Yau, “Modeling of Shell and tube heat recovery exchanger operated with nanofluid based coolants,” International Journal of Heat and Mass
Transfer, vol. 55, no. 4, pp. 808–816, 2012.
8. A. Ijam and R. Saidur, “Naofluid as a coolant for electronic devices (cooling of electronic devices),” Applied Thermal Engineering, vol. 32, pp. 76–82, 2012.
9. M. Saeedinia, M. A. Akhavan-Behabadi, and M. Nasr, “Experimental study on heat transfer and pressure drop of nanofluid flow in a horizontal coiled wire inserted tube under constant heat flux,”
Experimental Thermal and Fluid Science, vol. 36, pp. 158–168, 2012.
10. M. Shafahi, V. Bianco, K. Vafai, and O. Manca, “An investigation of the thermal performance of cylindrical heat pipes using nanofluids,” International Journal of Heat and Mass Transfer, vol. 53,
no. 1–3, pp. 376–383, 2010. View at Publisher · View at Google Scholar · View at Scopus
11. B. C. Pak and Y. I. Cho, “Hydrodynamic and heat transfer study of dispersed fluids with submicron metallic oxide particles,” Experimental Heat Transfer, vol. 11, no. 2, pp. 151–170, 1998. View at
12. Y. Xuan and W. Roetzel, “Conceptions for heat transfer correlation of nanofluids,” International Journal of Heat and Mass Transfer, vol. 43, no. 19, pp. 3701–3707, 2000. View at Publisher · View
at Google Scholar · View at Scopus
13. B. H. Chun, H. U. Kang, and S. H. Kim, “Effect of alumina nanoparticles in the fluid on heat transfer in double-pipe heat exchanger system,” Korean Journal of Chemical Engineering, vol. 25, no.
5, pp. 966–971, 2008. View at Publisher · View at Google Scholar · View at Scopus
14. H. U. Kang, J. M. Oh, and S. H. Kim, “Estimation of thermal conductivity of nanofluid using experimental effective particle volume,” Experimental Heat Transfer, vol. 19, no. 3, pp. 181–191, 2006.
View at Publisher · View at Google Scholar · View at Scopus
15. E. Cao, Heat Transfer in Process Engineering, McGraw-Hill, New York, NY, USA, 2010.
16. Q. Li and Y. Xuan, “Convective heat transfer and flow characteristics of Cu-water nanofluid,” Science in China E, vol. 45, no. 4, pp. 408–416, 2002. View at Scopus
17. R. S. Vajjha, D. K. Das, and D. P. Kulkarni, “Development of new correlations for convective heat transfer and friction factor in turbulent regime for nanofluids,” International Journal of Heat
and Mass Transfer, vol. 53, no. 21-22, pp. 4607–4618, 2010. View at Publisher · View at Google Scholar · View at Scopus | {"url":"http://www.hindawi.com/journals/ame/2012/891382/","timestamp":"2014-04-21T06:04:30Z","content_type":null,"content_length":"217424","record_id":"<urn:uuid:98eea80f-e79a-492a-8f22-8416fd440c92>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00411-ip-10-147-4-33.ec2.internal.warc.gz"} |
start displaying proof trees during proofs
Major Section: PROOF-TREE
Also see proof-tree and see stop-proof-tree. Note that :start-proof-tree works by removing 'proof-tree from the inhibit-output-lst; see set-inhibit-output-lst.
Proof tree displays are explained in the documentation for proof-tree. :start-proof-tree causes proof tree display to be turned on, once it has been turned off by :stop-proof-tree.
Do not attempt to invoke start-proof-tree during an interrupt in the middle of a proof. | {"url":"http://planet.racket-lang.org/package-source/cce/dracula.plt/2/0/language/acl2-html-docs/START-PROOF-TREE.html","timestamp":"2014-04-18T21:15:36Z","content_type":null,"content_length":"1508","record_id":"<urn:uuid:cfc9fb4c-f09d-4e74-99d3-c6e960dc61a1>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00402-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Three snails at the point of equilateral triangle start to move at each other. What is the shape of curve?
• one year ago
• one year ago
Best Response
You've already chosen the best response.
assuming they start from vertices, to meet at centre then there are different possibilites they can meet at incentres also or at centroid too
Best Response
You've already chosen the best response.
in either case it would be along a line or a ray
Best Response
You've already chosen the best response.
it can't be a ray ... they aren't moving to center. they are moving to each other ( the other snail) |dw:1347897411584:dw|
Best Response
You've already chosen the best response.
they could meet by the vertices and be moving clockwise to each other
Best Response
You've already chosen the best response.
nop ... they all in the direction of another snail.
Best Response
You've already chosen the best response.
is it going to be some polar graph?
Best Response
You've already chosen the best response.
don't know ..haven't solved it yet. i tried to do this couple of months ago ... couldn't do it.
Best Response
You've already chosen the best response.
How can one move in two direction at once
Best Response
You've already chosen the best response.
they are moving in cycle.
Best Response
You've already chosen the best response.
A->B ->C->A
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
Great riddle let us think, thanks
Best Response
You've already chosen the best response.
Well I think I know the beginning:
Best Response
You've already chosen the best response.
I guess there will always be an equilateral triangle between them
Best Response
You've already chosen the best response.
Something like this:|dw:1347902736343:dw|
Best Response
You've already chosen the best response.
The equilateral triangle shrinks and rotates at a speed proportional to the snails' pace.
Best Response
You've already chosen the best response.
Yep.. @across
Best Response
You've already chosen the best response.
|dw:1347902612412:dw| One writes the differential equation Remembering v*dt = dx
Best Response
You've already chosen the best response.
d theta = sin theta * dx
Best Response
You've already chosen the best response.
\[\frac{ d \theta }{ dx } = \sin \theta\]
Best Response
You've already chosen the best response.
This is integrable easily
Best Response
You've already chosen the best response.
sorry ... had been away ... carry on
Best Response
You've already chosen the best response.
\[\theta(x) = \cos(x)\]
Best Response
You've already chosen the best response.
This is 1-st try one needs to improve the geometry
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
I think there is a possibility of that also
Best Response
You've already chosen the best response.
might be ... let's assume unit velocity for now ... we will generalize it later.
Best Response
You've already chosen the best response.
by the way my soln is a piece of spiral
Best Response
You've already chosen the best response.
|dw:1347903217601:dw| yep .. it should be spiral ...
Best Response
You've already chosen the best response.
ya did it that way @experimentX
Best Response
You've already chosen the best response.
@Mikael do u know the equation of a spiral........ Its beyond my knowledge
Best Response
You've already chosen the best response.
this is diverging spiral.
Best Response
You've already chosen the best response.
also note that I don't the answer of this Q.
Best Response
You've already chosen the best response.
This may be funny but this will depend upon the length of the snail
Best Response
You've already chosen the best response.
Because when they meet there has to be an equilateral triangle with length of the side equal to the length of the snail
Best Response
You've already chosen the best response.
|dw:1347903644328:dw| kinda seems this way.
Best Response
You've already chosen the best response.
@sauravshakya you have a knack for making problems difficult ... well i agree that this gives insight into things ... but for this case. try to take the size of snail that would simplify the
problem. take it as a point size snail.
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
Well I think I have got it - by geberal consideration
Best Response
You've already chosen the best response.
Here 1) After constant time Delat t ( constant) the pictire undergoes similarity transformation by A) rotating, B) Shrinking by CONSTANT factor Z(DElta t) The only spiral that is self similar is
Archimedian Spiral where r is a LInear function of t
Best Response
You've already chosen the best response.
Sorry r Linear in theta
Best Response
You've already chosen the best response.
well ... i didn't know that. let's say we know the answer. any method arising from calculus?
Best Response
You've already chosen the best response.
@experimentX , the question can be solved in numerous ways, and is open to interpretation you can also use graph theory to find the shortest path which infact will define the curve other
solutions like rotation of triangle are justified if sufficient and necessary conditions are provided
Best Response
You've already chosen the best response.
the problem is supposed to be calculus problem.
Best Response
You've already chosen the best response.
well that should have been indicated well in advance before requesting help, makes life easy than scratching heads : )
Best Response
You've already chosen the best response.
Very high-brow @psi9epsilon yet I have offered a definite solution (right or wrong) but precise. Either disprove it or show my argument deficient.
Best Response
You've already chosen the best response.
well ... sorry my bad :/
Best Response
You've already chosen the best response.
@Mikael, do not approve or disapprove your solution as it MAY NOT be the ONLY solution
Best Response
You've already chosen the best response.
I CLAIM that the only spiral that has in 0.002 time-units A) constant rotation B) Constant zoom-down factor Z Is the above
Best Response
You've already chosen the best response.
it's a spiral ... at least we know that.
Best Response
You've already chosen the best response.
0.002 is simply to underscore the constancy of\[\Delta t\]
Best Response
You've already chosen the best response.
Can any one show me how the equation of a spiral looks
Best Response
You've already chosen the best response.
R(t) = exp(Kt) theta(t) = kt
Best Response
You've already chosen the best response.
let's solve this Q for particular case. assume side is 5 and speed is 1
Best Response
You've already chosen the best response.
@sauravshakya there are numerous way in which you can have a spiral ... try to look for polar equation or parametric equation ... this is same a circle except the radius changes.
Best Response
You've already chosen the best response.
\[R(\theta) = R_0e^{k*\theta}\]
Best Response
You've already chosen the best response.
Have no clue on polar equation or parametric equation
Best Response
You've already chosen the best response.
well .. that's one hell of one spiral ..
Best Response
You've already chosen the best response.
Well guys I have to go now.
Best Response
You've already chosen the best response.
AAND I HAVE PROOF ! Here it is\[R(t + \Delta T) = R(t) * e^{k \Delta T} ---- constant factor\]
Best Response
You've already chosen the best response.
\[\theta(t + \Delta T) = \theta(t) + k*\Delta T ----> constant-rotation\]
Best Response
You've already chosen the best response.
So @experimentX @across @siddhantsharan and whoever want - check this solution !
Best Response
You've already chosen the best response.
Please CHECK this solution @estudier @mukushla @hartnn
Best Response
You've already chosen the best response.
@experimentX Please accept the proof - it is rock solid. Do not evade - this is THE only Only solution
Best Response
You've already chosen the best response.
@mathslover please check my solution of exponential spiral
Best Response
You've already chosen the best response.
hold on ... I'm hunting for spirals
Best Response
You've already chosen the best response.
By the way THIS IS A GENERAL SOLUTION FORM FOR ANY EQUILATERAL POLYGON WITH N-SNAILS IN EACH VERTEX !!!!
Best Response
You've already chosen the best response.
Hunting is good, but hunting for WRONG is just funny
Best Response
You've already chosen the best response.
THIS IS A GENERAL SOLUTION FORM FOR ANY EQUILATERAL POLYGON WITH N-SNAILS IN EACH VERTEX !!!!
Best Response
You've already chosen the best response.
no ... just for fun. man fun is good.
Best Response
You've already chosen the best response.
If I am right - PLEASE say so. Check - this is justified because you asked the. And I solved it (or not - but check) question
Best Response
You've already chosen the best response.
all right ... i'll check.
Best Response
You've already chosen the best response.
\[R( \Theta) = R_0 e^{k \Theta}\]
Best Response
You've already chosen the best response.
This GIVES THE SHAPE OF THE PATHS FOR ANY REGULAR POLYGON
Best Response
You've already chosen the best response.
Square, Pentagon, Hexagon, Triangle whatever your n is
Best Response
You've already chosen the best response.
|dw:1347905702311:dw| yeah this is definitely a r = k e^(theta)
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
let's try to solve this problem using calculus method some other day.
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/505744cde4b0a91cdf44fac1","timestamp":"2014-04-21T10:23:38Z","content_type":null,"content_length":"449437","record_id":"<urn:uuid:b157f29d-8d9c-4489-9019-1bab910e1457>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00371-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cumby Math Tutor
Find a Cumby Math Tutor
I hold a BS in biochemistry and a PhD in cell biology, so I have extensive education in science and math. I like to adapt my tutoring style to the student and work with him/her to figure out the
method that they best respond to. I like to provide structured tutoring experiences, so please send as much detail about the subject matter that requires tutoring as possible.
9 Subjects: including algebra 2, geometry, precalculus, biochemistry
...I use a lot of hands-on activities in my tutoring session as well as stories and silly, goofy learning tricks. I have been told that the methods I use are very helpful and I can work well with
everyone. I recently worked at a summer camp, so I can easily work with all ages.
14 Subjects: including algebra 1, algebra 2, American history, biology
...I'm available in the evenings and I'm ready to start helping your student today!Algebra I is one of my favorite subjects. Most people shy away from functions and factoring, that, to me, is the
most fun! Elementary math is the foundation for everything else!
21 Subjects: including prealgebra, geometry, probability, algebra 1
I was born in New Orleans, Louisiana and at the age of three, my family moved overseas. I have lived in four different countries (America, Cameroon, Oman, and Egypt) and I have visited over twenty
other countries. I love the overseas life and have seen great sights and met amazing people.
19 Subjects: including algebra 2, drawing, geometry, prealgebra
...My husband is taking courses at Collin college and I have assisted him over the past few years with all core college courses. I have also tutored middle school aged children in math and other
subjects. I have a high-responsibilty corporate job that has taught me a great deal about Microsoft Excel, Word, PowerPoint and time management!
17 Subjects: including algebra 1, prealgebra, reading, English | {"url":"http://www.purplemath.com/cumby_tx_math_tutors.php","timestamp":"2014-04-17T07:35:02Z","content_type":null,"content_length":"23552","record_id":"<urn:uuid:9ff658ad-6e39-43db-b4ec-7eec7fc9e34d>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00512-ip-10-147-4-33.ec2.internal.warc.gz"} |
Manhattan, NY Algebra 2 Tutor
Find a Manhattan, NY Algebra 2 Tutor
Hi, my name is Milton. I am a fresh university graduate in Physics and I have had lots of great teachers throughout my education. I have also had a lot of bad ones, so I feel like I have a
reasonable idea of what works and what doesn't when trying to teach someone something new in physics or mathematics.
17 Subjects: including algebra 2, chemistry, Spanish, calculus
...I can submit transcripts if necessary. I have a Master's degree in Applied Math from the University of Michigan and three years of tutoring experience (both private tutoring and with the
University's learning center). I have taken multiple college-level courses, including symbolic logic and introduction to mathematical proof which involve logic. I received an A (4.00) in both
15 Subjects: including algebra 2, calculus, geometry, statistics
...Also, I have good problem solving skills that I can teach to you and I can classify the different test questions into categories so that you can focus on the areas where you need the most
help. Finally, I have already tutored several students in math SAT prep this year and so am very familiar wi...
8 Subjects: including algebra 2, chemistry, geometry, algebra 1
...My years of tutoring and teaching allow me to explain difficult material cogently and concisely while approaching every tutoring session with patience and understanding. Please feel free to
contact me with any specific questions, comments, or concerns. I assure you that you will receive a prompt response.
42 Subjects: including algebra 2, reading, English, writing
...I also spent three years on the staff of the student newspaper, including three semesters as an editor, copy editing content and coaching journalists to help them write as effectively as
possible. During my time there, I received numerous school wide, regional, national, and international awards...
28 Subjects: including algebra 2, reading, English, geometry
Related Manhattan, NY Tutors
Manhattan, NY Accounting Tutors
Manhattan, NY ACT Tutors
Manhattan, NY Algebra Tutors
Manhattan, NY Algebra 2 Tutors
Manhattan, NY Calculus Tutors
Manhattan, NY Geometry Tutors
Manhattan, NY Math Tutors
Manhattan, NY Prealgebra Tutors
Manhattan, NY Precalculus Tutors
Manhattan, NY SAT Tutors
Manhattan, NY SAT Math Tutors
Manhattan, NY Science Tutors
Manhattan, NY Statistics Tutors
Manhattan, NY Trigonometry Tutors | {"url":"http://www.purplemath.com/Manhattan_NY_Algebra_2_tutors.php","timestamp":"2014-04-17T11:30:35Z","content_type":null,"content_length":"24330","record_id":"<urn:uuid:caaab6c3-cd96-4479-9c21-ab6cdc1c5844>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00616-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rate of Convergence for sin(1/x^2) with Maclaurin is undefined?
I decided to put my attempt at a solution before the question, because the "solution" is what my question is about.
1. The problem statement, all variables and given/known data
Find the rate of convergence for the following as n->infinity:
lim [sin(1/n^2)]
Let f(n) = sin(1/n^2) for simplicity.
2. The attempt at a solution
I was searching through other forums and resources, and finally found a solution.
It said to use the Maclaurin Series (thus x0 = 0), but this would make every term
to look like:
f(0) + f'(0)*(n^1) + (1/2)*f''(0)*(n^2) + (1/4)*f'''(0)*(n^3) + ... + remainder
3. Relevant equations
How can we solve when 1/0 is undefined?
For example, in *every* term we have a sin(1/0^x) somewhere. This doesn't work, obviously.
This is solvable using the first few terms of the Maclaurin polynomial according to other sources, but I do not understand how. Did I overlook something? Or am I fundamentally misunderstanding the | {"url":"http://www.physicsforums.com/showthread.php?t=529954","timestamp":"2014-04-17T04:02:41Z","content_type":null,"content_length":"23168","record_id":"<urn:uuid:7012ce90-cff5-4125-b7b4-17b428696dc4>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00480-ip-10-147-4-33.ec2.internal.warc.gz"} |
BMC Bioinformatics. 2004; 5: 137.
Few amino acid positions in rpoB are associated with most of the rifampin resistance in Mycobacterium tuberculosis
Abstract
Background
Mutations in rpoB, the gene encoding the β subunit of DNA-dependent RNA polymerase, are associated with rifampin resistance in Mycobacterium tuberculosis. Several studies have been conducted where
minimum inhibitory concentration (MIC, which is defined as the minimum concentration of the antibiotic in a given culture medium below which bacterial growth is not inhibited) of rifampin has been
measured and partial DNA sequences have been determined for rpoB in different isolates of M. tuberculosis. However, no model has been constructed to predict rifampin resistance based on sequence
information alone. Such a model might provide the basis for quantifying rifampin resistance status based exclusively on DNA sequence data and thus eliminate the requirements for time consuming
culturing and antibiotic testing of clinical isolates.
Results
Sequence data for amino acid positions 511–533 of rpoB and associated MIC of rifampin for different isolates of M. tuberculosis were taken from studies examining rifampin resistance in clinical
samples from New York City and throughout Japan. We used tree-based statistical methods and random forests to generate models of the relationships between rpoB amino acid sequence and rifampin
resistance. The proportion of variance explained by a relatively simple tree-based cross-validated regression model involving two amino acid positions (526 and 531) is 0.679. The first partition in
the data, based on position 531, results in groups that differ one hundredfold in mean MIC (1.596 μg/ml and 159.676 μg/ml). The subsequent partition based on position 526, the most variable in this
region, results in a > 354-fold difference in MIC. When considered as a classification problem (susceptible or resistant), a cross-validated tree-based model correctly classified most (0.884) of the
observations and was very similar to the regression model. Random forest analysis of the MIC data as a continuous variable, a regression problem, produced a model that explained 0.861 of the
variance. The random forest analysis of the MIC data as discrete classes produced a model that correctly classified 0.942 of the observations with sensitivity of 0.958 and specificity of 0.885.
Conclusion
Highly accurate regression and classification models of rifampin resistance can be made based on this short sequence region. Models may be better with improved (and consistent) measurements of MIC
and more sequence data.
Background
Rifampin, one of the principal drugs used in tuberculosis treatment, is a semi-synthetic antibiotic that inhibits transcription by preventing RNA synthesis. Isolates of Mycobacterium tuberculosis
resistant to rifampin occur at low to moderate frequencies in many regions of the world [1]. Mutations in rpoB, the gene encoding the β subunit of DNA-dependent RNA polymerase, are associated with
rifampin resistance. In the laboratory, drug resistance is quantified in terms of minimum inhibitory concentration (MIC), which is defined as the minimum concentration of the antibiotic in a given
culture medium below which bacterial growth is not inhibited.
Several studies have been conducted where MIC of rifampin has been measured and partial DNA sequences have been determined for rpoB in different isolates of M. tuberculosis [2–6]. However, no model
has been constructed to predict rifampin resistance based on sequence information alone. Such a model might provide the basis for quantifying rifampin resistance status based exclusively on DNA
sequence data and thus eliminate the requirements for time consuming culturing and antibiotic testing of clinical isolates. Tree-based statistical methods (see Methods) have generated very accurate
models relating amino acid sequence of short (8-mer) peptides to their binding by major histocompatibility complex (MHC) class I molecules with higher accuracy than artificial neural networks [7].
Both tree-based models and aggregation of such models through random forests (see Methods) have proven to be quite successful in other problems involving sequence data as covariates such as HIV-1
replication capacity [8] and cytidine to uridine RNA editing in plant mitochondria [9]. The success of tree-based statistical models and random forests in these problems involving covariates derived
from sequence data motivated our application of these models to the problem of rifampin resistance in M. tuberculosis.
The response variable is a set of continuously distributed values for MIC, which makes the problem one of regression. These data are used to answer the following questions: What proportion of the
variance in MIC is attributable to sequence differences in positions 511–533 of the β subunit of RNA polymerase of M. tuberculosis? What particular positions, and what distribution of amino acids at
those positions, are associated with most of the variance in MIC? Alternatively, the response variable could be cast in discrete terms: resistant or susceptible. This is possible by assuming a
threshold value for MIC above which an isolate is considered resistant to rifampin. Among the specific questions we can answer with such a model are the following: What particular positions, and what
distribution of amino acids at those positions, allow for distinguishing rifampin-susceptible and rifampin-resistant isolates of M. tuberculosis? What is the misclassification error rate associated
with susceptibility prediction for these data? We address these questions and evaluate the ability to predict MIC from protein sequence data (inferred from DNA sequence data) using tree-based
regression and classification methods. We find that these methods generate highly accurate models of rifampin resistance.
Results
The data set used in the study consists of 173 observations with 60 distinct genotype-phenotype combinations (Table 1). The most frequent combination has 47 occurrences, and there are 40
unique (singleton) combinations. MIC for rifampin varies from 0.0625 μg/ml to > 512 μg/ml. The 173 sequences are distributed among 24 genotypes, 11 of which occur uniquely in the data set. The
plurality genotype is represented by 69 samples; 98 samples differ from the plurality by one amino acid; and the remaining 6 samples differ from the plurality by two amino acids. Some genotypes
defined by the partial sequence of rpoB are associated with several different phenotypes (MIC values). Also, some genotypic states are associated with large effects, while some have little or no
effect on MIC phenotype. Finally, some changes in MIC are not associated with changes in the sequence region examined. These genotype data are short (69 bp) partial sequences of a single gene, and
thus they may not contain all phenotypically relevant genetic information. Indeed, there is evidence that amino acid changes outside of the examined region are associated with changes in MIC for
rifampin [3]. Additionally, the sample size is small (also typical of most genotype-phenotype datasets), which will decrease power. Nonetheless these data are typical of studies surveying the genetic
variation associated with antibiotic resistance and of genotype-phenotype data in general. Thus they make an appropriate subject of investigation.
Table 1. Minimum inhibitory concentration (MIC) of rifampin and associated variable amino acids in positions 511–533 of the β subunit of RNA polymerase of Mycobacterium tuberculosis [2–4].
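To make the structure of these data concrete, the sketch below shows one way such a genotype-phenotype table could be prepared for tree-based analysis in Python with pandas and scikit-learn. This is not the code used in the study; the file name and column names are hypothetical. Note that CART-style implementations (e.g., R's tree or rpart) split directly on subsets of categories, whereas scikit-learn requires numeric inputs; one-hot indicator columns such as "position 531 = S" express the same partitions reported here.

```python
import pandas as pd

# Hypothetical input: one row per isolate, one column per variable amino
# acid position (e.g. "pos511" ... "pos533") plus the measured MIC in ug/ml.
data = pd.read_csv("rpoB_511_533_mic.csv")

positions = [c for c in data.columns if c != "MIC"]

# One-hot encode the unordered categorical amino acid states; each binary
# indicator (e.g. pos531_S, serine vs. not serine) can reproduce the
# two-way splits described in this study.
X = pd.get_dummies(data[positions])
y = data["MIC"]  # minimum inhibitory concentration, ug/ml
```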
Regression tree analysis
The regression tree for the relationship of rpoB amino acid sequence and MIC has two splits defining three terminal nodes (Fig. 1). At each node in the tree, the MIC prediction given (μg/ml)
is the mean of all isolates at that node. The first split is at the topmost node (root node), which consists of the entire sample, and is based on the amino acid at position 531, with those sequences having
serine (S) going to the left child node, and those having leucine (L) or tryptophan (W) going to the right child node. The best split for each node is that which gives the largest decrease in the
error. Here error is measured as the deviance, which for a continuous variable is a constant multiple of the residual sums-of-squares. Reported values were determined using 10-fold cross-validation.
Moving down the tree the error decreases, as the sum of the deviance for each pair of child nodes is less than the deviance of the parent node. Given the hierarchical nature of trees and the
criterion used to choose splits, the first split, that based on position 531, explains the highest proportion of the overall phenotypic variance. This bisection of the data results in groups that
differ one hundredfold in mean MIC (1.596 μg/ml and 159.676 μg/ml). The subsequent partition based on position 526, the most variable in this region, results in a > 354-fold difference in MIC. The
proportion of the variance in MIC explained across all splits involving the two amino acid positions (526 and 531) is 0.679. All proportions of variance explained by the model as reported here are
those estimated through cross-validation and are not based on re-substitution, and thus represent appropriately conservative estimates.
Figure 1. Cross-validated pruned regression tree for minimum inhibitory concentration (MIC) of rifampin (μg/ml) based on amino acid sequence data from positions 511–533 of the β subunit of RNA polymerase of Mycobacterium tuberculosis.
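Continuing from the encoding sketch above, a cross-validated regression tree of this general form can be fit as follows. The depth and leaf-size limits are illustrative stand-ins for cost-complexity pruning, not the settings used in the study, and the default squared-error criterion corresponds to the deviance (residual sum of squares) described here.

```python
from sklearn.model_selection import KFold, cross_val_score
from sklearn.tree import DecisionTreeRegressor

# Shallow tree mirroring the two-split structure reported in the text.
reg = DecisionTreeRegressor(max_depth=2, min_samples_leaf=5, random_state=0)

# 10-fold cross-validated proportion of variance explained (R^2), the
# quantity analogous to the 0.679 reported for the pruned tree.
cv = KFold(n_splits=10, shuffle=True, random_state=0)
print(cross_val_score(reg, X, y, cv=cv, scoring="r2").mean())
```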
Classification tree analysis
From a clinical perspective it may be most relevant to consider the level of drug resistance as a two-state categorical variable (susceptible or resistant) rather than as a continuously distributed
variable. In clinical practice, if an isolate of M. tuberculosis is determined to be rifampin resistant then rifampin is replaced with another antibiotic. Although blood serum concentration of
rifampin reaches levels of 6–7 μg/ml about 1.5–2 hours after ingestion [10], a clinically relevant MIC value for dichotomizing the MIC values would be lower than this peak. We conservatively
define MIC values ≤ 1 μg/ml as susceptible and values > 1 μg/ml as resistant, a definition consistent with conventional standards [11]. With this dichotomization we can explore the use of tree-based
statistical classification to predict rifampin resistance in a way that is more relevant to clinical practice.
The predictor variables are again the unordered categorical designations of amino acids at polymorphic positions. The classification tree for these data (Fig. 2) has two splits based on two
of the 11 variable amino acid positions. At each node in the tree, the prediction of rifampin susceptibility status (susceptible or resistant) is given for all isolates at that node. The first split
is based on position 531; those isolates with serine (S) are predicted to be susceptible, and those with leucine (L) or tryptophan (W) are predicted to be resistant. The class counts for the full
data set are given at each node. For example, the root node (topmost node in the figure) contains all 173 cases, of which 103 are resistant to rifampin, and the remaining 70 isolates
are susceptible to rifampin. The proportion of correctly classified observations across all splits as determined by re-substitution of the observations on the cross-validation pruned subtree is
0.884. Comparing this tree to the pruned regression subtree (Fig. 1) reveals that the two split definitions in each tree are identical. Both the regression and classification tree models are
significant (P < 0.0001) based on permutation tests.
Figure 2. Cross-validated pruned classification tree for rifampin susceptibility and resistance based on amino acid sequence data from positions 511–533 of the β subunit of RNA polymerase of Mycobacterium tuberculosis. The numbers of observations in each class are given at each node.
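A classification analysis in the same spirit, including a label-permutation test like the one used to assess model significance, could be sketched as follows (again continuing from the encoding sketch; the tree settings and number of permutations are illustrative, not the study's).

```python
from sklearn.model_selection import StratifiedKFold, permutation_test_score
from sklearn.tree import DecisionTreeClassifier

# Dichotomize MIC at the breakpoint used in the text: > 1 ug/ml = resistant.
resistant = (y > 1.0).astype(int)

clf = DecisionTreeClassifier(max_depth=2, min_samples_leaf=5, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

# Cross-validated accuracy (cf. the 0.884 proportion correctly classified)
# together with a permutation test that refits after shuffling the labels.
acc, perm_scores, pvalue = permutation_test_score(
    clf, X, resistant, cv=cv, n_permutations=1000, random_state=0)
print(acc, pvalue)
```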
Random forest analyses
The random forest analysis, which aggregates results over many tree models, each constructed on subsamples of the data, produced markedly better models as compared to the single tree-based models.
The random forest analysis of the MIC data as a continuous variable, a regression problem, produced a model that explained 0.861 of the variance. The random forest analysis of the MIC data as
discrete classes (susceptible and resistant), a classification problem, produced a model that correctly classified 0.942 of the observations with corresponding sensitivity of 0.958 and specificity of
Although both the regression and classification random forest results are markedly better than the single tree-based models, they do lack the ease of interpretation of a tree model. However, variable
importance can be assessed in random forests by measuring the increase in group purity based individual models containing the variable. As might be expected, the results for both regression and
classification are similar and identify the same amino acid positions as being most important in determining response to rifampin as did the single tree models: primarily 531 and 526, and much less
so for 513 and 516 (Figure (Figure33).
Variable importance plot from random forest classification analysis. The plot includes all polymorphic positions in the region examined, and shows the importance of each position as the decrease in
the Gini index (a measure of impurity) induced by splitting ...
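A matching sketch for the forest analyses, again using the hypothetical rif data frame from above; ntree and mtry follow the values stated in Methods (1 × 10^4 trees; two variables per node for classification, eight for regression):

    library(randomForest)

    ## Classification forest on susceptibility status.
    rf.cls <- randomForest(status ~ . - mic, data = rif,
                           ntree = 10000, mtry = 2, importance = TRUE)
    print(rf.cls)            # OOB error estimate and confusion matrix

    ## Regression forest on log2 MIC (positions only as predictors).
    rifr <- rif
    rifr$log2mic <- log2(rifr$mic)
    rifr$mic <- rifr$status <- NULL
    rf.reg <- randomForest(log2mic ~ ., data = rifr, ntree = 10000, mtry = 8)
    print(rf.reg)            # percent variance explained (cf. 0.861 reported)

    ## Gini-based variable importance, as plotted in Figure 3.
    varImpPlot(rf.cls, type = 2)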
Discussion
Analysis of genotype and phenotype data poses several significant challenges. Data characteristics such as a mixture of variable types, high dimensionality, interactions between variables, and a
preponderance of unordered categorical variables render many candidate analytical methods inappropriate or ineffective. Tree-based statistical models adeptly deal with all of these challenges and
do so in a way that produces readily interpretable results.
Through the analyses described above, we have learned several things that were not previously apparent. We have distinguished phenotypically relevant from phenotypically irrelevant changes in
genotype by establishing the relative importance of the polymorphic sequence positions, and amino acids at those positions, as they affect susceptibility to rifampin. For example, although they are
polymorphic, changes at positions 511, 512, 515, 521 and 529 did not significantly affect MIC for rifampin. The hierarchical importance of changes, and their contextual/conditional relationships, are
depicted in the resulting tree diagrams in a readily interpretable manner. Inherent in the tree structure is the fact that earlier splits explain more variation in phenotype than subsequent splits.
For example, the first split, at position 531, explains more variation than does the split based on position 526.
The models can be used to predict MIC for rifampin where genotype is known, as well as provide the basis for hypothesis testing involving future empirical work. Furthermore, models can be refined to
yield improved predictions by incorporating additional data as they become available. Improved models may be possible with additional data: full length sequence of rpoB may include sequence features
that are responsible for some variation in MIC values for rifampin, and sequence data from additional strains might lead to even more general models.
As demonstrated above, the relationship of genotype to phenotype can be quantified using tree-based statistical models and aggregations thereof. Our approach has been to use these types of models in
the analysis of genotype-phenotype relationships because they offer distinct advantages compared to other methods and allow for rigorous and ready interpretation of results. Tree-based and random forest
analyses are readily applicable to other forms of genotypic information including data that take the general form of visualized fragments (bands on gels) such as microsatellites, restriction fragment
length polymorphisms (RFLPs), amplified fragment length polymorphisms (AFLPs), and similar data. Tree-based and random forest analyses can also be applied directly to DNA sequence data including
single nucleotide polymorphisms (SNPs). In general, tree-based statistical and random forest models are applicable to all cases where the goal is to examine the relationship between genotype and
phenotype.
Conclusions
Relatively simple models provided accurate predictions of rifampin resistance in M. tuberculosis. These models demonstrated that only a few variable positions in the β subunit of DNA-dependent RNA
polymerase were responsible for most of the variation in rifampin resistance. Such models might provide the basis for quantifying rifampin resistance status based exclusively on DNA sequence data and
thus eliminate the requirements for time consuming culturing and antibiotic testing of clinical isolates. More generally, the results of this study demonstrate the usefulness of tree-based
statistical models and random forests in genetic analysis.
Data sources
Sequence data for amino acid positions 511–533 of rpoB and associated MIC of rifampin for different isolates of M. tuberculosis were taken from studies examining rifampin resistance in clinical
samples from New York City and throughout Japan [2-4]. Minimum inhibitory concentration (MIC) is defined as the minimum concentration of the antibiotic in a given culture medium below which bacterial
growth is not inhibited.
The predictor variables are unordered categorical designations of amino acids at polymorphic positions, and the response variable is the continuous MIC value represented by its log2 transform.
Values given in the original sources as <x were set to log2(x - 0.5), and those given as >x were set to log2(x + 0.5). The MIC values are converted back to μg/ml in figures to be
consistent with Table 1 and to facilitate interpretation.
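A small R sketch of that transformation; the string handling here is an assumption, since the paper does not show its data preparation:

    ## Map MIC strings such as "2", "<1", ">32" to log2 values as described.
    mic2log2 <- function(s) {
      x <- as.numeric(gsub("[<>]", "", s))
      ifelse(grepl("^<", s), log2(x - 0.5),
             ifelse(grepl("^>", s), log2(x + 0.5), log2(x)))
    }
    mic2log2(c("2", "<1", ">32"))   # 1, -1, ~5.02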
Tree-based statistical analyses
Analysis of the relationships between rpoB amino acid sequence and rifampin susceptibility was done through the use of tree-based statistical models [12], also known as classification and regression
trees (CART) [13]. Analyses were done with rpart (recursive partitioning) [14] using the rpart library [15] for the open source statistical package R [16]. Tree-based models operate by recursively
partitioning a data set in two (binary split) based on the value of a single predictor variable to best achieve homogeneous collections of a nominal or ordinal response variable (classification) or
to best separate low and high values of a continuous response variable (regression). The split definition can be considered as a question, which has the following general form: Is the observation
x_i ∈ A? Here A is a region of the variable space. Thus answering the question for all observations produces two groups of observations: those for which the answer is yes (those in region A) and those
for which the answer is no (x_i ∉ A, those in the complement of A). The specific criterion for choosing among the possible partitions (questions) is based on the change in deviance, which for regression
problems is equivalent to least squares. Subsequent binary partitioning continues until stopping criteria (variously defined) are met. The result is a classification or regression tree: a
hierarchical series of data bifurcations, which depicts the partition definitions and describes the resulting data subsets defined by each partition.
For unordered categorical covariates, such as amino acid designation, the search through possible splits is exhaustive. For each variable amino acid position there are 2^(n-1) - 1 possible partitions,
where n is the number of different amino acids observed. For example, in the case of amino acid position 526 of rpoB analyzed here there are 9 observed amino acids resulting in 255 possible
partitions to be evaluated. The preferred way to construct an appropriately sized tree is to first build a large tree and subsequently prune it [12,13]. Pruning is the process of removing branches
from a tree to produce a subtree. To objectively choose the appropriate size for a pruned tree, it is useful to employ the concept of cost-complexity [13]. Embodied within the cost-complexity measure
is a reward for tree (model) fit and a penalty for tree size (number of parameters). A tree can be pruned by using the cost-complexity measure to identify subtrees to be eliminated. A more formal
definition and discussion of cost-complexity is given elsewhere [13].
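For reference, the cost-complexity measure of [13] can be written as

$$R_\alpha(T) = R(T) + \alpha\,|\widetilde{T}|,$$

where $R(T)$ is the resubstitution error (deviance) of the tree $T$, $|\widetilde{T}|$ is its number of terminal nodes, and $\alpha \geq 0$ is the complexity parameter penalizing size; for each $\alpha$ the pruning procedure retains the smallest subtree minimizing $R_\alpha$.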
Performance of tree-based models can be assessed in a number of ways depending on the goals of the analysis. One way is to evaluate the fit of the data used to generate the model, which is known as
the re-substitution error. The use of re-substitution error may be justified when the principal goal of the analysis is to explain the observations in hand. However, the re-substitution error
provides an underestimate of the error if the goal is to produce a model for future prediction. Another scheme to assess performance is to partition observations into a subset for model building, the
training set, and a subset to evaluate the model, the test set. To remove biases this general scheme can be expanded in the form of cross-validation. Typically 10-fold cross-validation is used, where
the data are randomly divided into 10 equal or near equal portions. Nine of these portions are used to generate the model and the remaining portion, the test set, is used to evaluate the model. This
step is repeated until all test sets have been used in model evaluation.
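With rpart this cross-validation is built in, and the pruning level can be chosen from the fitted tree's cp table (a sketch; 'fit' is the tree object from the earlier sketch):

    ## Pick the complexity parameter with the smallest cross-validated error
    ## (rpart stores 10-fold CV results in the cp table by default).
    cp.tab  <- fit$cptable
    best.cp <- cp.tab[which.min(cp.tab[, "xerror"]), "CP"]
    pruned  <- prune(fit, cp = best.cp)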
We assessed the significance of our tree-based statistical models through permutation, where the predictor variables are randomized with respect to the response variable [17]. The frequency of
observing a result equal to or better than the observed value in 1 × 10^4 permutations is the estimate of the probability associated with the observed result.
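The authors used an extension of rpart for this [17]; a self-contained sketch of the same idea, using the objects defined above:

    ## Permutation test sketch: shuffle the response, refit, and compare the
    ## re-substitution accuracy with the observed value (B = 1e4 in the paper).
    obs <- mean(predict(fit, type = "class") == rif$status)
    B <- 10000
    perm <- replicate(B, {
      d <- rif
      d$status <- sample(d$status)
      f <- rpart(status ~ . - mic, data = d, method = "class")
      mean(predict(f, type = "class") == d$status)
    })
    p.value <- mean(perm >= obs)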
Random forest analyses
In a series of recent papers [18-21], Breiman has demonstrated that consequential gains in classification or prediction accuracy can be achieved by using an ensemble of trees, where each tree in the
ensemble is grown in accordance with the realization of a random vector. Final predictions are obtained by aggregating (voting) over the ensemble, typically using equal weights. Bagging [18]
represents an early example whereby each tree is constructed from a bootstrap [22] sample drawn with replacement from the training data. The simple mechanism whereby bagging reduces prediction error
for unstable predictors, such as trees, is well understood in terms of variance reduction resulting from averaging [18,23]. Such variance gains can be enhanced by reducing the correlation between the
quantities being averaged. It is this principle that motivates random forests.
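The variance argument can be made explicit (see [23]): for $B$ identically distributed tree predictors, each with variance $\sigma^2$ and pairwise correlation $\rho$, the variance of their average is

$$\mathrm{Var}\!\left(\frac{1}{B}\sum_{b=1}^{B} \hat{f}_b(x)\right) = \rho\,\sigma^2 + \frac{1-\rho}{B}\,\sigma^2,$$

so growing the ensemble eliminates only the second term; it is the correlation $\rho$ in the first term that limits the gain, and that term is what random forests attack.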
Random forests seek to effect such correlation reduction by a further injection of randomness. Instead of determining the optimal split of a given node of a (constituent) tree by evaluating all
allowable splits on all covariates, as is done with single tree methods or bagging, a subset of the covariates drawn at random is employed. Breiman [20,21] argues that random forests (a) enjoy
exceptional prediction accuracy, (b) attain this accuracy across a wide range of settings of the single tuning parameter employed, and (c) do not over-fit, owing to the independent generation of
ensemble members.
Here, our random forests comprised 1 × 10^4 individual trees constructed by sub-sampling eight predictor variables (regression) or two predictor variables (classification) at each node. Variable
importance was assessed by measuring the increase in group purity when partitioning data based on a variable. We used the R package randomForest [24].
Authors' contributions
MPC conceived of the study, and participated in its coordination. Both authors participated in study design, carried out the statistical analyses, and wrote and approved the final manuscript.
Acknowledgements
We thank AL Bazinet, DS Myers and MC Neel for assistance and comments on the manuscript.
References
1. Espinal MA, Laszlo A, Simonsen L, Boulahbal F, Kim SJ, Reniero A, Hoffner S, Rieder HL, Binkin N, Dye C, Williams R, Raviglione MC. Global trends in resistance to antituberculosis drugs. World Health Organization-International Union against Tuberculosis and Lung Disease Working Group on Anti-Tuberculosis Drug Resistance Surveillance. N Engl J Med. 2001;344:1294–1303. doi: 10.1056/NEJM200104263441706.
2. Moghazeh SL, Pan X, Arain T, Stover CK, Musser JM, Kreiswirth BN. Comparative antimycobacterial activities of rifampin, rifapentine, and KRM-1648 against a collection of rifampin-resistant Mycobacterium tuberculosis isolates with known rpoB mutations. Antimicrob Agents Chemother. 1996;40:2655–2657.
3. Taniguchi H, Aramaki H, Nikaido Y, Mizuguchi Y, Nakamura M, Koga T, Yoshida S. Rifampicin resistance and mutation of the rpoB gene in Mycobacterium tuberculosis. FEMS Microbiol Lett. 1996;144:103–108. doi: 10.1016/0378-1097(96)00346-1.
4. Ohno H, Koga H, Kuroita T, Tomono K, Ogawa K, Yanagihara K, Yamamoto Y, Miyamoto J, Tashiro T, Kohno S. Rapid prediction of rifampin susceptibility of Mycobacterium tuberculosis. Am J Respir Crit Care Med. 1997;155:2057–2063.
5. Williams DL, Spring L, Collins L, Miller LP, Heifets LB, Gangadharam PRJ, Gillis TP. Contribution of rpoB mutations to development of rifamycin cross-resistance in Mycobacterium tuberculosis. Antimicrob Agents Chemother. 1998;42:1853–1857.
6. Siddiqi N, Shamim M, Hussain S, Choudhary RK, Ahmed N, Prachee, Banerjee S, Savithri GR, Alam M, Pathak N, Amin A, Hanief M, Katoch VM, Sharma SK, Hasnain S. Molecular characterization of multidrug-resistant isolates of Mycobacterium tuberculosis from patients in North India. Antimicrob Agents Chemother. 2002;46:443–450. doi: 10.1128/AAC.46.2.443-450.2002.
7. Segal MR, Cummings MP, Hubbard AE. Relating genotype to phenotype: analysis of peptide binding data. Biometrics. 2001;57:632–643. doi: 10.1111/j.0006-341X.2001.00632.x.
8. Segal MR, Barbour JD, Grant RM. Relating HIV-1 sequence variation to replication capacity via trees and forests. Stat Appl Genet Mol Biol. 2004;3:Article 2.
9. Cummings MP, Myers DS. Simple statistical models predict C-to-U edited sites in plant mitochondrial RNA. BMC Bioinformatics. 2004;5:132. doi: 10.1186/1471-2105-5-132.
10. Bass JB, Farer LS, Hopewell PC, O'Brien R, Jacobs RF, Ruben F, Snider DE, Thornton G. Treatment of tuberculosis and tuberculosis infection in adults and children. Am J Respir Crit Care Med. 1994;149:1359–1374.
11. Heifets L. Qualitative and quantitative drug-susceptibility tests in mycobacteriology. Am Rev Respir Dis. 1988;137:1217–1222.
12. Clark LA, Pregibon D. Tree-based models. In: Chambers JM, Hastie TJ, editors. Statistical Models in S. London: Chapman and Hall; 1993. pp. 377–419.
13. Breiman L, Friedman JH, Olshen RA, Stone CJ. Classification and Regression Trees. Pacific Grove, CA: Wadsworth and Brooks; 1984.
14. Therneau TM, Atkinson EJ. An Introduction to Recursive Partitioning Using the RPART Routines. Technical Report, Mayo Foundation; 1997.
15. Therneau TM, Atkinson B, Ripley B. The rpart Package: Recursive Partitioning. 2003. http://cran.r-project.org/src/contrib/Descriptions/rpart.html
16. Ihaka R, Gentleman R. R: a language for data analysis and graphics. J Comput Graph Stat. 1996;5:299–314.
17. Cummings MP, Myers DS, Mangelson M. Applying Permutation Tests to Tree-Based Statistical Models: Extending the R Package rpart. Technical Report CS-TR-4581, UMIACS-TR-2004-24, Center for Bioinformatics and Computational Biology, Institute for Advanced Computer Studies, University of Maryland; 2004.
18. Breiman L. Bagging predictors. Mach Learn. 1996;24:123–140. doi: 10.1023/A:1018054314350.
19. Breiman L. Arching classifiers (with discussion). Ann Stat. 1998;26:801–849. doi: 10.1214/aos/1024691079.
20. Breiman L. Random forests. Mach Learn. 2001;45:5–32. doi: 10.1023/A:1010933404324.
21. Breiman L. Statistical modeling: the two cultures. Stat Sci. 2001;16:199–215. doi: 10.1214/ss/1009213726.
22. Efron B, Tibshirani R. An Introduction to the Bootstrap. New York: Chapman & Hall; 1993.
23. Hastie TJ, Tibshirani R, Friedman JH. The Elements of Statistical Learning. New York: Springer; 2001.
24. Breiman L, Cutler A, Liaw A, Wiener M. The randomForest Package. 2004. http://cran.r-project.org/src/contrib/Descriptions/randomForest.html
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC524371/?tool=pubmed","timestamp":"2014-04-20T00:41:49Z","content_type":null,"content_length":"87365","record_id":"<urn:uuid:6530eb5d-b5d9-4b82-a0fa-af628a171a7a>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00518-ip-10-147-4-33.ec2.internal.warc.gz"}
Stolper-Samuelson Theorem - Dictionary Definition of Stolper-Samuelson Theorem
The Stolper-Samuelson theorem is as follows: In some models of international trade, trade lowers the real wage of the scarce factor of production, and protection from trade raises it. That is a
Stolper-Samuelson effect, by analogy to their (1941) theorem in a Heckscher-Ohlin model context.
A notable case is when trade between a modernized economy and a developing one would lower the wages of the unskilled in the modernized economy, because the developing country has so many of the unskilled.
{"url":"http://economics.about.com/od/economicsglossary/g/stolper.htm","timestamp":"2014-04-16T13:59:11Z","content_type":null,"content_length":"38159","record_id":"<urn:uuid:ebfba66a-08be-42ce-94ed-13ca6441b711>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00293-ip-10-147-4-33.ec2.internal.warc.gz"}
Robust Statistics: Theory and Methods
ISBN: 978-0-470-01092-1
436 pages
May 2006
Classical statistical techniques fail to cope well with deviations from a standard distribution. Robust statistical methods take into account these deviations while estimating the parameters of
parametric models, thus increasing the accuracy of the inference. Research into robust methods is flourishing, with new methods being developed and different applications considered.
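A two-line illustration of that point in R (the book's own examples use the S-Plus robust library; plain R suffices for the idea):

    ## One gross outlier ruins the sample mean; robust summaries barely move.
    x <- c(rnorm(99, mean = 10, sd = 1), 1000)
    mean(x)              # dragged toward the outlier (~19.9)
    median(x)            # ~10
    mean(x, trim = 0.1)  # 10% trimmed mean (cf. Section 2.3), ~10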
Robust Statistics sets out to explain the use of robust methods and their theoretical justification. It provides an up-to-date overview of the theory and practical application of the robust
statistical methods in regression, multivariate analysis, generalized linear models and time series. This unique book:
• Enables the reader to select and use the most appropriate robust method for their particular statistical model.
• Features computational algorithms for the core methods.
• Covers regression methods for data mining applications.
• Includes examples with real data and applications using the S-Plus robust statistics library.
• Describes the theoretical and operational aspects of robust methods separately, so the reader can choose to focus on one or the other.
• Supported by a supplementary website featuring time-limited S-Plus download, along with datasets and S-Plus code to allow the reader to reproduce the examples given in the book.
Robust Statistics aims to stimulate the use of robust methods as a powerful tool to increase the reliability and accuracy of statistical modelling and data analysis. It is ideal for researchers,
practitioners and graduate students of statistics, electrical, chemical and biochemical engineering, and computer vision. There is also much to benefit researchers from other sciences, such as
biotechnology, who need to use robust statistical methods in their work.
1. Introduction.
1.1 Classical and robust approaches to statistics.
1.2 Mean and standard deviation.
1.3 The “three-sigma edit” rule.
1.4 Linear regression.
1.5 Correlation coefficients.
1.6 Other parametric models.
1.7 Problems.
2. Location and Scale.
2.1 The location model.
2.2 M-estimates of location.
2.3 Trimmed means.
2.4 Dispersion estimates.
2.5 M-estimates of scale.
2.6 M-estimates of location with unknown dispersion.
2.7 Numerical computation of M-estimates.
2.8 Robust confidence intervals and tests.
2.9 Appendix: proofs and complements.
2.10 Problems.
3. Measuring Robustness.
3.1 The influence function.
3.2 The breakdown point.
3.3 Maximum asymptotic bias.
3.4 Balancing robustness and efficiency.
3.5 *“Optimal” robustness.
3.6 Multidimensional parameters.
3.7 *Estimates as functionals.
3.8 Appendix: proofs of results.
3.9 Problems.
4 Linear Regression 1.
4.1 Introduction.
4.2 Review of the LS method.
4.3 Classical methods for outlier detection.
4.4 Regression M-estimates.
4.5 Numerical computation of monotone M-estimates.
4.6 Breakdown point of monotone regression estimates.
4.7 Robust tests for linear hypothesis.
4.8 *Regression quantiles.
4.9 Appendix: proofs and complements.
4.10 Problems.
5 Linear Regression 2.
5.1 Introduction.
5.2 The linear model with random predictors.
5.3 M-estimates with a bounded ρ-function.
5.4 Properties of M-estimates with a bounded ρ-function.
5.5 MM-estimates.
5.6 Estimates based on a robust residual scale.
5.7 Numerical computation of estimates based on robust scales.
5.8 Robust confidence intervals and tests for M-estimates.
5.9 Balancing robustness and efficiency.
5.10 The exact fit property.
5.11 Generalized M-estimates.
5.12 Selection of variables.
5.13 Heteroskedastic errors.
5.14 *Other estimates.
5.15 Models with numeric and categorical predictors.
5.16 *Appendix: proofs and complements.
5.17 Problems.
6. Multivariate Analysis.
6.1 Introduction.
6.2 Breakdown and efficiency of multivariate estimates.
6.3 M-estimates.
6.4 Estimates based on a robust scale.
6.5 The Stahel–Donoho estimate.
6.6 Asymptotic bias.
6.7 Numerical computation of multivariate estimates.
6.8 Comparing estimates.
6.9 Faster robust dispersion matrix estimates.
6.10 Robust principal components.
6.11 *Other estimates of location and dispersion.
6.12 Appendix: proofs and complements.
6.13 Problems.
7. Generalized Linear Models.
7.1 Logistic regression.
7.2 Robust estimates for the logistic model.
7.3 Generalized linear models.
7.4 Problems.
8. Time Series.
8.1 Time series outliers and their impact.
8.2 Classical estimates for AR models.
8.3 Classical estimates for ARMA models.
8.4 M-estimates of ARMA models.
8.5 Generalized M-estimates.
8.6 Robust AR estimation using robust filters.
8.7 Robust model identification.
8.8 Robust ARMA model estimation using robust filters.
8.9 ARIMA and SARIMA models.
8.10 Detecting time series outliers and level shifts.
8.11 Robustness measures for time series.
8.12 Other approaches for ARMA models.
8.13 High-efficiency robust location estimates.
8.14 Robust spectral density estimation.
8.15 Appendix A: heuristic derivation of the asymptotic distribution of M-estimates for ARMA models.
8.16 Appendix B: robust filter covariance recursions.
8.17 Appendix C: ARMA model state-space representation.
8.18 Problems.
9. Numerical Algorithms.
9.1 Regression M-estimates.
9.2 Regression S-estimates.
9.3 The LTS-estimate.
9.4 Scale M-estimates.
9.5 Multivariate M-estimates.
9.6 Multivariate S-estimates.
10. Asymptotic Theory of M-estimates.
10.1 Existence and uniqueness of solutions.
10.2 Consistency.
10.3 Asymptotic normality.
10.4 Convergence of the SC to the IF.
10.5 M-estimates of several parameters.
10.6 Location M-estimates with preliminary scale.
10.7 Trimmed means.
10.8 Optimality of the MLE.
10.9 Regression M-estimates.
10.10 Nonexistence of moments of the sample median.
10.11 Problems.
11. Robust Methods in S-Plus.
11.1 Location M-estimates: function Mestimate.
11.2 Robust regression.
11.3 Robust dispersion matrices.
11.4 Principal components.
11.5 Generalized linear models.
11.6 Time series.
11.7 Public-domain software for robust methods.
12. Description of Data Sets.
Ricardo Maronna
is a Professor in the Department of Mathematics, Faculty of Exact Sciences, National University of La Plata, Argentina, and researcher at C.I.C.P.B.A. He is the author of numerous research articles
on robust statistics, especially in the areas of regression and multivariate analysis.
Doug Martin is a Professor in the Department of Statistics, and Director of the Computational Finance Program at the University of Washington in Seattle, Washington. He was a consultant at Bell
Laboratories for many years, and author of numerous research articles on robust methods for time series. Martin founded the original S-PLUS company Statistical Sciences, Inc., and led the development
of the S-PLUS Robust Statistics Library.
Victor Yohai, is a Professor in the Department of Mathematics, Faculty of Exact and Natural Sciences, University of Buenos Aires, Argentina, and researcher at CONICET. He is the author of a large
number of important research articles on robust statistics, in particular on regression and time series. Several of the procedures proposed by him have been implemented in the robust library of
S-PLUS.
"This book belongs on the desk of every statistician working in robust statistics, and the authors are to be congratulated for providing the profession with a much-needed and valuable resource for
teaching and research." (
Journal of the American Statistical Association
, June 2008)
"…an original and valuable contribution…a source of inspiration for all those pursuing research in robust statistics." (Mathematical Reviews, 2007i)
"…a great book for graduate students as well as for applied scientists and data analysts." (MAA Reviews, February 14, 2007)
See More
"This book belongs on the desk of every statistician working in robust statistics, and the authors are to be congratulated for providing the profession with a much-needed and valuable resource for
teaching and research." (Journal of the American Statistical Association, June 2008)
See More
{"url":"http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470010924.html","timestamp":"2014-04-18T07:34:04Z","content_type":null,"content_length":"57866","record_id":"<urn:uuid:cf2f41b9-1d1e-4eef-92eb-2ff447464e0b>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
[TxMt] Latex Equation Highlighting
William Yang weijiay at gmail.com
Thu Oct 26 02:35:04 UTC 2006
A little quirk with the Latex equation highlighting: if you use an
equation environment like \begin{equation} or \begin{multline},
everything inside of the \begin \end is highlighted as an equation, even
\label{} definitions. Shouldn't things like \label not be colored like
the equation? What do you all think?
{"url":"http://lists.macromates.com/textmate/2006-October/014152.html","timestamp":"2014-04-20T06:56:16Z","content_type":null,"content_length":"2987","record_id":"<urn:uuid:1ee593b0-26a9-4db0-9f8a-d317716d16c5>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
West Newton, MA Algebra 2 Tutor
Find a West Newton, MA Algebra 2 Tutor
...I have taught various levels of mathematics--through calculus--as standalone courses and as part of the content of physics courses, so I have a range of pedagogical strategies in these areas.
I am very much aware of the challenges many students face in their early encounters with these subjects,...
7 Subjects: including algebra 2, calculus, physics, algebra 1
...When tutoring science, I usually emphasize imagination and visualization. Thinking about and "seeing" what molecules are doing never hurts and often helps. When tutoring math, I usually
explore different problem solving techniques with the student until we find the ones that work best for him or her.
23 Subjects: including algebra 2, chemistry, physics, calculus
...I attended the Mass Academy of Math and Science, a high school for gifted students. After years of being a star violin student, my violin teacher asked me to be her teacher's aide at the
children's group classes. As she grew older and wanted to spend more time with her family, she sent some of her beginner students to me.
13 Subjects: including algebra 2, English, geometry, writing
...I believe that my approach to home-schooling, and thus to tutoring is unique. I am very good at teaching so that the student “sees” the solution. I am excellent at figuring out what the
student is not understanding, so that I can get to the heart of the problem and show the student what the concept is, in a way the student understands.
25 Subjects: including algebra 2, reading, English, ESL/ESOL
I obtained my BS and PhD in Biomedical Engineering, focusing on applying mathematical and computational tools to solve biomedical problems. MATLAB is my main computer language. I have been
tutoring undergraduate and graduate students in research labs on MATLAB programming.
16 Subjects: including algebra 2, calculus, geometry, algebra 1
{"url":"http://www.purplemath.com/west_newton_ma_algebra_2_tutors.php","timestamp":"2014-04-16T16:09:42Z","content_type":null,"content_length":"24373","record_id":"<urn:uuid:089af3f4-377d-4827-94b6-e339625349ad>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00570-ip-10-147-4-33.ec2.internal.warc.gz"}
Fast Surface Interpolation using Multiresolution Wavelet Transform
July 1994 (vol. 16 no. 7)
pp. 673-688
Discrete formulation of the surface interpolation problem usually leads to a large sparse linear equation system. Due to the poor convergence condition of the equation system, the convergence rate of
solving this problem with an iterative method is very slow. To improve this condition, a multiresolution basis transfer scheme based on the wavelet transform is proposed. By applying the wavelet
transform, the original interpolation basis is transformed into two sets of bases with larger supports while the admissible solution space remains unchanged. With this basis transfer, a new set of
nodal variables results and an equivalent equation system with a better convergence condition can be solved. The basis transfer can be easily implemented by using a QMF matrix pair associated with the
chosen interpolation basis. The consequence of the basis transfer scheme can be regarded as a preconditioner to the subsequent iterative computation method. The effect of the transfer is that the
interpolated surface is decomposed into its low-frequency and high-frequency portions in the frequency domain. It has been indicated that the convergence rate of the interpolated surface is dominated
by the low-frequency portion. With this frequency domain decomposition, the low-frequency portion of the interpolated surface can be emphasized. As compared with other acceleration methods, this
basis transfer scheme provides a more systematical approach for fast surface interpolation. The easy implementation and high flexibility of the proposed algorithm also make it applicable to various
regularization problems.
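In generic terms (the notation below is ours, not the paper's): if discretization yields the sparse system $K u = b$ and $W$ denotes the synthesis operator assembled from the QMF matrix pair, then writing $u = Wc$ gives the equivalent system

$$\left(W^{\mathsf{T}} K W\right) c = W^{\mathsf{T}} b,$$

whose smaller spectral condition number is what speeds up the iterative solver; in this sense the basis transfer acts as a preconditioner.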
Index Terms:
interpolation; convergence of numerical methods; wavelet transforms; signal processing; surface interpolation; multiresolution wavelet transform; large sparse linear equation system; convergence;
multiresolution basis transfer scheme; interpolation basis; QMF matrix pair; preconditioner; frequency domain
M.H. Yaou, W.T. Chang, "Fast Surface Interpolation using Multiresolution Wavelet Transform," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 7, pp. 673-688, July 1994.
{"url":"http://www.computer.org/csdl/trans/tp/1994/07/i0673-abs.html","timestamp":"2014-04-18T00:31:46Z","content_type":null,"content_length":"60314","record_id":"<urn:uuid:8cfecc97-5d93-4b03-85be-b9862937baa9>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00152-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://openstudy.com/users/sarahann117/asked","timestamp":"2014-04-18T03:20:32Z","content_type":null,"content_length":"110722","record_id":"<urn:uuid:f0ff0e59-885a-4e5a-ab1e-9fc170b75a0e>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00477-ip-10-147-4-33.ec2.internal.warc.gz"}
Oakwood, CA Calculus Tutor
Find an Oakwood, CA Calculus Tutor
...I enjoy learning and spreading knowledge and strive to help students not only learn the subjects they are studying but actually become passionate and motivated about them. I am a graduate of
both the Massachusetts Institute of Technology and University of Southern California. My passion for lea...
42 Subjects: including calculus, reading, Spanish, chemistry
...At Oberlin my BA is *in* History/Theory so I took many many music theory classes. I took Music Theory I-IV (basic theory through avant-garde) and am qualified to teach younger students through
Oberlin methods (emphasizing function and use of musical ideas rather than just defining them). In my ...
18 Subjects: including calculus, chemistry, algebra 2, SAT math
...I have many years of experience teaching Elementary Math, both to children and to college students who want to become teachers. I am confident that I can help you understand the concepts and
become a good problem solver. I use pictures, colors, and graphs to make the subject easy to understand and fun.
16 Subjects: including calculus, French, geometry, piano
...The more a student sees and works through a problem or concept, the more comfortable they become. Repetition is key! Once a concept becomes second nature, other layers can be added on to
deepen the knowledge in a subject.
14 Subjects: including calculus, physics, algebra 1, algebra 2
...I was a Bible Study leader at InterVarsity Christian Fellowship for one year, and was a member of that Fellowship for three years. I have been heavily involved in two churches, learning
doctrine from multiple sources and have been to conferences pertaining to the Christian doctrine. I easily pa...
26 Subjects: including calculus, reading, chemistry, French
{"url":"http://www.purplemath.com/oakwood_ca_calculus_tutors.php","timestamp":"2014-04-17T07:48:07Z","content_type":null,"content_length":"24052","record_id":"<urn:uuid:a5645ecb-69cc-41cf-a142-d1a9536f3614>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
Equality holding in Cauchy's Inequality
Hello, this was a problem on an exam:
From Cauchy's Integral Formula, we get Cauchy's Inequality $|f^{(n)}(z_0)| \leq \frac{n!M}{R^n}$.
Prove that equality holds if and only if $f(z)= \frac{aMz^n}{R^n}$ for some $a \in \mathbb{C}$ with $|a| = 1$.
Proving $(\Leftarrow)$ is easy: just take the modulus of the nth derivative of f.
How do we prove the other direction?
Re: Equality holding in Cauchy's Inequality
What are M and R?
Re: Equality holding in Cauchy's Inequality
Ah, from Cauchy's Integral Formula and Inequality.
f is analytic on a ball of radius R
|f(z)| < M for all z in the ball
Re: Equality holding in Cauchy's Inequality
So the statement is:
Let $\Omega = \{z \in \mathbb{C} \ | \ \lVert z - z_0 \rVert < R \}$. Let $f \in H(\bar{\Omega})$ satisfy $\lVert f(z) \rVert \le M \ \forall z \in \Omega$.
Then $\lVert f^{(n)}(z_0) \rVert = \frac{n!M}{R^n} \Leftrightarrow \exists a \in \mathbb{C}, \lVert a \rVert = 1$, such that $f(z) = \frac{aMz^n}{R^n} \ \forall z \in \Omega$.
Are you sure it isn't supposed to be $f(z) = \frac{aM(z-z_0)^n}{R^n}$ ?
Re: Equality holding in Cauchy's Inequality
What I wrote is what was on the exam, but I think that was justified by assuming that z_0 = 0.
And yes, what you have is correct, as far as I understood the problem
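Re: Equality holding in Cauchy's Inequality
A standard sketch of $(\Rightarrow)$, taking $z_0 = 0$ as in the thread and using Parseval's identity: write $f(z) = \sum_{k \geq 0} a_k z^k$ on the disk. For $0 < r < R$,
$$\sum_{k=0}^{\infty} |a_k|^2 r^{2k} = \frac{1}{2\pi}\int_0^{2\pi} |f(re^{i\theta})|^2 \, d\theta \leq M^2,$$
and letting $r \to R$ gives $\sum_{k} |a_k|^2 R^{2k} \leq M^2$. Equality in Cauchy's Inequality means $|a_n| = |f^{(n)}(0)|/n! = M/R^n$, so the single term $|a_n|^2 R^{2n}$ already equals $M^2$, forcing $a_k = 0$ for every $k \neq n$. Hence $f(z) = a_n z^n = \frac{aMz^n}{R^n}$ with $a = a_n R^n/M$ and $|a| = 1$.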
{"url":"http://mathhelpforum.com/differential-geometry/203812-equality-holding-cauchy-s-inequality.html","timestamp":"2014-04-17T02:49:48Z","content_type":null,"content_length":"41974","record_id":"<urn:uuid:50c2c4c8-04a0-4756-845a-0038ea73758e>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00116-ip-10-147-4-33.ec2.internal.warc.gz"}
Expander graphs, gonality, and variation of Galois representations
New paper on the arXiv with Chris Hall and Emmanuel Kowalski.
Suppose you have a 1-dimensional family of polarized abelian varieties — or, just to make things concrete, an abelian variety A over Q(t) with no isotrivial factor.
You might have some intuition that abelian varieties over Q don’t usually have rational p-torsion points — to make this precise you might ask that A_t[p](Q) be empty for “most” t.
In fact, we prove (among other results of a similar flavor) the following strong version of this statement. Let d be an integer, K a number field, and A/K(t) an abelian variety. Then there is a
constant p(A,d) such that, for each prime p > p(A,d), there are only finitely many t such that A_t[p] has a point over a degree-d extension of K.
The idea is to study the geometry of the curve U_p parametrizing pairs (t,S) where S is a p-torsion point of A_t. This curve is a finite cover of the projective line; if you can show it has genus
bigger than 1, then you know U_p has only finitely many K-rational points, by Faltings’ theorem.
But we want more — we want to know that U_p has only finitely many points over degree-d extensions of K. This can fail even for high-genus curves: for instance, the curve
C: y^2 = x^100000 + x + 1
has really massive genus, but choosing any rational value of x yields a point on C defined over a quadratic extension of Q. The problem is that C is hyperelliptic — it has a degree-2 map to the
projective line. More generally, if U_p has a degree-d map to P^1, then U_p has lots of points over degree-d extensions of K. In fact, Faltings’ theorem can be leveraged to show that a kind of
converse is true.
So the relevant task is to show that U_p admits no map to P^1 of degree less than d; in other words, its gonality is at least d.
Now how do you show a curve has large gonality? Unlike genus, gonality isn’t a topological invariant; somehow you really have to use the geometry of the curve. The technique that works here is one
we learned from an paper of Abramovich; via a theorem of Li and Yau, you can show that the gonality of U_p is big if you can show that the Laplacian operator on the Riemann surface U_p(C) has a
spectral gap. (Abramovich uses this technique to prove the g=1 version of our theorem: the gonality of classical modular curves increases with the level.)
We get a grip on this Laplacian by approximating it with something discrete. Namely: if U is the open subvariety of P^1 over which A has good reduction, then U_p(C) is an unramified cover of U(C),
and can be identified with a finite-index subgroup H_p of the fundamental group G = pi_1(U(C)), which is just a free group on finitely many generators g_1, … g_n. From this data you can cook up a
Cayley-Schreier graph, whose vertices are cosets of H_p in G, and whose edges connect g H with g_i g H. Thanks to work of Burger, we know that this graph is a good “combinatorial model” of U_p(C);
in particular, the Laplacian of U_p(C) has a spectral gap if and only if the adjacency matrix of this Cayley-Schreier graph does.
At this point, we have reduced to a spectral problem having to do with special subgroups of free groups. And if it were 2009, we would be completely stuck. But it’s 2010! And we have at hand a
whole spray of brand-new results thanks to Helfgott, Gill, Pyber, Szabo, Breuillard, Green, Tao, and others, which guarantee precisely that Cayley-Schreier graphs of this kind, (corresponding to
finite covers of U(C) whose Galois closure has Galois group a perfect linear group over a finite field) have spectral gap; that is, they are expander graphs. (Actually, a slightly weaker condition
than spectral gap, which we call esperantism, is all we need.)
Sometimes you think about a problem at just the right time. We would never have guessed that the burst of progress in sum-product estimates in linear groups would make this the right time to think
about Galois representations in 1-dimensional families of abelian varieties, but so it turned out to be. Our good luck.
5 thoughts on “Expander graphs, gonality, and variation of Galois representations”
1. My version of the story is there…
2. Bonege! Gratulojn!
Chu iu ajn el viaj legantoj scias chu la implikacio (polilogaritma diametro -> esperantismo) validas ne nur por la grafoj de Cayley, sed ankau por la grafoj de Cayley-Schreier?
(La implikacio alidirekta funkcias por chiu grafo de grado barita.)
3. PS. I think it might be good for certain abilities of yours in possible disuse if you took upon yourself to translate the above for the nationally limited.
4. “…you might ask that A_t[p](Q) be empty for ‘most’ t.” Empty?? I’m pretty sure that’s too much to ask.
Nitpicking aside, this is a really cool paper — I wish I had been involved with it somehow. (Not that I almost was or that there was any reasonable way I would have been — hence “I wish.”) I plan
on inviting Chris Hall to UGA to talk about it in the near future. Or perhaps I just did?
5. I have just posted some remarks by P. Sarnak on this paper here (with his permission).
Tagged abelian varieties, algebraic geometry, esperanto, expander graphs, expanders, Galois representations, gonality, graph theory, hall, kowalski, number theory, spectral gap, torsion | {"url":"http://quomodocumque.wordpress.com/2010/08/31/expander-graphs-gonality-and-variation-of-galois-representations/","timestamp":"2014-04-20T19:24:39Z","content_type":null,"content_length":"67718","record_id":"<urn:uuid:8c2a19e8-92bf-4493-8ba7-b61fcdc3deba>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00159-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Return of Brandon Morrow, Strikeout Artist
Throughout his six-year MLB career, Brandon Morrow has tantalized us with his potential. He was capable of elite strikeout rates, but his spotty control and penchant for the long ball held him
back. Last year Morrow turned in his best single-season pitching performance to date, and it’s a shame that he got hurt after just 21 starts and 124.2 innings. He was on pace for a 196-inning, 15-win
season for the Blue Jays.
Not surprisingly, improved control and an increased ground ball rate (leading to fewer homers) were the primary factors fueling his breakout, but like most pitchers who experience drops in walk and
fly ball rates, Morrow’s strikeout rate fell as well. PitchFX data provided by BrooksBaseball.net indicates a change in approach in 2012 led to Morrow’s new outcomes.
Morrow’s pitch selection, 2011 vs. 2012
In 2011, Morrow threw his four-seam fastball 58% of the time, his slider 24%, his change-up 4% and any other pitch 14%. His average four-seamer clocked in at 94.6 mph. In 2012, Morrow threw his
four-seam fastball 60% of the time, his slider 21%, his change-up 11% and any other pitch just 8% with his average four-seamer registering at 93.8 mph.
That might not seem like much of a difference, but it leads me to two conclusions about Morrow’s 2012 season:
1. He made a conscious attempt to specialize in three pitches versus varying upwards of six offerings
2. He stopped rearing back for extra velocity, likely improving his control in the process
A look at exactly when he threw each pitch bears out his transformation much more clearly.
A look at the tan-shaded lines shows how Morrow approached both left-handed and right-handed batters in 2011 and 2012. Notice how in 2011 he threw change-ups just 6% of the time versus left-handed
batters, but in 2012 he tripled that rate to 19% of the time. Against right-handed batters his approach remained very constant: fastballs and sliders. The notable change is the complete eradication
of the sinker, which yielded a proportional rise in four-seamer usage.
Explaining the decrease in walk and fly ball rates from 2011 to 2012
One thing you might notice in the tables above is the rearrangement of the red and blue shading from 2011 to 2012, which corresponds to pitches Morrow threw 10% more or 10% less than his baseline
usage, respectively. In 2012 you see the appearance of the blue shading in the four-seam category, indicating lower than normal usage, for when Morrow was ahead of left-handed batters and when he had
two strikes on both righties and lefties.
Against left-handed batters, Morrow traded in those four-seamers for change-ups. Prior to 2012, Morrow had always struggled with locating his change-up — he threw it for a ball at least 43% of the
time in each of his first five seasons — but last year he missed the strike zone just 35.1% of the time with his change-up. Combined with his increased reliance on his change, that explains a great
deal of his improved walk rate. The rest of it can be explained by the incremental annual rise of his fastball control. From 2008-2011, Morrow saw his fastball-for-a-ball rate drop three percentage
points from 38.1% to 35.1% (lower is better) but last year alone it dropped over two percentage points to 32.9%.
As for the increased ground ball rate, it also doesn’t hurt that Morrow’s career 2.61 grounder-to-fly ratio off his change-up is the highest of any of his several offerings, so more ground balls was
a natural byproduct of his change in approach. Plus, from 2009-2012, Morrow increased the grounder-to-fly ratio of his slider annually from 1.40 in 2009 to 2.06 last year.
Explaining the decrease in strikeout rate from 2011 to 2012
Morrow’s strikeout struggles last year were due entirely to his difficulty striking out left-handed batters. While his strikeout rate (measured in terms of K%, or strikeouts per plate appearance)
against right-handed batters remained stable from 2011 to 2012 (20.9% in 2011, 20.7% in 2012), his K% against left-handed batters dropped noticeably (28.6% to 23.5%). This decrease resulted in an
overall drop in K% from 26.1%, the second-highest rate of his career, to 21.4%, the second-lowest.
What could have caused this decrease? For whatever reason (velocity, movement, etc.) Morrow endured a drop in swings-and-misses generated by his four-seam fastball last year. In 2010 and 2011,
Morrow’s two elite strikeout seasons, he generated whiffs on about 10% of his four-seamers. Last year that rate fell to just 5.9%
This didn’t have much of an effect on his strikeout rate against right-handed batters. Morrow turned to his always-effective slider in two-strike counts against righties with greater frequency in
2012 than 2011 (48% versus 40%), which helped offset the drop in four-seamer whiff rate. In two-strike counts against left-handed batters, he used his slider as much in 2012 as 2011 (41% in 2012, 39%
in 2011) but also featured his change-up with increased frequency. In terms of generating swings and misses, his change-up was less than half as effective as his slider (10.8% whiff rate for
change-up, 23.2% for slider).
To sum all of that up, overall from 2011 to 2012 Morrow saw his fastball velocity decrease. With it his strikeout effectiveness with the fastball plummeted. He was still able to maintain the same
strikeout rate versus right-handed batters due to a noticeable rise in his slider usage, his premier swing-and-miss pitch, in two-strike counts to righties. He was unable to maintain the same
strikeout rate against left-handed batters because the pitch he used 47% of the time in two-strike counts to lefties (his four-seamer) was half as effective at generating swings and misses as it was
the year before, and instead of increasing his slider usage to generate strikeouts he turned to his much less effective change-up (much less effective in terms of generating whiffs).
Looking ahead to 2013
If we could construct the perfect season for Morrow in 2013, it would look something like this:
• Fastball velocity and whiff rate return to or surpass 2010-2011 levels
• Increased change-up usage from 2012 persists, which would keep his ground ball rate up
• Control of his four-seamer and change-up remains improved from 2010-2011 levels, which would keep his walk rate down
It’s only been one start and 100 pitches, but it appears all of these things have happened.
Morrow’s velocity has risen drastically. In his first start this year his fastball averaged a robust 95.4 mph. Recall that last year Morrow’s average fastball was just 93.8 mph, and in his first
start of 2012 it averaged just 91.9 mph. While he induced swings and misses on just 5.9% of his four-seamers last year, he boasted a gaudy 15.7% whiff rate in his first start of 2013.
Morrow’s increased change-up usage has carried over from last season. In his start against the Indians on April 3, he tossed his change-up 15 times (for a 15% usage rate) and featured it prominently
to left-handed batters (17% of the time) just like he had the year before.
Morrow's control has remained improved. Of the 51 four-seamers he threw in his first start of 2013, only 33.3% were balls, right in line with his 32.9% rate from last season. That four-seam sample is still very small, and the sample of 15 change-ups is far too small to draw anything meaningful from, but for what it's worth he missed the strike zone on 46.7% of those 15 change-ups. For now, just remember that he walked only two batters in those six innings, which isn't too bad.
With his fastball dominance restored and his increased change-up usage from 2012 carrying over to 2013, Morrow could be poised for a return to his elite strikeout rates while maintaining his improved
control and ground ball rates. That could be a devastating combination for American League batters and makes Morrow a sleeper candidate to insert himself into the AL Cy Young discussion — if he can
stay healthy.
5.3 Floats
A floating-point number or float is a number stored in scientific notation. The number of significant digits in the fractional part is governed by the current floating precision (see Precision). The
range of acceptable values is from ‘10^-3999999’ (inclusive) to ‘10^4000000’ (exclusive), plus the corresponding negative values and zero.
Calculations that would exceed the allowable range of values (such as ‘exp(exp(20))’) are left in symbolic form by Calc. The messages “floating-point overflow” or “floating-point underflow” indicate
that during the calculation a number would have been produced that was too large or too close to zero, respectively, to be represented by Calc. This does not necessarily mean the final result would
have overflowed, just that an overflow occurred while computing the result. (In fact, it could report an underflow even though the final result would have overflowed!)
If a rational number and a float are mixed in a calculation, the result will in general be expressed as a float. Commands that require an integer value (such as k g [gcd]) will also accept
integer-valued floats, i.e., floating-point numbers with nothing after the decimal point.
Floats are identified by the presence of a decimal point and/or an exponent. In general a float consists of an optional sign, digits including an optional decimal point, and an optional exponent
consisting of an ‘e’, an optional sign, and up to seven exponent digits. For example, ‘23.5e-2’ is 23.5 times ten to the minus-second power, or 0.235.
Floating-point numbers are normally displayed in decimal notation with all significant figures shown. Exceedingly large or small numbers are displayed in scientific notation. Various other display
options are available. See Float Formats.
Floating-point numbers are stored in decimal, not binary. The result of each operation is rounded to the nearest value representable in the number of significant digits specified by the current
precision, rounding away from zero in the case of a tie. Thus (in the default display mode) what you see is exactly what you get. Some operations such as square roots and transcendental functions are
performed with several digits of extra precision and then rounded down, in an effort to make the final result accurate to the full requested precision. However, accuracy is not rigorously guaranteed.
If you suspect the validity of a result, try doing the same calculation in a higher precision. The Calculator's arithmetic is not intended to be IEEE-conformant in any way.
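To get a feel for this decimal model outside of Calc, the same round-half-away-from-zero behavior can be sketched with Python's decimal module (this illustrates only the rounding rule, not Calc itself; the 12-digit precision is an arbitrary example):

    from decimal import Decimal, getcontext, ROUND_HALF_UP

    getcontext().prec = 12                  # analogous to Calc's current precision
    getcontext().rounding = ROUND_HALF_UP   # ties round away from zero

    print(Decimal(1) / Decimal(3))                 # 0.333333333333
    print(Decimal("2.5").quantize(Decimal("1")))   # 3, since the tie rounds away from zero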
While floats are always stored in decimal, they can be entered and displayed in any radix just like integers and fractions. Since a float entered in a radix other than 10 will be converted to decimal, the number that Calc stores may not be exactly the number that was entered; it will be the closest decimal approximation given the current precision. The notation ‘radix#ddd.ddd’ is a
floating-point number whose digits are in the specified radix. Note that the ‘.’ is more aptly referred to as a “radix point” than as a decimal point in this case. The number ‘8#123.4567’ is defined
as ‘8#1234567 * 8^-4’. If the radix is 14 or less, you can use ‘e’ notation to write a non-decimal number in scientific notation. The exponent is written in decimal, and is considered to be a power
of the radix: ‘8#1234567e-4’. If the radix is 15 or above, the letter ‘e’ is a digit, so scientific notation must be written out, e.g., ‘16#123.4567*16^2’. The first two exercises of the Modes
Tutorial explore some of the properties of non-decimal floats. | {"url":"http://www.gnu.org/software/emacs/manual/html_node/calc/Floats.html","timestamp":"2014-04-18T17:02:09Z","content_type":null,"content_length":"6748","record_id":"<urn:uuid:bfb7ad82-736f-41b1-ae9f-5ed475df62d1>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00327-ip-10-147-4-33.ec2.internal.warc.gz"} |
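For example, the value of ‘8#123.4567’ can be checked by hand from the definition above (a quick sketch in Python, outside Calc):

    # 8#123.4567 is defined as 8#1234567 * 8^-4.
    value = int("1234567", 8) * 8.0**-4
    print(value)   # 83.591552734375, the decimal value of 8#123.4567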
[R] How do I call a masked function in a package without a namespace?
Gabor Grothendieck ggrothendieck at myway.com
Tue Mar 15 21:27:37 CET 2005
Dirk Koschuetzki <dkoschuetzki <at> gmx.de> writes:
: Hello,
: I work with two packages sna and graph from CRAN resp. Bioconductor. Both
: packages have a function called "degree". Therefore one of the functions
: is masked by the other and which one gets called depends on the order of
: loading. The problem is that both package do not have a namespace,
: therefore calling the masked function with "package::degree" does not
: work. See the following transcript:
: $ R --vanilla
: [[ Running on Debian Sarge ]]
: R : Copyright 2004, The R Foundation for Statistical Computing
: Version 2.0.1 (2004-11-15), ISBN 3-900051-07-0
: [...]
: > library("sna")
: > library("graph")
: Loading required package: cluster
: Loading required package: Ruuid
: Creating a new generic function for "print" in "Ruuid"
: Loading required package: Biobase
: Loading required package: tools
: Welcome to Bioconductor
: Vignettes contain introductory material. To view,
: simply type: openVignette()
: For details on reading vignettes, see
: the openVignette help page.
: > conflicts()
: [1] "last.warning" "degree" "body<-" "print" "split"
: [6] "union"
: > sna::degree()
: Error in loadNamespace(name) : package 'sna' does not have a name space
: > graph::degree()
: Error in loadNamespace(name) : package 'graph' does not have a name space
: > sna:::degree
: Error in loadNamespace(name) : package 'sna' does not have a name space
: > graph:::degree
: Error in loadNamespace(name) : package 'graph' does not have a name space
: Is there a way to call the masked function via a different way?
: And I wold like to create my own function degree which will of course
: masked both functions and should therefore be able to call both functions.
The following looks up the position of the graph package on the
search path, gets degree specifically from there, and invokes it
with the indicated arguments.
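# Find the search-path position of the attached graph package,
# fetch "degree" from exactly that position (get()'s second
# argument is a position), and call it with the arguments
# given to graph.degree().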
graph.degree <- function(...)
get("degree", grep("package:graph$", search()))(...)
Math Forum: Teacher2Teacher - Q&A #495
From: Gail (for Teacher2Teacher Service)
Date: Aug 19, 1998 at 21:20:35
Subject: Re: Rounding numbers to the nearest ten
It is important that this student first be able to skip count by tens and by
hundreds, since those are numbers he/she will be using when rounding. Then,
it is important that he/she knows what "consecutive" tens, or hundreds are.
You can have the student work with number lines, beginning at amounts greater
than zero, to practice skip counting.
After you have created number lines, ask the student to select two
consecutive amounts and put a dot where he/she thinks the midpoint is (the
spot directly in the middle between those two tens or hundreds). Help him/her
label that point. Do this with all the tens or hundreds on the number line.
Help the student see there is a "pattern" for all the middle points.
When your student is comfortable finding midpoints, ask him/her to choose a
number that has not been marked on the number line (for example, 63). Ask
the student to figure out where it belongs on the number line. (He/she
should place it on the line between 60 and 70, before the midpoint of 65.)
By using the line segment between 60 and 65 as a guide, the student can see
that the point for 63 is much closer to 60 than it is to 70... so the
number rounds off to 60.
If you are rounding to the nearest hundred, select a number like 178, and
look for the two consecutive hundreds it is between. The student should place
it between 100 and 200, past the midpoint of 150. Using the line segment
from 100 to 200 as a guide, the student can see that 178 is closer to 200
than it is to 100, so that amount rounds to 200.
If you find the number is the midpoint, tell the student they are exactly
halfway between the two consecutive tens or hundreds, so there really isn't
a "closer" one to go to... usually the way to round such a number is to go
"up" the number line to the higher of the two (sometimes, if you are doing
an estimate, this will cause your sum to be way off... but that is really a
consideration for students older than fourth grade...)
Repeat this many times, until the student can do the rounding mentally,
without having to resort to the number line.
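If it helps to check answers quickly, the whole procedure comes down to
one rule. Here is a small sketch in Python, using the round-the-midpoint-up
convention described above:

    def round_to_nearest(n, unit=10):
        """Round n to the nearest multiple of unit; exact midpoints go up."""
        lower = (n // unit) * unit      # the lower of the two consecutive units
        midpoint = lower + unit / 2
        return lower if n < midpoint else lower + unit

    print(round_to_nearest(63))         # 60
    print(round_to_nearest(178, 100))   # 200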
-Gail, for the Teacher2Teacher service
Post a public discussion message
Ask Teacher2Teacher a new question | {"url":"http://mathforum.org/t2t/message.taco?thread=495&message=3","timestamp":"2014-04-20T09:51:04Z","content_type":null,"content_length":"6442","record_id":"<urn:uuid:8654dcc2-cef3-4002-81f5-9011c570c14c>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00001-ip-10-147-4-33.ec2.internal.warc.gz"} |
Appendix A: For those concerned about momentum.
Conservation of Momentum and Conservation of Energy
Conservation of Momentum:
The amount of momentum (p) that an object has depends on two physical quantities: the mass and the velocity of the moving object.
p = mv
where p is the momentum, m is the mass, and v the velocity.
If momentum is conserved it can be used to calculate unknown velocities following a collision.
(m_1 v_1)_i + (m_2 v_2)_i = (m_1 v_1)_f + (m_2 v_2)_f
where the subscript i signifies initial, before the collision, and f signifies final, after the collision.
If (m_1)_i = 0 and (v_2)_i = 0, then (v_2)_f must = 0.
So, for conservation of momentum, there cannot be pulverization.
If we assume the second mass is initially at rest [(v_2)_i = 0], the equation reduces to
(m_1 v_1)_i = (m_1 v_1)_f + (m_2 v_2)_f
As you can see, if mass m_1 = m_2 and they "stick" together after impact, the equation reduces to
(m_1 v_1)_i = (2 m_1 v_new)_f
or v_new = (1/2) v_1
If two identical masses collide and stick together, they will travel at half the speed of the original single mass.
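A quick numeric check of that statement (a sketch in Python with arbitrary units, added for illustration):

    # Perfectly inelastic collision: m1 moving, an equal mass m2 at rest.
    m1 = m2 = 1.0
    v1_i, v2_i = 10.0, 0.0

    # Conservation of momentum: m1*v1_i + m2*v2_i = (m1 + m2)*v_new
    v_new = (m1 * v1_i + m2 * v2_i) / (m1 + m2)
    print(v_new)   # 5.0, half the original speed, as stated above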
Conservation of Energy:
In elastic collisions, the sum of kinetic energy before a collision must equal the sum of kinetic energy after the collision. Conservation of kinetic energy is given by the following formula:
(1/2)(m_1 v_1^2)_i + (1/2)(m_2 v_2^2)_i = (1/2)(m_1 v_1^2)_f + (1/2)(m_2 v_2^2)_f + (Pulverize) + (Fail Floor Supports)
where (Pulverize) is the energy required to pulverize a floor and (Fail Floor Supports) is the energy required to fail the next floor.
If (1/2)(m_1 v_1^2)_i + (1/2)(m_2 v_2^2)_i = (Pulverize) + (Fail Floor Supports), there will be no momentum transfer.
In reality, (1/2)(m_1 v_1^2)_i + (1/2)(m_2 v_2^2)_i < (Pulverize) + (Fail Floor Supports).
So, for conservation of energy, we must assume there is some additional energy such that
(1/2)(m_1 v_1^2)_i + (1/2)(m_2 v_2^2)_i + (Additional Energy) = (Pulverize) + (Fail Floor Supports),
where (Additional Energy) is the additional amount of energy needed to have the outcome we observed on 9/11/01. | {"url":"http://www.drjudywood.co.uk/articles/BBE/BilliardBalls.html","timestamp":"2014-04-21T02:50:41Z","content_type":null,"content_length":"91513","record_id":"<urn:uuid:02913433-e5d3-483b-884a-436cfb5f6b68>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00048-ip-10-147-4-33.ec2.internal.warc.gz"} |
Graph translational grammars for instruction selection and SSA graphs
ihusar <ihusar@fit.vutbr.cz>
Thu, 10 Sep 2009 12:18:16 +0200
From comp.compilers
From: ihusar <ihusar@fit.vutbr.cz>
Newsgroups: comp.compilers
Date: Thu, 10 Sep 2009 12:18:16 +0200
Organization: FIT VUT
Keywords: analysis, question
Posted-Date: 13 Sep 2009 20:06:09 EDT
Hello guys and ladies,
In my work, I am trying to generate a compiler backend from an
instruction set description in an architecture design language called
ISAC (project pages http://www.fit.vutbr.cz/research/groups/lissom/).
For this, I was trying to find a suitable model that describes the
instruction set and from which I can generate LLVM backend passes.
One possibly suitable model I found is based on context-sensitive
translational graph grammars, where the input grammar generates the
compiler's IR control-dataflow graph (nodes are IR instructions) and
the output grammar generates a control-dataflow graph whose nodes are
target architecture instructions. My problem here is that I am not
able to find any suitable formalism: most work on graph grammars
concerns context-free graph grammars, and translational graph
grammars seem not to be defined at all.
An attempt to describe a rule that represents addition with carry
generation is shown here:
www.stud.fit.vutbr.cz/~xhusar01/graph-graph-add.pdf. This figure shows
a context-sensitive graph translation rule where R and C are left-hand
side nonterminals. We can define the graph translational grammar GIS
as a 6-tuple (N, I, O, E, P, S), where the nonterminal set N = Q union
S, Q is the set of all processor register classes, I is the input
alphabet consisting of the compiler's IR instructions and immediate
operands, O is the output alphabet consisting of target architecture
instructions and immediate operands, E = {i1, ..., i32, ...} is a set
of available data types used to annotate the edges present in rules,
and P is a set of production rules.
My question is: have you read or heard about such an attempt to
describe instructions with graph translational grammars, or with
something similar? Also, if the rule example is at all understandable,
do you know of some suitable context-sensitive graph translational
grammar formalism? (It would probably be based on a gluing/algebraic
approach.)
My second question is related to SSA graphs. In my opinion, it would
be nice, or at least interesting, to have a control-dataflow graph
really defined as a graph (no notion of basic blocks!); there would be
just immediates as leaves, and inner nodes would be instructions.
There would be two types of edges: one for data dependencies and one
for control flow. For example, an SSA phi instruction would have one
or more incoming control-flow edges, and from them it could decide
which incoming dataflow edge to use as its result. An example is shown
here: www.stud.fit.vutbr.cz/~xhusar01/SSA.pdf. Again, I would not like
to reinvent the wheel, so if you know about such approaches, please
let me know.
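(For concreteness, here is a minimal sketch, in Python, of the node
and edge shape I have in mind; the class and field names are invented
just for illustration, not an existing API:)

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        """One instruction (inner node) or immediate (leaf) of the graph."""
        op: str                                             # e.g. "add", "phi", "imm"
        data_inputs: list = field(default_factory=list)     # dataflow edges in
        control_inputs: list = field(default_factory=list)  # control-flow edges in

    # A phi node: two incoming control-flow edges select between two data inputs.
    a, b = Node("imm"), Node("imm")
    t, f = Node("br_true"), Node("br_false")
    phi = Node("phi", data_inputs=[a, b], control_inputs=[t, f])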
Thank you
Adam Husar
Question #19
Posted: May 24, 1997 11:01 AM
Joshua Zucker and David Klein have exchanged the following thoughts
about one of the sample multiple choice questions provided by the
College Board in a book in which the new syllabi are presented:
> Another new question, #19, in my opinion asks for lack of
> understanding. They give a table of values of f(1.7), f(1.8), f(1.9),
> f(2.0). They tell you f is differentiable on [0,3] and ask for the
> best approximation for f'(1.7). The correct answer SHOULD be that we
> have no idea, because we don't know whether f is oscillating rapidly
> between the given points. We need more information about f! But of
> course, the AP people just want us to plug in delta-f over delta-x
> without thinking about these issues.
Another excellent point. I think we see here the negative influence of
Harvard Calculus which has questions of this type. I agree with you that
this question asks for lack of understanding.
> And not only that, but the wrong answers they give are foolish! They
> do give the answer choice that would come from just taking delta-f
> without dividing by delta-x. But they do NOT have the wrong answer
> that would come from using (f(2.0) - f(1.7)) / .3 instead of
> (f(1.8) - f(1.7)) / .1 ... that is, they don't test the one idea that
> they should be, which is that the best approximation to the derivative
> comes from using the point that's CLOSEST. They do have the answer
> that comes from using 1.9, though.
It's hard to say that (f(2.0) - f(1.7)) / .3 is necessarily wrong (as you
point out above).
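(To make the point concrete, here is a quick sketch with made-up table
values, since the actual table from the question is not reproduced in
this exchange:)

    # Hypothetical values of f at the tabled points.
    f = {1.7: 2.30, 1.8: 2.38, 1.9: 2.44, 2.0: 2.48}

    nearest = (f[1.8] - f[1.7]) / 0.1   # difference quotient using the closest point
    wider   = (f[2.0] - f[1.7]) / 0.3   # difference quotient over the whole interval

    print(nearest, wider)   # roughly 0.80 versus 0.60; both are defensible
                            # without more information about f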
They are of course correct about the technical flaws of the problem.
None the less, I have a great deal of enthusiasm for what I think is the
underlying point of the problem, so I have given some thought as to how
it might be amended.
As stated, the table of values implies that the derivative of the
function is not monotonic in the interval from 1.7 to 2.0. Suppose this
were changed so that the values were consistent with strict monotonicity
of the derivative and the additional condition were given that the
second derivative is defined and has no sign changes over the interval?
Would this then make the computation of an approximation of the
derivative value using the nearest point the best approximation?
But even if some esoteric example can be concocted which defeats this
attempt to rid the problem of technical flaws, I am still sympathetic to
the point of the problem. Here is my reason: when we illustrate a tangent
line to the graph of a function at a certain point, do we not make a sketch
showing the graph of a function which is monotonic to the right of the
point of tangency? Do we not do this in connection with a discussion of
looking at a succession of approximations? If we use a standard
asymmetric difference quotient, do we not imply with our sketches and
words that the goal (a slope which defines a tangent line) is approached
more and more closely as the denominator of the difference quotient
approaches 0? How many of us then take the time to show that these
implications need not be true in every case?
The derivative rules we expect the students to know are usually
presented and justified by an analysis of a difference quotient.
Whether or not students follow the details of the discussion, they know
it is essential for doing well in the course to learn the bottom line
and to practice its use. In fact, in my view, a good deal of
mathematics instruction encourages students to wait for the bottom line
because "that is where the money is."
To me the question under discussion is asking students to show some
numerical and geometric understanding of some of the intermediate
lines. This is why I am so sympathetic to this question and to reform
calculus in general. I have been teaching calculus for more than 20
years and I recall having the experience of incidentally discovering
that even students who were masters of bottom lines sometimes had little
feeling for what the story was all about. This is in spite of the fact
that I tried to say some clever and meaningful things to them on the way
to the bottom line.
When computers (including hand held calculators) can do the bottom line
work for us, we are free to concentrate more on what the story is all
about. To me a bottom line calculus course is colored only in black
while a reform calculus course, well done, is in full color.
I recently gave a talk in which I told about some preliminary steps I
have taken recently with students to whom I intended to introduce the
Chain Rule. I have them think about a sine wave position function and
its associated velocity function. Then I ask the students questions
inserted into several daily assignments to get them thinking about the
effect, if any, on the model velocity function if the fundamental period
of the position function is decreased. If the right questions are
asked, students can come to the view that in such a case, it is
reasonable to expect the amplitude of the velocity function to increase.
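(A quick check of that intuition, added as a sketch and not part of the
original letter: for x(t) = A sin(2 pi t / T) the velocity is
x'(t) = (2 pi A / T) cos(2 pi t / T), so halving T doubles the velocity
amplitude.)

    import numpy as np

    A, t = 1.0, np.linspace(0, 4, 100_000)
    for T in (2.0, 1.0):
        v = np.gradient(A * np.sin(2 * np.pi * t / T), t)
        print(T, round(v.max(), 2))   # amplitude ~ 2*pi*A/T: about 3.14, then 6.28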
One person attending the talk wondered how I would have time to teach a
complete calculus course if I took so much time just setting up a march
to the bottom line of a derivation (or proof) of the Chain Rule. I
think that a rush to the bottom line without any sense of why it might
be reasonable is rather empty intellectually.
A reform calculus course certainly need not be less engaging
intellectually than a traditional type calculus course. Quite the
contrary. We who want to share a technicolor version of calculus with
our students face a great challenge. They are very accustomed to bottom
line mathematics courses. We want them to open their eyes and minds.
Richard Sisley | {"url":"http://mathforum.org/kb/thread.jspa?threadID=160236","timestamp":"2014-04-21T05:20:49Z","content_type":null,"content_length":"24178","record_id":"<urn:uuid:563492dd-ecad-4532-9dcd-6109fb87e376>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00258-ip-10-147-4-33.ec2.internal.warc.gz"} |
Diagonal invariants of the symmetric group on $k[X_1,X_2,...,X_n,Y_1,Y_2,...,Y_n]$
This sounds like something that must have been answered long ago, but for some reason I can find nothing on it in the internet. (There has been lots of recent activity in diagonal covariants, related
to the $n!$ conjecture, but invariants seem to have become a stepchild in this process.)
Let $k$ be a field of characteristic $0$. Let $n\in\mathbb N$. The group $S_n\times S_n$ acts on the polynomial ring $k\left[X_1,X_2,...,X_n,Y_1,Y_2,...,Y_n\right]$ by
$\left(\sigma,\tau\right)\left(P\right) = P\left(X_{\sigma\left(1\right)},X_{\sigma\left(2\right)},...,X_{\sigma\left(n\right)},Y_{\tau\left(1\right)},Y_{\tau\left(2\right)},...,Y_{\tau\left(n\right)}\right)$.
Thus, the symmetric group $S_n$ also acts on $k\left[X_1,X_2,...,X_n,Y_1,Y_2,...,Y_n\right]$ due to the diagonal embedding $S_n\to S_n\times S_n$.
Since the action of $S_n$ is not generated by pseudoreflections, it follows from the converse of the Chevalley-Shephard-Todd theorem that $k\left[X_1,X_2,...,X_n,Y_1,Y_2,...,Y_n\right]$ is not a free
$k\left[X_1,X_2,...,X_n,Y_1,Y_2,...,Y_n\right]^{S_n}$-module. But it is easy to see that $k\left[X_1,X_2,...,X_n,Y_1,Y_2,...,Y_n\right]$ is a free $k\left[X_1,X_2,...,X_n,Y_1,Y_2,...,Y_n\right]^{S_n\
times S_n}$-module of rank $n!^2$.
Question: Is $k\left[X_1,X_2,...,X_n,Y_1,Y_2,...,Y_n\right]^{S_n}$ a free $k\left[X_1,X_2,...,X_n,Y_1,Y_2,...,Y_n\right] ^ {S_n\times S_n}$-module?
invariant-theory rt.representation-theory symmetric-polynomials symmetric-functions
Possibly related: Section 18 of Hecke's Lectures on the Theory of Algebraic Numbers. – Pierre-Yves Gaillard Feb 14 '12 at 17:51
Well, the $S_n\times S_n$-invariants appear in many places (also in the theory of $\lambda$-rings); they are very well-behaved. – darij grinberg Feb 14 '12 at 18:01
An approach is to show that $k[\cdots]^{\Delta W}$ is Cohen-Macaulay, and therefore free over $k[\cdots]^{W^k}$. See section 3 of Stanley's "Invariants of finite groups and their applications to
combinatorics" math.mit.edu/~rstan/pubs/pubfiles/38.pdf – Gjergji Zaimi Feb 14 '12 at 19:02
Why exactly should this action not be generated by reflections? A transposition $\sigma\in S_n$ acts as a reflection on $k^n$. Hence the action of $(\sigma,1)$ on $k^n\times k^n$ is also a
reflection. $S_n\times S_n$ is generated by $\lbrace(\sigma,1),(1,\tau) | \sigma,\tau\,\text{Transp.}\rbrace$. – Johannes Hahn Feb 14 '12 at 19:15
The action is generated by symplectic reflections, though. – Mariano Suárez-Alvarez♦ Feb 14 '12 at 19:35
I believe the answer to your question is yes.
The Reynolds operator $$ R: k[X_1,\ldots,X_n,Y_1,\ldots,Y_n] \longrightarrow k[X_1,\ldots,X_n,Y_1,\ldots,Y_n]^{S_n} $$ is $k[X_1,\ldots,X_n,Y_1,\ldots,Y_n]^{S_n}$-equivariant, and therefore
in particular $k[X_1,\ldots,X_n,Y_1,\ldots,Y_n]^{S_n\times S_n}$-equivariant. Also, $R$ is a projection, and therefore the image of $R$ is a direct summand of its domain of definition. It follows therefore that the $k[X_1,\ldots,X_n,Y_1,\ldots,Y_n]^{S_n\times S_n}$-module $k[X_1,\ldots,X_n,Y_1,\ldots,Y_n]^{S_n}$ is a direct summand of the free $k[X_1,\ldots,X_n,Y_1,\ldots,Y_n]^{S_n\times S_n}$-module $k[X_1,\ldots,X_n,Y_1,\ldots,Y_n]$.
Now since the action of $S_n\times S_n$ is generated by reflections, its invariant ring is a polynomial ring, and therefore any direct summand of a free module over this ring is again free (this is the Quillen–Suslin theorem).
Edit (concerning the rank, which is $n!$): Let $G$ be a finite group acting on $R=k[x_1,\ldots,x_n]$ and let $H\leq G$ be a subgroup. Then $$ {\rm frac}(R^G)\otimes_{R^G} R^H \cong (R^G-\{0\})^{-1}R^H \cong {\rm frac}(R^H) $$
where ${\rm frac}$ denotes the fraction field (this is easily seen from the fact that by a construction similar to the Reynolds operator, the denominator of a fraction over $R^H$ can always
be made $R^G$-invariant). Let $Q={\rm frac}(R)$, then one can show in a similar fashion $Q^G = {\rm frac}(R^G)$. We conclude that $$ {\rm rank}_{R^G} R^H = {\rm dim}_{{\rm frac}(R^G)} {\rm
frac} (R^H) = [Q^H:Q^G] = [G:H] $$ the last equality being due to Galois theory.
Hmm. Nice argument (+1), but I'd prefer something without using the Quillen-Suslin theorem, and maybe actually something that gives me the rank of that module... – darij grinberg Feb 14
'12 at 18:17
I think this can be extended to show that the rank is $n!$. – Florian Eisele Feb 14 '12 at 18:24
Okay Darij, I added a proof that the rank is $n!$ (this should be fairly standard, I think I know it from an invariant theory lecture). However, of course, what I suppose you really want
is an explicit basis. – Florian Eisele Feb 14 '12 at 18:44
Thanks for this - it is a very good example for the use of advanced commutative algebra in classical invariant theory. I indeed was looking for an explicit basis, though. Do you happen to
know whether the proof of Quillen-Suslin (at least one of them) uses schemes or other modern algebraic geometry nontrivially? If so, feel free to post this on mathoverflow.net/questions/
76942/… . – darij grinberg Feb 14 '12 at 19:36
Unfortunately I don't know much about the Quillen-Suslin theorem beyond the fact that it exists (although I just had a very quick look at Quillen's paper out of interest, and it indeed
seems to be mostly algebraic geometry rather than commutative algebra). – Florian Eisele Feb 14 '12 at 20:32
I finally got around to writing the promised details. I tried to make this a bit instructive; I hope you still find it useful.
First I will expand a bit on my comment above. A good reference is Stanley's article "Invariants of finite groups and their applications to combinatorics". There is a folklore theorem (which
first appeared in print in M. Hochster and J. A. Eagon, Cohen-Macaulay rings, invariant theory, and the generic perfection of determinantal loci, Amer. J. Math. 93 (1971), 1020-1058) which
says that for a finite subgroup $G$ of $Gl_n(\mathbb C)$ the algebra of invariants $\mathbb C[x_1,x_2,\dots,x_n]^G$ is Cohen-Macaulay. Therefore if $G$ is a subgroup of $G'$ and $G'$ is
generated by pseudoreflections we get as a corollary of the Chevalley-Shephard-Todd theorem that $\mathbb C[x_1,\dots,x_n]^G$ is free over $\mathbb C[x_1,\dots,x_n]^{G'}$. In particular, this
holds for $G'=S_n\times S_n$ and $G$ its diagonal subgroup.
Now, from a combinatorics perspective, we aren't simply satisfied by calculating the dimension of a polynomial algebra over another, but we would also like to exhibit a nice basis. I think
it's worth spending some time understanding the case of $R=\mathbb C[x_1,\dots,x_n]$ over $\mathbb C[x_1,\dots,x_n]^{S_n}$ first, because the multivariate cases are similar in nature.
It's not hard to prove that the dimension of $R$ over $R^{S_n}$ is $n!$ and moreover there are two standard bases one learns about:
• The Artin Basis, consisting of monomials $x_1^{a_1}x_2^{a_2}\cdots x_n^{a_n}$, with $0\le a_i\le n-i$ for all $i$.
• Schubert polynomials.
Schubert polynomials are a nice basis, because they are indexed over combinatorial objects, and satisfy many combinatorial and geometric properties. The Artin basis, on the other hand, makes
it easy to see that the Hilbert series is $(1+q)(1+q+q^2)\cdots (1+q+\cdots+q^{n-1})=[n]_q!$, yet it somewhat hides the presence of the symmetric group.
We know that $[n]_q!$ is the generating function of any Mahonian statistic on the symmetric group, so it would be nice to have a basis to reflect that. We can do this by the so-called "descent" basis, and we will see that this is a construction that generalizes to the bivariate case (this is essentially the content of Bergeron and Lamontagne's paper).
The most famous Mahonian statistic is the major index. Our basis is modeled after this statistic and is indexed over permutations. Since we have $$\operatorname{maj}(\sigma) = \sum_{\sigma_{i+1} < \sigma_i} i,$$ the most natural thing to try is the collection of monomials $$b_{\sigma} = \prod_{i=1}^{n-1} (x_{\sigma_1} \cdots x_{\sigma_i})^{\chi(\sigma_i > \sigma_{i+1})}.$$
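(A small worked example for concreteness, not in the original answer: for $n=2$, the identity permutation $12$ has no descent while $21$ has a descent at $i=1$, so $$b_{12} = 1, \qquad b_{21} = x_{\sigma_1}^{\chi(\sigma_1 > \sigma_2)} = x_2,$$ and indeed $\{1, x_2\}$ is a basis of $\mathbb C[x_1,x_2]$ over $\mathbb C[x_1,x_2]^{S_2}$, matching the rank $2! = 2$.)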
To prove that the $b_{\sigma}$'s form a basis it is enough to show that the polynomials $m_{\lambda}b_\sigma$ where $m_\lambda$ ranges over all symmetric monomials parametrized by partitions
$\lambda$ are linearly independent. And the proof goes like this: we will construct a bijection $(\lambda,\sigma)\leftrightarrow \mu$ where $\mu,\lambda$ are partitions with at most $n$
parts and $\sigma\in S_n$, and then use a Grobner type argument, i.e. show that the leading monomial in $m_{\lambda}b_{\sigma}$ is precisely $X^{\mu}=x_1^{\mu_1}\cdots x_n^{\mu_n}$. You will
find this argument spelled out in detail in section 2 of Allen's "The Descent Monomials and a Basis for the Diagonally Symmetric Polynomials".
To guess a basis for the bivariate case, Bergeron and Lamontagne play a similar game, where they first calculate the Hilbert series of $R^{S_n}$ over $R^{S_n\times S_n}$. They actually
calculate the Frobenius series and obtain the expression $$(q;q)_n(t;t)_n h_n\left[\frac{1}{(1-q)(1-t)}\right]$$ in plethystic notation. But this is a well known generating function over
$S_n$. Namely it is $$\sum _{\sigma\in S _n} q^{\operatorname{maj}(\sigma)}t^{\operatorname{maj}(\sigma^{-1})}.$$ So they construct a basis $$B _{\sigma}=\rho b _{\sigma}(X)b _{\sigma^{-1}}
(Y)$$ similar to the construction above, where $\rho$ is the Reynolds operator. To be able to use a Grobner type argument, they construct a bijection $(\lambda _1,\lambda _2,\sigma)\
leftrightarrow (\mu _1,\mu _2)$ (section 12), and they are able to show that the polynomials $m _{\lambda _1}(X)m _{\lambda _2}(Y)B _{\sigma}(X,Y)$ are linearly independent (section 13).
Finally, a word on the case of diagonal coinvariants. It is a big open problem to exhibit a basis for the space of diagonal coinvariants, $\mathbb C[X,Y]$ over $\mathbb C[X,Y]^{S_n}$. Even
though, there is a conjectured form for the Hilbert series as a generating function of two statistics over parking functions (the dimension here is no longer $n!$, rather $(n+1)^{n-1}$,
proved by Haiman in 2001), these statistics are not natural enough to let one guess what the corresponding basis will be.
Thanks for the basis! Allen's paper looks nice; what I don't like is how it refers to a Garsia paper I can't find. I already knew of the Bergeron-Lamontagne paper but couldn't locate the
relevant things there; honestly I still can't. – darij grinberg Feb 14 '12 at 21:53
If I have some time later today, I will expand this answer to give the proofs. – Gjergji Zaimi Feb 14 '12 at 21:59
@darij: Isn't it the case that Proposition 13.1 of Bergeron-Lamontagne actually contains an answer to your question? – Vladimir Dotsenko Feb 17 '12 at 8:14
I'm trying to figure out an easy way to see what a full & half share person would be charged. My friends and I are getting a ski house and we would like to see what it would cost each person.
For example,
Total Rental Price: $3,800
4 Full Shares and 1 Half Share.
I tried dividing 3800/5 which gave me 760. Then I divided the 760/2 which gave me 380. The 380 would be what the base price of the half share would pay. But now I have an extra 380 that is left over,
so I took that 380 and divided it by 5. That gave me 76 which I added on to the 4 full shares. But I still have an extra 76 that I can't add to the half share, so I divide 76/2 that gives me 38. I
add that 38 to the 380 which gives me a total of 418 for the half share. And with that extra 38 I divide it by 4 which gives me 9.50. Finally that 9.50 I add to the full share for a total of 845.50
Full Share: 845.50 (845.50 x 4 = 3,382)
Half Share: 418.00
3,382 + 418 = 3,800
Now this gives me the right total price, but if I divide 845.50/2 it gives me 422.75, which does not equal 418. So somewhere along the line my calculation is wrong.
Does anybody know an easier way to calculate this or a formula that can calculate it the right way?
Thank you, | {"url":"http://www.mathisfunforum.com/post.php?tid=1796&qid=16592","timestamp":"2014-04-21T09:57:36Z","content_type":null,"content_length":"16735","record_id":"<urn:uuid:e9560f31-5ff6-4a3f-bcaa-81eee64753bc>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00432-ip-10-147-4-33.ec2.internal.warc.gz"} |
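(One direct way to set this up, a small sketch in Python: treat a half share as 0.5 of a full share, so the total divides into 4.5 share-units.)

    total = 3800
    full_shares, half_shares = 4, 1

    units = full_shares + 0.5 * half_shares   # 4.5 share-units in all
    full = total / units                      # 844.44...
    half = full / 2                           # 422.22...

    print(round(full, 2), round(half, 2))     # 844.44 and 422.22
    print(round(4 * full + half, 2))          # 3800.0, so the total checks out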
tiny change hangs kernel - vectorizing problem
Hi folks; it's been a while but I thought I'd post my final problem here and see if it resonates with anyone.
I have a kernel that runs beautifully on nVidia, which I am vectorizing for an AMD 5870.
I'm doing some trig, so to eliminate branching I have turned things like
if ( T < 0.f ) T += 360.f;
T += ( T < 0.f ) * 360.f;
(This is of course necessary so as to process all 4 elements of the vector in one go; if the branch were used then it would have to be performed individually per each element of the vector.)
... all cool, and the logic is good; it works for float1s. (And, doesn't hurt performance!!!??, even if it's doing 8 complex calculations for 8 different conditions; I am happily surprised....)
-> However, when you're using float4s, the value of a comparison is different. Instead of getting +1 back from a logical comparison, you get -1. SO, in order to use it in a calculation, as above,
it's necessary to somehow change that -1 to a +1 for the equation to yield what it needs to.
This hangs the 5870.
I have #defined FLOGIC(x) to take, e.g., ( T < 0.f ) and change the sign of the result, in a number of ways:
#define FLOGIC(x) (float4) -(x);
#define FLOGIC(x) (float4) (x) * -1.f;
#define FLOGIC(x) (float4) ( x * -1 );
#define FLOGIC(x) (float4) ( abs( x ) );
#define FLOGIC(x) fabs( (float4) x );
. . . if I don't do this, if I use the result of the logical comparison as originally depicted way up above, then the kernel compiles and runs beautifully except for the fact that the -1 ruins
all the calculations it touches and the results are useless.
. . . if I *DO* do this, if I attempt any of the above-described methods to reverse the sign of the float4 logical comparison, the kernel never comes back. (Same deal if I use a function instead
of a #define. I can do anything in that function or #define +except+ change the sign without terminally messing things up.)
It sails through clBuildProgram and clCreateKernel, enqueues fine, and then clFinish hangs the whole machine. [ Mac Pro ]
The cursor still follows the mouse around but the clock is frozen and so is everything else, requiring a hard boot.
Does anybody have any ideas?
p.s. what fun to be here in opencl's early days, huh?
Re: tiny change hangs kernel - vectorizing problem
Neverrrrrrr miiiiiiiiiiind ! . . .
Yes, it was a tiny change, but re: what I was trying to do, I now see this, in 6.2.2:
Explicit casts between vector types are not legal. The examples below will generate a compilation error.
float4 f;
int4 i = (int4) f; <- not allowed
... well, I didn't get the compilation error, but it is in fact not supposed to work.
to recap what i was trying to do:
The results of a comparison return an integer vector of the same length as what was compared, so, if I want to formularize a float4 comparison, I want to take something like
if (a<b) x=c; else x=d;
and replace it with
x = ( a < b ) * c + ( a >= b ) * d;
where a, b, c, d, and x are all float4s. (the expressions get *much* more complex than this simple example)
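(As a host-side sanity check, here is a quick sketch in NumPy rather than OpenCL, just to confirm the branchless form matches the if/else elementwise; note that NumPy comparisons yield 0/1 booleans, which is exactly the property the float4 path above lacks:)
Code :
import numpy as np

a = np.array([1.0, 5.0, 2.0, 9.0])
b = np.full(4, 3.0)
c = np.array([10., 20., 30., 40.])
d = np.array([-1., -2., -3., -4.])

branchless = (a < b) * c + (a >= b) * d   # booleans act as 0/1 here
reference = np.where(a < b, c, d)
print(np.array_equal(branchless, reference))   # True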
But, the compiler told me it couldn't implicitly convert the int4 result of the comparison to float4 for the multiplication I wanted. SO, I put in an EXplicit conversion! It seemed to like it at
first, until I went back to address the -1 problem, which caused Lion to slink off into the weeds when clFinish is called.
Surely there is some way to get the values out of the elements of an int4 and put them into a float4. I suppose I could copy each element individually, casting them when they're temporarily
scalars.... maybe there are explicit conversion functions i haven't stumbled across ... I'll look....
Thanks for your attention!
Re: tiny change hangs kernel - vectorizing problem
That's right, the original expression should produce a compilation error:
float4 T;
T += ( T < 0.f ) * 360.f;
(T < 0.f) is a relational containing a vector and a scalar; 0.f will be widened to float4 and then the comparison is performed element-wise, returning an int4.
According to section 6.3(d), vector relational operators return -1 (all bits set) for true. The evaluation so far would be:
(int4)(...) * 360.f
However, according to section 6.2.6, this should be an error because the rank of the scalar type (float) is greater than the rank of the vector type (int).
This expression would be legal if the scalar had the same rank as the vector type, e.g.
(int4)(...) * 360
However, implicit conversions between vector types are not permitted (6.2.1), so
T += (int4)(...)
is illegal, since T is a float4. Casts between vector types are also illegal (6.2.2), so it is necessary to use an explicit conversion. For example:
T += convert_float4((((T < 0.f ) & (int4)(~0x1)) * 360);
Re: tiny change hangs kernel - vectorizing problem
T += convert_float4((((T < 0.f ) & (int4)(~0x1)) * 360);
Can't you do something like this?
Code :
// Original: if ( T < 0.f ) T += 360.f;
T += as_float4(as_int(360.f) & (T < 0.f));
Re: tiny change hangs kernel - vectorizing problem
T += convert_float4((((T < 0.f ) & (int4)(~0x1)) * 360);
Can't you do something like this?
Code :
// Original: if ( T < 0.f ) T += 360.f;
T += as_float4(as_int(360.f) & (T < 0.f));
Or perhaps this might be a touch more readable (:
T += select(0.0f, 360.0f, T < 0.0f);
Re: tiny change hangs kernel - vectorizing problem
Thanks, folks, for your ideas. I like your thinking....
For illustration purposes only, here is a slightly more complex example:
for float1, #define FLOGIC(a) (a)
for float4, #define FLOGIC(a) convert_float4( abs(a) )
Code :
H = atan( bb / aa ) \
+ FLOGIC ( aa < 0.f ) * PI \
+ FLOGIC ( bb < 0.f && aa >= 0.f ) * PIx2;
... I have others that go on for 6 lines or more ... so you may see that I wish to compute each logical condition as a self-contained unit and then apply it to the rest of the clause, for each
clause in each calculation.
•) ajs2, I like the (~0x1), which should be faster than abs(), but for some reason when I tried it half of the output buffer was vertically inverted. Not so with abs(). Something to figure out
later after I attend to other logic problems.
•) david, your idea of using as_type() with bitwise AND should be faster than the multiplications, and I will do that for performance purposes . . . after I get my logic working. (That's probably
why they went with -1 in the first place; why didn't I think of that? Converting to +1 and float-multiplying each clause is not very smart.)
•) notzed, I thought about select, but it wouldn't work for any but the simplest two-alternative instances (of which there are plenty, but still). (Also, with #defines, this same 1500 lines of
code will work for float4, float1, or straight Xcode C, where select() doesn't exist ... though I could #define one.)
The example I posted earlier with a value of 360 may not have been the best choice; I'm usually using non-integer-value floatns. So, thanks for the ideas which incorporated that calc in integer
form, but I'll want to separate the operations. (For now. Perhaps later I'll go around seeking out every little performance optimization....)
Still a ways off, but perhaps I can see a glimmer of light ... here's what I get:
#define vex (turns on float4s):
Lion, Xcode 4:
CPU (Xeon) -- executes; gives me bad data but recognizable as being partially correct...
GPU (5870) -- still hangs Lion when clFinish is called....
Snow Leopard, Xcode 3:
CPU ( I7 ) -- LLVM compiler has failed to compile a function
GPU (330M) -- LLVM compiler has failed to compile a function
//#define vex (float4s turned off):
All devices on both platforms -- works well!
what fun....
Re: tiny change hangs kernel - vectorizing problem
Sounds like you're just making work for yourself if i understand you correctly. 1500 lines of code isn't all that much, and opencl c is different enough to c that trying to wrap it all in macros
and create a pseudo custom language that is converted to either at compile time sounds like a lot of hassles for little benefit. Apart from the language the very different hardware requires
sometimes radically different approaches. And poor choices can be really really expensive.
Back to select, it can surely be used for any decision logic you can implement any other way (excluding branches in control flow): so i'm not sure what you're talking about here. It might not be
too pretty, but that FLOGIC stuff isn't either - and that is less general, it's choice is either '0' or 'a number'.
But now your goal is clearer ... from the specification:
The ternary selection operator (?:) evaluates the first expression exp1, which can be a scalar or vector result except
float. If the result is a scalar value then it selects to evaluate the second expression if the
result compares unequal to 0, otherwise it selects to evaluate the third expression. If the
result is a vector value, then this is equivalent to calling select(exp3, exp2, exp1). The select
function is described in table 6.14. The second and third expressions can be any type, as
long their types match, or there is a conversion in section 6.2.1
So you could probably just use ?: and it should 'just work'.
Re: tiny change hangs kernel - vectorizing problem
Sounds like you're just making work for yourself if i understand you correctly. 1500 lines of code isn't all that much, and opencl c is different enough to c that trying to wrap it all in macros
and create a pseudo custom language that is converted to either at compile time sounds like a lot of hassles for little benefit. Apart from the language the very different hardware requires
sometimes radically different approaches. And poor choices can be really really expensive.
Making work for myself? ... well, yep, that's me!! I tend not to do things the easy way at first. However, I code straight plain K&R C so the difference is not too great. I just learned enough
Cocoa and Objective-C over a couple of months to bring my project up to the present from the old CodeWarrior days and allow the use of current hardware and OpenCL. I do understand that different
GPU substrates may require different approaches for best performance (i.e. later vectorization thread you've contributed to) but at base I was taught "Make it work first, make it pretty later".
Back to select, it can surely be used for any decision logic you can implement any other way (excluding branches in control flow): so i'm not sure what you're talking about here. It might not be
too pretty, but that FLOGIC stuff isn't either - and that is less general, it's choice is either '0' or 'a number'.
OK, *now* I see what you mean; drew a blank at first. So, given that everything I need _is_ currently either 0 or a number, I could say:
Code :
val = \
( conditionA ? subcalcA : 0 ) \
+ ( conditionB ? subcalcB : 0 ) \
+ ( conditionC ? subcalcC : 0 ) ...
... or the same thing with select().
I can pop that into my #def and see if it affects performance ... later. Because I did actually get that bit to work, and my attention is elsewhere at the moment.
Originally posted by adrianxw
How do you think people entered code into machines before they had cute languages/assemblers to make it easier?
I can well remember long sessions in front of a desk with a big row of up/down switches, where you set the switches and then pressed the "Clk" button to read that BINARY in, and repeat for HOURS.
Then when you figured you'd got it all in right, you pressed "Exec" or something and watched while nothing happened.
Of course you can program in binary, (or hex if you have the relative luxury of a hex keypad).
You ancient programmers and your crazy stories. Have any others?
Originally posted by adrianxw
How do you think people entered code into machines before they had cute languages/assemblers to make it easier?
I can well remember long sessions in front of a desk with a big row of up/down switches, where you set the switches and then pressed the "Clk" button to read that BINARY in, and repeat for HOURS.
Then when you figured you'd got it all in right, you pressed "Exec" or something and watched while nothing happened.
You don't try lots of random combinations when programming low-level circuits; there is a lot of fine math behind that.
That would be the same as randomizing lots of letters hoping it would end up in a C/C++ program...
I can well remember long sessions in front of a desk with a big row of up/down switches, where you set the switches and then pressed the "Clk" button to read that BINARY in, and repeat for HOURS.
Then when you figured you'd got it all in right, you pressed "Exec" or something and watched while nothing happened.
I follow a similar process with my VC compiler.
Long sessions in front of a desk... Check.
Repeat for hours... Check.
Press 'Exec' when you figure you got it right... Check.
But then the 'nothing happening' part is replaced by a horrible display of cascading errors and eventual system lock and crash resulting in multiple reboots and backup restorations. Just goes to
show how much better we've got it now, hey? None of this 'nothing happened' junk. Pfft. Who's got time for that?
"There's always another way"
-lightatdawn (lightatdawn.cprogramming.com)
>>> I follow a similar process with my VC compiler.
Yeah, that's about right!
>>> Have any others?
How about this. Another system I used many years ago (a Ferranti Argus computer) was programmed, again in binary, by pulling out trays, each of which was a large matrix of small round holes.
To program the thing, you put a small ferrite bead in a hole to make a 0, or left it empty to make a 1. A small design flaw with the system meant you could pull the tray out all the way, whereupon the
back fell to the floor and all the little beads fell out and rolled away.
>>> >>> Have any others?
Dozens probably - give me my 2.5 GHz P4 and my VC++ any day. The old systems were fun at times, but for the most part were tedious, irritating and unreliable.
>>> ancient programmers
I am not ancient yet, I may be older than some in here, true.
Wave upon wave of demented avengers march cheerfully out of obscurity unto the dream.
Well you will never learn binary and i feel that it is a programing language.
Well then you missed the whole first part of the topic. It's not a matter of opinion -- Binary is not a language, it's just a number system on which languages can be formed.
Originally posted by Polymorphic OOP
Well then you missed the whole first part of the topic. It's not a matter of opinion -- Binary is not a language, it's just a number system on which languages can be formed.
Binary is one of our first programming languages, aside from machine and assembly. More modern languages evolved from those three, and eventually became more text-based. Just because
binary is numbers doesn't mean that it isn't a language. We write C/C++ with words. By your reasoning, using "cout<<" cannot be deemed a language. See where I'm going with this?
This has been a public service announcement from GOD.
Yes, actually it does mean it's not a language, as I already explained if you would have happened to read my earlier posts in this topic.
If you think that binary is a language, then you DON'T have an understanding of what it really is.
Binary is a medium where other languages are formed. If you can't understand that after my explanations at the beginning of the topic, then, well, get help.
There is no binary language. There are machine codes, assembly languages, etc. If you think binary is a language then you probably falsely think of machine code as "binary language."
Originally posted by Polymorphic OOP
Yes, actually it does mean it's not a language, as I already explained if you would have happened to read my earlier posts in this topic.
If you think that binary is a language, then you DON'T have an understanding of what it really is.
Binary is a medium where other languages are formed. If you can't understand that after my explanations at the beginning of the topic, then, well, get help.
There is no binary language. There are machine codes, assembly languages, etc. If you think binary is a language then you probably falsely think of machine code as "binary language."
How very true.
toaster, you are kinda right, but you're crazy
5 entries found.
From Webster's Revised Unabridged Dictionary (1913) [web1913]
Binary \Bi"na*ry\, a. [L. binarius, fr. bini two by two, two at
a time, fr. root of bis twice; akin to E. two: cf. F.
Compounded or consisting of two things or parts;
characterized by two (things).
{Binary arithmetic}, that in which numbers are expressed
according to the binary scale, or in which two figures
only, 0 and 1, are used, in lieu of ten; the cipher
multiplying everything by two, as in common arithmetic by
ten. Thus, 1 is one; 10 is two; 11 is three; 100 is four,
etc. --Davies & Peck.
{Binary compound} (Chem.), a compound of two elements, or of
an element and a compound performing the function of an
element, or of two compounds performing the function of
{Binary logarithms}, a system of logarithms devised by Euler
for facilitating musical calculations, in which 1 is the
logarithm of 2, instead of 10, as in the common
logarithms, and the modulus 1.442695 instead of .43429448.
{Binary measure} (Mus.), measure divisible by two or four;
common time.
{Binary nomenclature} (Nat. Hist.), nomenclature in which the
names designate both genus and species.
{Binary scale} (Arith.), a uniform scale of notation whose
ratio is two.
{Binary star} (Astron.), a double star whose members have a
revolution round their common center of gravity.
{Binary theory} (Chem.), the theory that all chemical
compounds consist of two constituents of opposite and
unlike qualities.
From Webster's Revised Unabridged Dictionary (1913) [web1913]
Binary \Bi"na*ry\, n.
That which is constituted of two figures, things, or parts;
two; duality. --Fotherby.
From WordNet (r) 1.6 [wn]
adj 1: of or pertaining to a number system have 2 as its base; "a
binary digit"
2: consisting of two (units or components or elements or terms)
or based on two; "a binary star is a system in which two
stars revolve around each other"; "a binary compound";
"the binary number system has two as its base"
n : a system of two stars that revolve around each other under
their mutual gravitation [syn: {binary star}, {double
From The Free On-line Dictionary of Computing (13 Mar 01) [foldoc]
1. <mathematics> {Base} two. A number representation
consisting of zeros and ones used by practically all computers
because of its ease of implementation using digital
electronics and {Boolean algebra}.
2. <file format> Any file format for {digital} {data} encoded
as a sequence of {bit}s but not consisting of a sequence of
printable {characters} ({text}). The term is often used for
executable {machine code}.
Of course all digital data, including characters, is actually
binary data (unless it uses some (rare) system with more than
two discrete levels) but the distinction between binary and
text is well established.
3. <programming> A description of an {operator} which takes
two {arguments}. See also {unary}, {ternary}.
Give a man a fish and you feed him for a day.
Teach a man to fish and you feed him for a lifetime.
Poly, what is meant by programming in binary "as a language" is simply to write the opcodes and such out in their rawest form. I had an uncle that programmed oil-refinery equipment this way back
in the 50's. This was literally writing a program like:
3F 99 5B A2, etc.
The fact that the term binary refers to the number system, as well as the data streams associated with computing doesn't invalidate this use of the word.
if( numeric_limits< byte >::digits != bits_per_byte )
error( "program requires bits_per_byte-bit bytes" );
Yes, it does, because there is no BINARY LANGUAGE. If there was a binary language you could program in it, but you can't. You can program in machine code, but that varies. The machine codes are different languages, NOT binary.
You write with letters, that doesn't make our ALPHABET a language, does it? Just because you can deal with the language in binary doesn't mean a thing.
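For what it's worth, a small Python sketch makes the point concrete: the same four bytes quoted above (3F 99 5B A2) read as completely different things depending on the encoding you choose, so the bits alone carry no fixed meaning. (The integer and float readings below are just illustrations.)

import struct

# One 32-bit pattern, three readings -- the bits don't "mean" anything
# until an encoding (a language) is chosen for them.
raw = bytes([0x3F, 0x99, 0x5B, 0xA2])
print(struct.unpack(">I", raw)[0])   # as a big-endian unsigned integer
print(struct.unpack(">f", raw)[0])   # as an IEEE-754 float, about 1.198
print(raw.hex())                     # as machine code it depends on the CPU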
12-02-2002 #30 | {"url":"http://cboard.cprogramming.com/brief-history-cprogramming-com/29643-binary-2.html","timestamp":"2014-04-17T11:12:40Z","content_type":null,"content_length":"104754","record_id":"<urn:uuid:cd3b785c-77f3-471b-9a81-1e7526bab17d>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00639-ip-10-147-4-33.ec2.internal.warc.gz"} |
Identification of an Abnormal Beryllium Lymphocyte Proliferation Test
Figures (PDF format)
1. Fig1.pdf Histograms and Normal Q-Q Plots PHA: Log and Linear Scale
The panels on the left show the histograms of the SIs. The top left is for Ln(SI)s and the bottom left is for the SIs. The panels on the right are normal Q-Q plots. If the data in the histogram (on the left) are normally distributed, then the normal Q-Q plot (on the right) should look like a straight line. These plots clearly show that the Ln(SI)s follow the normal distribution, i.e., the SIs follow the lognormal distribution. (A minimal code sketch of this log-transform/Q-Q check follows this list.)
2. Fig2.pdf Histograms and Normal Q-Q Plots ConA: Log and Linear Scale
The panels on the left show the histograms of the SIs. The top left is for Ln(SI)s and the bottom left is for the SIs. The panels on the right are normal Q-Q plots. If the data in the histogram (on the left) are normally distributed, then the normal Q-Q plot (on the right) should look like a straight line. These plots clearly show that the Ln(SI)s follow the normal distribution, i.e., the SIs follow the lognormal distribution.
3. Fig3.pdf Histograms of the SIs for the beryllium workers and nonexposed BeLPTs. Numbers in parentheses are the outlier-resistant median (M) estimate of location and S, the MAD estimate of the scale parameter. The mean and standard deviation (SD) for each distribution are also given.
4. Fig4.pdf Histograms of the Ln(SI)s for the beryllium workers and nonexposed BeLPTs. The outlier-resistant estimates on the Ln scale of location M (the median) and S, the MAD estimate of the scale parameter, for each distribution are given in parentheses.
5. Fig5.pdf Normal Q-Q plots of Ln(SI)s for each beryllium concentration on day 5 and day 7 for beryllium workers and nonexposed controls. The data values are shown on the vertical axis. The median (M), MAD scale estimate (S) of the Ln(SI)s, and exp(M) are listed on each plot. Values of M and S for beryllium workers (circles) are in the upper left and nonexposed controls (triangles) are in the lower right of each panel.
6. Fig6.pdf Histogram and Normal Q-Q Plots for Ln(SImax) for beryllium workers and nonexposed combined. The Median (M), and MAD scale estimate(S) of the Ln(SI)s are shown.
7. Fig7.pdf Boxplots (left panel) and normal Q-Q plots (right panel) for Ln(SImax). In the right panel summary statistics for nonexposed controls (circles) are shown in the lower right, and for beryllium workers (triangles) in the upper left of the Q-Q plot. A small P value for the Kolmogorov-Smirnov (KS) goodness-of-fit test indicates departure from the normal distribution for Ln(SImax).
8. Fig8.pdf Empirical ROC curves for Ln(SI)s for each beryllium concentration on day 5 and day 7. AUC is the area under the curve. The partial AUC shown in each plot is based on a nonparametric estimate of the area under the ROC curve from 0 to 0.05 on the x-axis (i.e., maximum FPR of practical interest is 0.05). | {"url":"http://www.csm.ornl.gov/~frome/toxsbp/SBPplts.html","timestamp":"2014-04-21T04:31:45Z","content_type":null,"content_length":"3922","record_id":"<urn:uuid:03116f27-e77a-47e0-a964-3f5901a63bdd>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
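The minimal sketch referred to above, in Python, with simulated lognormal SIs standing in for the actual BeLPT data (the distribution parameters are illustrative only):

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

si = np.random.default_rng(0).lognormal(mean=0.2, sigma=0.5, size=500)

fig, ax = plt.subplots(2, 2, figsize=(8, 6))
ax[0, 0].hist(np.log(si), bins=30); ax[0, 0].set_title("Ln(SI) histogram")
ax[1, 0].hist(si, bins=30);         ax[1, 0].set_title("SI histogram")
stats.probplot(np.log(si), dist="norm", plot=ax[0, 1])  # straight line: normal
stats.probplot(si, dist="norm", plot=ax[1, 1])          # curved: not normal
plt.tight_layout(); plt.show()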
Fibred categories and the foundations of naive category theory
- Typed Lambda Calculi and Applications, LNCS 664 , 1993
"... ) 1 J. M. E. Hyland 2 C.-H. L. Ong 3 University of Cambridge, England Abstract This paper is motivated by the discovery that an appropriate quotient SN 3 of the strongly normalising untyped
3-terms (where 3 is just a formal constant) forms a partial applicative structure with the inherent appl ..."
Cited by 14 (1 self)
Add to MetaCart
) 1 J. M. E. Hyland 2 C.-H. L. Ong 3 University of Cambridge, England Abstract This paper is motivated by the discovery that an appropriate quotient SN 3 of the strongly normalising untyped 3-terms
(where 3 is just a formal constant) forms a partial applicative structure with the inherent application operation. The quotient structure satisfies all but one of the axioms of a partial combinatory
algebra (pca). We call such partial applicative structures conditionally partial combinatory algebras (c-pca). Remarkably, an arbitrary right-absorptive c-pca gives rise to a tripos provided the
underlying intuitionistic predicate logic is given an interpretation in the style of Kreisel's modified realizability, as opposed to the standard Kleenestyle realizability. Starting from an arbitrary
right-absorptive c-pca U , the tripos-to-topos construction due to Hyland et al. can then be carried out to build a modified realizability topos TOPm (U ) of non-standard sets equipped with an
- J. Pure Appl. Algebra , 1999
"... Given a 2-category K admitting a calculus of bimodules, and a 2-monad T on it compatible with such calculus, we construct a 2-category L with a 2-monad S on it such that: • S has the
adjoint-pseudo-algebra property. • The 2-categories of pseudo-algebras of S and T are equivalent. Thus, coh ..."
Cited by 13 (2 self)
Add to MetaCart
Given a 2-category K admitting a calculus of bimodules, and a 2-monad T on it compatible with such calculus, we construct a 2-category L with a 2-monad S on it such that: • S has the
adjoint-pseudo-algebra property. • The 2-categories of pseudo-algebras of S and T are equivalent. Thus, coherent structures (pseudo-T-algebras) are transformed into universally characterised
ones (adjoint-pseudo-S-algebras). The 2-category L consists of lax algebras for the pseudo-monad induced by T on the bicategory of bimodules of K. We give an intrinsic characterisation of
pseudo-S-algebras in terms of representability. Two major consequences of the above transformation are the classifications of lax and strong morphisms, with the attendant coherence result for
pseudo-algebras. We apply the theory in the context of internal categories and examine monoidal and monoidal globular categories (including their monoid classifiers) as well as pseudo-functors into
, 1991
"... There has recently been considerable interest in the development of `logical frameworks ' which can represent many of the logics arising in computer science in a uniform way. Within the
Edinburgh LF project, this concept is split into two components; the first being a general proof theoretic encodin ..."
Cited by 11 (0 self)
Add to MetaCart
There has recently been considerable interest in the development of `logical frameworks ' which can represent many of the logics arising in computer science in a uniform way. Within the Edinburgh LF
project, this concept is split into two components; the first being a general proof theoretic encoding of logics, and the second a uniform treatment of their model theory. This thesis forms a case
study for the work on model theory. The models of many first and higher order logics can be represented as fibred or indexed categories with certain extra structure, and this has been suggested as a
general paradigm. The aim of the thesis is to test the strength and flexibility of this paradigm by studying the specific case of Girard's linear logic. It should be noted that the exact form of this
logic in the first order case is not entirely certain, and the system treated here is significantly different to that considered by Girard.
, 1997
"... We consider some basic properties of the 2-category Fib of fibrations over arbitrary bases, exploiting the fact that it is fibred over Cat. We show a factorisation property for adjunctions in
Fib, which has direct consequences for fibrations, e.g. a characterisation of limits and colimits for them. ..."
Cited by 7 (0 self)
Add to MetaCart
We consider some basic properties of the 2-category Fib of fibrations over arbitrary bases, exploiting the fact that it is fibred over Cat. We show a factorisation property for adjunctions in Fib,
which has direct consequences for fibrations, e.g. a characterisation of limits and colimits for them. We also consider oplax colimits in Fib, with the construction of Kleisli objects as a particular
example. All our constructions are based on an elementary characterisation of Fib as a fibration.
, 1992
"... We propose a new framework for representing logics, called LF + and based on the Edinburgh Logical Framework. The new framework allows us to give, apparently for the first time, general
definitions which capture how well a logic has been represented. These definitions are possible since we are abl ..."
Cited by 5 (0 self)
Add to MetaCart
We propose a new framework for representing logics, called LF + and based on the Edinburgh Logical Framework. The new framework allows us to give, apparently for the first time, general definitions
which capture how well a logic has been represented. These definitions are possible since we are able to distinguish in a generic way that part of the LF + entailment which corresponds to the
underlying logic. This distinction does not seem to be possible with other frameworks. Using our definitions, we show that, for example, natural deduction first-order logic can be well-represented in
LF + , whereas linear and relevant logics cannot. We also show that our syntactic definitions of representation have a simple formulation as indexed isomorphisms, which both confirms that our
approach is a natural one and provides a link between type-theoretic and categorical approaches to frameworks. 1 Introduction Much effort has been devoted to building systems for supporting the
construction of f...
- In International Workshop on Logical Frameworks and Meta-Languages: Theory and Practice , 2007
"... This note is about work in progress on the topic of “internal type theory ” where we investigate the internal formalization of the categorical metatheory of constructive type theory in (an
extension of) itself. The basic notion is that of a category with families, a categorical notion of model of de ..."
Cited by 5 (2 self)
Add to MetaCart
This note is about work in progress on the topic of “internal type theory ” where we investigate the internal formalization of the categorical metatheory of constructive type theory in (an extension
of) itself. The basic notion is that of a category with families, a categorical notion of model of dependent type theory. We discuss how to formalize the notion of category with families inside type
theory and how to build initial categories with families. Initial categories with families will be term models which play the role of canonical syntax for dependent type theory. We also discuss the
formalization of the result that categories with finite limits give rise to categories with families. This yields a type-theoretic perspective on Curien’s work on “substitution up to isomorphism”.
Our formalization is being carried out in the proof assistant Agda 2 developed at Chalmers. 1
, 1998
"... Introduction Category theory can be viewed as an elementary, i.e. essentially first order, theory independent from set theory. In an elementary topos, i.e. a category satisfying a number of
elementary axioms, one can perform all constructions that one performes with sets in everyday mathematics. Ne ..."
Cited by 2 (0 self)
Add to MetaCart
Introduction Category theory can be viewed as an elementary, i.e. essentially first order, theory independent from set theory. In an elementary topos, i.e. a category satisfying a number of
elementary axioms, one can perform all constructions that one performes with sets in everyday mathematics. Nevertheless, the language of category theory is not expressive enough to capture those
categorical notions that make reference to set theory. Amongst those are: (co-)completeness, (local) smallness, existence of a small set of generators and well-poweredness. If we want to replace the
category of sets by a category B whose objects are regarded as index sets we need an abstract theory of families. Such a theory is the theory of fibred categories. We can choose B as a topos but for
most purposes it suffices that B has pullbacks. A category fibred over B is a functor P : E ! B
- Proc. 1st Conf. on Algebra and Coalgebra in Computer Science CALCO’05, Swansea. Springer LNCS 3629 , 2005
"... Abstract. We show that any institution I satisfying some reasonable conditions can be transformed into another institution, Ibeh, which captures formally and abstractly the intuitions of adding
support for behavioral equivalence and reasoning to an existing, particular algebraic framework. We call o ..."
Cited by 2 (0 self)
Add to MetaCart
Abstract. We show that any institution I satisfying some reasonable conditions can be transformed into another institution, Ibeh, which captures formally and abstractly the intuitions of adding
support for behavioral equivalence and reasoning to an existing, particular algebraic framework. We call our transformation an “extension ” because Ibeh has the same sentences as I and because its
entailment relation includes that of I. Many properties of behavioral equivalence in concrete hidden logics follow as special cases of corresponding institutional results. As expected, the presented
constructions and results can be instantiated to other logics satisfying our requirements as well, thus leading to novel behavioral logics, such as partial or infinitary ones, that have the desired
properties. 1 | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=701353&sort=cite&start=10","timestamp":"2014-04-20T21:08:32Z","content_type":null,"content_length":"36207","record_id":"<urn:uuid:9edbd8ef-b955-4f47-b5d5-f5a4feb327d3>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00496-ip-10-147-4-33.ec2.internal.warc.gz"} |
'Remainders' (from http://nrich.maths.org/)
Example 1:
Divide by 4 and select all the numbers in the right-hand column - they should all turn red.
Now divide by 5 and select all the numbers in the right-hand column - they should all turn yellow, but some will turn orange.
What is special about the numbers that turn orange?
Now divide by 3 and select all the numbers in the right-hand column - most should turn blue, but one will turn black.
What is special about the number that turns black?
What is special about the numbers that turn green and purple?
Example 2: (you will need to clear your previous work)
Find the numbers that have a remainder of 2 when divided by 5 - you'll need to divide by 5 and select the numbers in the second column.
Now select the numbers that have a remainder of 1 when divided by 2 (the odd numbers).
What is special about the numbers that turned orange this time?
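A few lines of Python reproduce the Example 2 colouring without the applet (the range 1-100 is an assumption):

r5 = {n for n in range(1, 101) if n % 5 == 2}   # remainder 2 when divided by 5
odd = {n for n in range(1, 101) if n % 2 == 1}  # remainder 1 when divided by 2
print(sorted(r5 & odd))  # the "orange" numbers: 7, 17, 27, ... (7 mod 10)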
Try a few examples of your own and try to predict what will happen in each case. | {"url":"http://nrich.maths.org/1783/clue?nomenu=1","timestamp":"2014-04-20T05:44:16Z","content_type":null,"content_length":"4104","record_id":"<urn:uuid:aa23d018-0db3-458b-895d-c228d247eb44>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00663-ip-10-147-4-33.ec2.internal.warc.gz"} |
Special Ways to Input Expressions
Mathematica allows you to use special notation for many common operators. For example, although internally Mathematica represents a sum of two terms as Plus[x, y], you can enter this expression in the much more convenient form x + y.
The Mathematica language has a definite grammar that specifies how your input should be converted to internal form. One aspect of the grammar is that it specifies how pieces of your input should be grouped. For example, if you enter an expression such as a x + b, the Mathematica grammar specifies that this should be considered, following standard mathematical notation, as (a x) + b rather than a (x + b). Mathematica chooses this grouping because it treats the operator * as having a higher precedence than +. In general, the arguments of operators with higher precedence are grouped before those of operators with lower precedence.
You should realize that absolutely every special input form in Mathematica is assigned a definite precedence. This includes not only the traditional mathematical operators, but also forms such as ->, or the semicolons used to separate expressions in a Mathematica program.
The table in "Operator Input Forms" gives all the operators of Mathematica in order of decreasing precedence. The precedence is arranged, where possible, to follow standard mathematical usage, and to
minimize the number of parentheses that are usually needed.
You will find, for example, that relational operators such as == have lower precedence than arithmetic operators such as +. This means that you can write expressions such as x + y == z without using parentheses.
There are nevertheless many cases where you do have to use parentheses. For example, since ; has a lower precedence than =, you need to use parentheses to write x = (a; b). Mathematica interprets the expression x = a; b as (x = a); b. In general, it can never hurt to include extra parentheses, but it can cause a great deal of trouble if you leave parentheses out, and Mathematica interprets your input in a way you do not expect.
f[x, y] standard form
x ~ f ~ y infix form
f @ x prefix form
x // f postfix form
Four ways to write expressions in Mathematica.
There are several common types of operators in Mathematica. The + in x + y is an "infix" operator. The - in -p is a "prefix" operator. Even when you enter an expression such as f[x, y], Mathematica allows you to do it in ways that mimic infix, prefix and postfix forms.
You will often want to add functions like N as "afterthoughts", and give them in postfix form.
You should notice that // has very low precedence. If you put //f at the end of any expression containing arithmetic or logical operators, the f is applied to the whole expression. So, for example, x + y //f means f[x + y], not x + f[y].
The prefix form @ has a much higher precedence. f @ x + y is equivalent to f[x] + y, not f[x + y]. You can write f[x + y] in prefix form as f @ (x + y). | {"url":"http://reference.wolfram.com/mathematica/tutorial/SpecialWaysToInputExpressions.html","timestamp":"2014-04-16T10:17:48Z","content_type":null,"content_length":"36808","record_id":"<urn:uuid:ee9b7332-f589-4681-8ffe-9e69d8835610>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
Step 1:
Estimate Oxygen Consuption (VO2):
Enter the following for VO2 estimation:
height (cm)
weight (kg)
age (yrs)
heart rate (bpm)
Choose Gender:
Estimated VO2: (ml/min)
Measured VO2:(ml/min)
(If this box is filled, the value will be used in calculations. If left blank, the estimated VO2 will be used instead)
Body Surface Area (BSA):
Calculated BSA: (m^2)
(please enter height and weight above)
Indexed VO2: (ml/min/m2)
Examples:
normal circulation
left to right shunt
right to left shunt
bidirectional shunt
Step 2:
Enter Saturation and Partial Pressure:
Saturation --- PO[2]
(%) (mmHg) in SVC
(%) (mmHg) in IVC
(%) (mmHg) in PA
(%) (mmHg) in PV
(%) (mmHg) in Ao
To Convert: (kPa x 7.5 = mmHg)
Hgb (g/dl). Note: Effect of O2 is larger when the Hgb is LOW.
Step 3: Mixed Venous Oxygen:
Resulting MV:(%) (mmHg)
(MV PO[2] is calculated by the same method)
Step 4:
Pressures (for resistance only)
mean RA (mmHg)
mean PA (mmHg)
mean LA (mmHg)
mean Ao (mmHg)
Step 5:
Flows (Q):
Qpulmonic (L/min)
Qsystemic (L/min)
Qef (L/min)
R-->L shunt (L/min)
L-->R shunt (L/min)
Qp/Qs ratio
Systemic shunt fraction
Pulmonic shunt fraction
Cardiac Index (CI):
Pulmonic CI (L/min/m^2)
Systemic CI (L/min/m^2)
Effective CI (L/min/m^2)
Resistance (PVR/SVR):
PVR (wood units, indexed= u-m2)
PVR (dynes/sec/cm^-5)
SVR (wood units, indexed= u-m2)
SVR (dynes/sec/cm^-5)
Stroke Volume/Index (SVI):(Will only calculate if shunt volume is less than 0.5 L/min)
Pulmonic:(ml/beat) (ml/beat/m^2)
Systemic:(ml/beat) (ml/beat/m^2) | {"url":"http://www7.rbht.nhs.uk/flowcalcSupplO2.asp","timestamp":"2014-04-21T07:22:03Z","content_type":null,"content_length":"10867","record_id":"<urn:uuid:48f9c647-adea-4dd3-9da2-a197d49da7ff>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00251-ip-10-147-4-33.ec2.internal.warc.gz"} |
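The arithmetic behind the form above can be sketched in a few lines of Python. The mixed-venous formula (3*SVC + IVC)/4 and the 1.36 mL/g O2-carrying capacity of hemoglobin are standard conventions assumed here, not values read off this page, and dissolved O2 from the PO2 entries is ignored:

def o2_content(sat_pct, hgb_g_dl):
    # O2 content in mL O2 per litre of blood (bound O2 only)
    return 1.36 * hgb_g_dl * (sat_pct / 100.0) * 10.0

def shunt_study(vo2, hgb, svc, ivc, pa, pv, ao):
    mv = (3 * svc + ivc) / 4.0                               # mixed venous saturation, %
    qp = vo2 / (o2_content(pv, hgb) - o2_content(pa, hgb))   # pulmonic flow, L/min
    qs = vo2 / (o2_content(ao, hgb) - o2_content(mv, hgb))   # systemic flow, L/min
    return mv, qp, qs, qp / qs

mv, qp, qs, ratio = shunt_study(vo2=250, hgb=14, svc=70, ivc=74, pa=71, pv=98, ao=97)
print(f"MV {mv:.0f}%  Qp {qp:.2f}  Qs {qs:.2f}  Qp/Qs {ratio:.2f}")

With these illustrative normal-circulation numbers, Qp/Qs comes out near 1, as expected when there is no shunt.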
Poisson Process
March 5th 2007, 02:40 PM
Poisson Process
Satellites are launched into space at times distributed according to a Poisson process with rate lambda. Each satellie independently spends a random time (having distribution G) in space before
falling to the ground. Find the probability that none of the satellites in the air at time t was launched before time s, where s < t.
Can someone go through this question step by step. Thanks. =]
March 9th 2007, 12:09 AM
Satellites are launched into space at times distributed according to a Poisson process with rate lambda. Each satellie independently spends a random time (having distribution G) in space before
falling to the ground. Find the probability that none of the satellites in the air at time t was launched before time s, where s < t.
Can someone go through this question step by step. Thanks. =]
Restate the question as finding the probability P(s,t) that all satellites launched before time s fall by time t.
Let G(t) be the probability a satellite has life less than t. Then G(t-x) is the probability a satellite launched at time x falls by time t. It is a feature of a Poisson process that the events
(here launch times) are uniformly distributed on any bounded subset of the event space. So for one satellite launched at an unknown time during an interval [a,s], say, the probability it falls by
time t is
F(a) = (Integral from a to s of G(t-x) dx)/(s-a).
For n satellites launched at unknown times during [a,s], the probability they all fall by time t is F(a)^n.
Another feature of a Poisson process with parameter L (short for lambda) is that the number of events occurring on any subset of the event space of size b has a Poisson distribution with parameter Lb. So the probability that the number of satellites launched during [a,s] is n is exp(-L(s-a))(L(s-a))^n/n!. Multiplying by F(a)^n yields the probability that n satellites are launched
during [a,s] and they all fall by time t:
exp(-L(s-a))(L(s-a))^n/n! x F(a)^n = exp(-L(s-a))(L(s-a)F(a))^n/n!.
Summing this for n >= 0 yields the probability that all satellites launched during [a,s] fall by time t:
sum from n=0 to infinity of exp(-L(s-a))(L(s-a)F(a))^n/n!
= exp(-L(s-a)) sum from n=0 to infinity of (L(s-a)F(a))^n/n!
= exp(-L(s-a))exp(L(s-a)F(a))
= exp(L(s-a)F(a) - L(s-a))
= exp(L(s-a)(Integral from a to s of G(t-x) dx)/(s-a) - L(s-a))
= exp(L x Integral from a to s of (G(t-x) - 1) dx).
Taking the limit a -> -infinity yields the desired probability that all satellites launched before time s fall by time t:
P(s,t) = exp(L x Integral from -infinity to s of (G(t-x) - 1) dx).
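A quick Monte Carlo check of this formula, as a sketch assuming an exponential lifetime distribution G with rate mu (so G(x) = 1 - exp(-mu x)) and a large negative a standing in for -infinity:

import math, random

lam, mu, s, t, a = 1.5, 1.0, 2.0, 3.0, -40.0

def trial():
    x = a
    while True:
        x += random.expovariate(lam)        # next launch time
        if x >= s:
            return 1                        # nothing launched before s is up at t
        if x + random.expovariate(mu) > t:  # this satellite survives past t
            return 0

est = sum(trial() for _ in range(200_000)) / 200_000
exact = math.exp(-(lam / mu) * (math.exp(-mu * (t - s)) - math.exp(-mu * (t - a))))
print(est, exact)                           # the two numbers should agree closely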
feiyingx, would you please let us know whether this is right or wrong when you get the answer? Thanks.
March 9th 2007, 07:24 AM
That was very helpful! Thanks =D
I'll let you guys know when the solution is posted. Thanks =]
March 10th 2007, 10:24 AM
I just got the solutions and it is exact as you posted. =] | {"url":"http://mathhelpforum.com/advanced-statistics/12207-poisson-process-print.html","timestamp":"2014-04-17T22:08:00Z","content_type":null,"content_length":"7122","record_id":"<urn:uuid:36ff4cfc-d0e7-48b8-a4a0-ce033d27a3e7>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00089-ip-10-147-4-33.ec2.internal.warc.gz"} |
An Introduction to String Theory
The solution to this apparent contradiction seems almost obvious once you see it: the open string endpoints must be stuck on some sort of 25 (or 9) dimensional "membrane" while the string interiors
(open or closed) can always move in 26 (or 10) dimensions. If more "compact" dimensions are small, the dimensions of the membranes will go down as the string endpoints have fewer and fewer directions
in which they can move.
(A note on terminology: Because the word "membrane" usually implies two dimensions, string theorists use the term "brane" for the more general case. In one of the bad puns adored by physicists, a
brane that extends in p directions of space is called a "p-brane". And because these particular branes arise when the boundary points of open strings cannot move in certain directions, they are called
"Dirichlet branes" after the technical term "Dirichlet boundary conditions" that describes that situation, or "D-branes" for short.)
It turns out that these various dimensional p-branes are not just a mathematical tool but are independent objects in the theory in their own right. It is possible to calculate their masses, charges,
and the kinds of vibrations they can carry, all from the equations of string theory that required them to exist in the first place. Their discovery in the early 1990s opened up vast new realms of
phenomena in string theory to study, and has been instrumental in almost all of the field's progress in the past decade.
Only after D-branes were understood were the "particle-like" properties of open string endpoints really taken seriously. String endpoints act like particles in a universe defined by the brane (and
thus with p space dimensions), and the "photon-like" open strings that we saw even provide forces on the brane like electromagnetism. (Closed strings provide gravity, but as closed strings they can
move away from the brane, too.) This picture was the inspiration for "brane world" theories, which suggest that the universe we observe is actually just a 3-brane in a higher dimensional space.
Copyright © 2004 by Steuard Jensen. | {"url":"http://www.slimy.com/~steuard/research/MITClub2004/slide34.html","timestamp":"2014-04-20T15:51:49Z","content_type":null,"content_length":"5176","record_id":"<urn:uuid:d2dc32b1-80db-466c-bb1b-d3626673a213>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00351-ip-10-147-4-33.ec2.internal.warc.gz"} |
Redundancy of Variables in CLP(R)
- In Proceedings of the Joint International Conference and Symposium on Logic Programming , 1996
"... Constraint Logic Programming (CLP) languages extend logic programming by allowing constraints from different domains such as real numbers or Boolean functions. They have proved to be ideal for
expressing problems that require interactive mathematical modelling and complex combinatorial optimization ..."
Cited by 9 (2 self)
Add to MetaCart
Constraint Logic Programming (CLP) languages extend logic programming by allowing constraints from different domains such as real numbers or Boolean functions. They have proved to be ideal for
expressing problems that require interactive mathematical modelling and complex combinatorial optimization problems. However, CLP languages have mainly been considered as research systems, useful for
rapid prototyping, but not really competitive with more conventional programming languages when performance is crucial. One promising approach to improving the performance of CLP systems is the use
of powerful program optimizations to reduce the cost of constraint solving. We extend work in this area by describing a new optimizing compiler for the CLP language CLP(R). The compiler implements
six powerful optimizations: reordering of constraints, bypass of the constraint solver, splitting and dead code elimination, removal of redundant constraints, removal of redundant variables, and
specialization of...
- Constraint Processing. Springer-Verlag, LNCS 923 , 1995
"... Finite domain constraint problems with hidden variables are very natural to use, for example, when one has to deal with a complex real-life situation, described as a set of constraints over a
set of variables, and desires to (re-)use such description several times selecting each time a different sub ..."
Cited by 8 (3 self)
Add to MetaCart
Finite domain constraint problems with hidden variables are very natural to use, for example, when one has to deal with a complex real-life situation, described as a set of constraints over a set of
variables, and desires to (re-)use such description several times selecting each time a different subset of variables of interest. In this paper we study finite domain constraint problems with hidden
variables and the possible redundancy of some of the hidden variables. Here for redundancy we mean that the elimination of such variables, together with all the constraints connecting them, does not
change the set of solutions of the given problem. We propose several sufficient conditions for variable redundancy and we develop algorithms, based on such conditions, which remove the variables
found to be redundant. This, combined with other preprocessing techniques which remove other kinds of redundancy (tuple redundancy, such as the local consistency algorithms, or also constraint
redundancy), c...
, 1994
"... A number of Constraint logic Programming systems, including CLP(R) and Prolog III, decide simultaneous linear inequalities as part of the fundamental operational step of constraint solving.
While this can contribute tremendously to the usefulness of the systems, it is computationally quite expensive ..."
Cited by 5 (4 self)
Add to MetaCart
A number of Constraint logic Programming systems, including CLP(R) and Prolog III, decide simultaneous linear inequalities as part of the fundamental operational step of constraint solving. While
this can contribute tremendously to the usefulness of the systems, it is computationally quite expensive. Non-ground inequalities must generally be tested for consistency with the collected
constraint set and then added to it, increasing its size, and thus making the next such test more expensive. Future redundant inequalities in a program are those that are guaranteed to be subsumed
after no more than one subsequent procedure call, usually in the context of a recursive procedure. It has been noted that such inequalities need only be tested for consistency with the current
constraint set, thus resulting in dramatic savings in execution speed and space usage. In this paper we generalize the notion of future redundancy in a number of ways and thus broaden its
applicability. Thus we show how to d...
"... Constraint Logic Programming (CLP) is a recent innovation in programming language design. CLP languages extend logic programming by allowing constraints from different domains such as real
numbers or Boolean functions. This gives considerable expressive power and flexibility and CLP programs have pr ..."
Cited by 4 (3 self)
Add to MetaCart
Constraint Logic Programming (CLP) is a recent innovation in programming language design. CLP languages extend logic programming by allowing constraints from different domains such as real numbers or
Boolean functions. This gives considerable expressive power and flexibility and CLP programs have proven to be a high-level programming paradigm for applications based on interactive mathematical
modeling. These advantages, however, are not without cost. Implementations of CLP languages must include expensive constraint solving algorithms tailored to the specific domains. Indeed, performance
of the current generation of CLP compilers and interpreters is one of the main obstacles to the widespread use of CLP. Here we outline the design of a highly optimizing compiler for CLP(R) , a CLP
language which extends Prolog by allowing linear arithmetic constraints. This compiler is intended to overcome the efficiency problems of the current implementation technology. The main innovation in
the comp...
- Department of Computer and Information Science, The Ohio State University , 1994
"... A central issue in the optimizing compilation of Constraint Logic Programming (CLP) languages is how to compile away as much general constraint solving as possible. Most such work relies on
obtaining mode and type information by global analysis, and uses it to generate specialized code for individua ..."
Cited by 4 (1 self)
Add to MetaCart
A central issue in the optimizing compilation of Constraint Logic Programming (CLP) languages is how to compile away as much general constraint solving as possible. Most such work relies on obtaining
mode and type information by global analysis, and uses it to generate specialized code for individual constraints and calls, often with the aid of multiple specialization. Some recent work has
augmented these techniques with procedure-level analysis of the inter-relationships between constraints, to detect constraints that subsume other constraints, and variables that cease to be reachable
at some point in a computation. In combination, these techniques have been shown to dramatically improve performance for a number of programs. Here we continue this line of investigation by
considering a class of programs that accumulate and simplify systems of linear arithmetic constraints. The programs contain procedures that relate their parameters by an affine transform. For some
calling patterns, th...
- In Proceedings of the 4th International Conference on Principles and Practice of Constraint Programming, CP`98, number 1520 in Lecture Notes in Computer Science , 1998
"... . During the evaluation of a constraint logic program, many local variables become inaccessible, or dead . In Prolog and other programming languages, the data bound to local variables can be
removed automatically by garbage collection. The case of CLP is more complex, as the variables may be involve ..."
Cited by 3 (0 self)
Add to MetaCart
. During the evaluation of a constraint logic program, many local variables become inaccessible, or dead . In Prolog and other programming languages, the data bound to local variables can be removed
automatically by garbage collection. The case of CLP is more complex, as the variables may be involved in several constraints. We can consider dead variables to be existentially quantified. Removing
an existential variable from a set of constraints is then a problem of quantifier elimination, or projection. Eliminating variables not only allows recovery of space but also can decrease the cost of
further consistency tests. Surprisingly, the existing systems do not exploit these advantages. Instead, the primary use of projection is as a mechanism for obtaining answer constraints. In this
paper, we will give a general system architecture for automatic early projection and specify the heuristics for CLP(R) together with an in-situ removal method. We then show the effectiveness of early
- Department of Computer and Information Science, The Ohio State University , 1994
"... We study the systematic development of Constraint Logic Programs from the viewpoint of Skeletons and Techniques as described by Kirschenbaum and Sterling. We describe a number of fundamental
skeleton classes for CLP, and generalize the notion of skeletons to deal with non-structural recursion. Then ..."
Cited by 3 (3 self)
Add to MetaCart
We study the systematic development of Constraint Logic Programs from the viewpoint of Skeletons and Techniques as described by Kirschenbaum and Sterling. We describe a number of fundamental skeleton
classes for CLP, and generalize the notion of skeletons to deal with non-structural recursion. Then we describe a range of useful techniques for extending these skeletons. Furthermore, we introduce
important classes of techniques that alter the control flow of skeletons in certain well-defined and desirable ways. This work represents a step towards understanding how to develop complex CLP
programs easily, and is expected to contribute to the adoption of CLP for applications projects. It may also lead to the development of semi-automated program development tools. Finally, it helps to
justify a substantial body of present work on CLP compiler optimizations that depends on the procedure level structure of programs. A preliminary version of this paper appears in the Proceedings of
the Inte...
, 1995
"... In most Constraint Logic Programming (CLP) languages, procedures can be transformed to improve the efficiency of constraint solving for a particular set of calling patterns. In particular, it is
often possible to take advantage of groundness information to replace a considerable amount of constrai ..."
Cited by 1 (0 self)
Add to MetaCart
In most Constraint Logic Programming (CLP) languages, procedures can be transformed to improve the efficiency of constraint solving for a particular set of calling patterns. In particular, it is
often possible to take advantage of groundness information to replace a considerable amount of constraint solving with ground imperative computation. We present an efficient algorithm for identifying
the specializations of a procedure that allow such optimization. This algorithm generates efficiently a concise representation of information flow in a procedure. This representation can be used to
produce a set of calling patterns for which specialization is likely to be fruitful, together with the suitably transformed procedure for each specialization. 1 Introduction Many Constraint Logic
Programming (CLP) [8] languages incorporate complex and potentially expensive constraint solving as a basic operational step. However, in most of them constraints can be solved easily and efficiently | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1961295","timestamp":"2014-04-19T01:16:45Z","content_type":null,"content_length":"38506","record_id":"<urn:uuid:45639f29-6208-48a4-bc1d-3ff85b27fd93>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00412-ip-10-147-4-33.ec2.internal.warc.gz"} |
Berwyn, IL Prealgebra Tutor
Find a Berwyn, IL Prealgebra Tutor
...Currently, our school implements a structured executive functioning program for all freshmen. I have seven years of high school coaching experience at all levels, in addition to being an
all-conference high school basketball player. I serve as the college counselor at our high school, so I have...
20 Subjects: including prealgebra, English, Spanish, grammar
...I have also developed an efficient and effective approach to significantly expanding vocabulary knowledge that I used for my own test preparation and that I recommend to everyone I tutor. MY
RECENT TEST SCORES: GMAT: 760 GRE: Quantitative 168/170 (perfect 800 on earlier version) Verbal 168/170...
38 Subjects: including prealgebra, Spanish, reading, statistics
...I often refer to elementary (K-6th) methods to break things down for my students. I hold a master's degree and am proficient especially in English language skills and elementary math. I am
currently employed as a resource teacher, teaching study skills to high school students.
15 Subjects: including prealgebra, English, reading, writing
...I hope to foster your motivation to learn. My biggest accomplishment as a tutor was helping a child with ADHD to organize a given reading assignment. He remained focused on the task.
9 Subjects: including prealgebra, reading, biology, algebra 1
...I learned mathematics thinking it was a game and things to discover, not a lesson to be studied or learned. I believe many students have difficulty with mathematics because they take it too seriously and become intimidated. I want to relieve the students of the stress and help them to discover the fun of working a problem and the exciting pleasure of solving difficult problems.
5 Subjects: including prealgebra, statistics, algebra 1, algebra 2
Westchester prealgebra Tutors | {"url":"http://www.purplemath.com/Berwyn_IL_prealgebra_tutors.php","timestamp":"2014-04-19T19:50:33Z","content_type":null,"content_length":"24097","record_id":"<urn:uuid:e44c420e-00c8-45d3-a049-0082f3496d54>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00215-ip-10-147-4-33.ec2.internal.warc.gz"} |
Inverse Variation word problems [Archive] - Free Math Help Forum
It takes 4 hours for 5 painters to paint a house. How long would it take 9 painters to do the job?
I get confused easily here. Please help me on how to do most of the problem, or at least just tell me how to start off with the problem.
05-11-2006, 07:52 AM
It takes 4*5 painter-hours to paint a house.
05-11-2006, 09:58 AM
It takes 4 hours for 5 painters to paint a house. How long would it take 9 painters to do the job?
I get confused easily here. Please help me on how to do most of the problem, or at least just tell me how to start off with the problem.
Can you see that this is an inverse relationship? As the number of painters goes UP, the number of hours goes DOWN.
The basic pattern for an inverse variation is
xy = k
where "k" represents the constant of variation.
Now, suppose we let x = number of painters and y = number of hours. Since it takes 5 painters a total of 4 hours, we can write
5*4 = k
Ok...now you know what k is, so you can write the specific formula for this situation by replacing "k" with the value you got above. And you'll have an equation you can use to find the number of
hours required by ANY NUMBER of painters....use the equation to find the number of hours it will take for 9 painters.
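A couple of lines of Python make the same check, using the xy = k relationship above (illustrative only):

painters, hours = 5, 4
k = painters * hours                        # 20 painter-hours per house
for x in (5, 9, 10):
    print(x, "painters ->", k / x, "hours") # 9 painters -> 20/9, about 2.22 hours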
05-11-2006, 12:24 PM
It takes 4 hours for 5 painters to paint a house. How long would it take 9 painters to do the job?
I get confused easily here. Please help me on how to do most of the problem, or at least just tell me how to start off with the problem.
The following should give you enough information for solving combined work problems.
If it takes me 5 hours to paint a room and you 3 hours, how long will it take to paint it together? >>
Method 1:
1--A can paint the house in 5 hours.
2--B can paint the house in 3 hours.
3--A's rate of painting is 1 house per A hours (5 hours) or 1/A (1/5) houses/hour.
4--B's rate of painting is 1 house per B hours (3 hours) or 1/B (1/3) houses/hour.
5--Their combined rate of painting is 1/A + 1/B (1/5 + 1/3) = (A+B)/AB (8/15) houses /hour.
6--Therefore, the time required for both of them to paint the 1 house is 1 house/(A+B)/AB houses/hour = AB/(A+B) = 5(3)/(5+3) = 15/8 hours = 1 hour-52.5 minutes.
Note - T = AB/(A + B), where AB/(A + B) is one half the harmonic mean of the individual times, A and B.
Method 2:
Consider the following diagram -
[Diagram: area painted versus time. A's line rises from the lower left while B's line falls from the upper left; they cross after x hours, at which point A has painted y and B has painted (c - y) of the total area c.]
1--Let c represent the area of the house to be painted.
2--Let A = the number of hours it takes A to paint the house.
3--Let B = the number of hours it takes B to paint the house.
4--A and B start painting at the same point but proceed in opposite directions around the house.
5--Eventually they meet in x hours, each having painted an area proportional to their individual painting rates.
6--A will have painted y square feet and B will have painted (c-y) square feet.
7--From the figure, A/c = x/y or Ay = cx.
8--Similarly, B/c = x/(c-y) or By = Bc - cx.
9--From 7 & 8, y = cx/A = (Bc - cx)/B from which x = AB/(A+B), one half of the harmonic mean of A and B.
I think this should give you enough of a clue as to how to solve your particular problem.
It takes 4 hours for 5 painters to paint a house. How long would it take 9 painters to do the job?
I get confused easily here. Please help me on how to do most of the problem, or at least just tell me how to start off with the problem.
Can you see that this is an inverse relationship? As the number of painters goes UP, the number of hours goes DOWN.
The basic pattern for an inverse variation is
xy = k
where "k" represents the constant of variation.
Now, suppose we let x = number of painters and y = number of hours. Since it takes 5 painters a total of 4 hours, we can write
5*4 = k
Ok...now you know what k is, so you can write the specific formula for this situation by replacing "k" with the value you got above. And you'll have an equation you can use to find the number of
hours required by ANY NUMBER of painters....use the equation to find the number of hours it will take for 9 painters.
So.... 5*4=20, so the equation to solve is 9*y=20... The answer I got was 2.2. Is it correct?!?
Powered by vBulletin® Version 4.2.2 Copyright © 2014 vBulletin Solutions, Inc. All rights reserved. | {"url":"http://www.freemathhelp.com/forum/archive/index.php/t-43759.html","timestamp":"2014-04-19T07:33:47Z","content_type":null,"content_length":"8757","record_id":"<urn:uuid:4a0df52d-1c2f-4872-b27c-85149bc1a999>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00491-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Low Complexity 2D Pattern Synthesis Algorithm for Cylindrical Array
International Journal of Antennas and Propagation
Volume 2013 (2013), Article ID 352843, 6 pages
Research Article
A Low Complexity 2D Pattern Synthesis Algorithm for Cylindrical Array
Department of Communication Engineering, Hefei University of Technology, Hefei, Anhui 230009, China
Received 12 August 2013; Accepted 11 October 2013
Academic Editor: Atsushi Mase
Copyright © 2013 Chao Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
This paper proposes a 2D pattern synthesis algorithm for cylindrical array. According to the geometric characteristic of cylinder, we can regard a cylindrical array as an equivalent linear array
whose elements are identical circular subarrays. Therefore, the beam pattern can be obtained by the product of the array factor of linear array and beam pattern of circular subarray. Then, the 2D
beamforming can be realized by two 1D beamforming processes. We can prove that the complex excitation vector of a cylindrical array is the Kronecker product of linear array’s weight vector and
circular array’s weight vector. By this algorithm of decomposition and reconstruction, the computational complexity of 2D beamforming could be significantly reduced. Finally, simulation results
further illustrate the validity of the proposed method.
1. Introduction
Conformal arrays can be commodiously mounted on a curved surface of platforms, such as aerial vehicles and fighter aircraft, and provide aerodynamic shape compatible with the corresponding fuselage.
This conformal design leads to an excellent aerodynamic performance and thus has an extremely wide range of applications due to its advantages of low radar cross section (RCS), large surveillance
coverage, and volume saving [1]. Although conformal arrays are widely used in many areas, the pattern synthesis of such arrays is still a challenge because of the complexity induced by the unusual
configuration. For the arrays, the simple characteristics associated with linear or planar arrays do not hold, and most of the conventional methods would not work.
In recent years, pattern synthesis of conformal arrays has attracted increasing attention and a wide variety of techniques have been developed for the pattern synthesis of conformal arrays. In [2
], Bucci et al. propose a method to solve the pattern synthesis problem of arbitrary planar array. They alternately found the projection on the upper and lower bounds of the desired pattern to
control the shape of main lobe and the side lobe level (SLL). Afterwards, the method was extended to conformal arrays. Considering the embedded element radiation pattern, Steyskal used it in a
conformal wing array [3]. Tseng and Griffiths provide a new iterative method in [4] to achieve the desired beam pattern of arbitrary arrays by controlling the side lobe peaks. But the number of side
lobe peaks that it can control is limited. In [5], Vaskelainen presents an iterative least-squares method by assigning different weight values to different directions. He further presents a modified
least-squares optimization method with linear constraints in [6] to obtain a prescribed shape of main lobe. In [7], Olen and Compton propose another method based on adaptive array theory. In this
method, they first set excessive artificial interference signals in observation area outside the main lobe and then adaptively adjust the intensities of these interference signals to control the side
lobe level. In [8], Zhou and Ingram improve the method above. They only allow iterations to occur in the main lobe and among side lobe peaks. The method can provide more convenient main lobe shape
control, but finding the location of side lobe peaks might increase computational complexity. In [9], Guo et al. use linearly constrained minimum variance (LCMV) criterion to achieve the desired beam
pattern with a flat top main lobe. Dohmen et al. give a synthesis method of conformal array in [10] to design both the copolarized and cross-polarized patterns. In [11], Zou et al. present an
adaptive beamforming method with low level of polarization components based on geometric algebra. The synthesis problem of an arbitrary array antenna can be seen as a general optimization problem.
Therefore, intelligent optimization algorithms can also be used to solve this problem. Genetic algorithms (GA) are applied to complete this kind of optimization in [12, 13]. Simulated annealing and
particle swarm optimization (PSO) also have good performance of global optimization and are used in the area of pattern synthesis [14, 15].
Most of these methods are applied to general conformal arrays and need an iterative process. The computational complexity is acceptable in the 1D case. However, the amount of calculation will
significantly increase when they are applied to 2D beamforming. Some researchers have realized this problem and began to find fast algorithms [16]. Nevertheless, the structure information is not
taken into consideration. Most of the conformal arrays have regular structures which may be helpful to the array beamforming. In [17], the authors indicate that the cylindrical array can be treated
as a linear array while beamforming. However, they did not touch upon the 2D beamforming problem.
In this paper, we present a new method to solve the 2D beam pattern synthesis for cylindrical arrays. According to the geometry feature of cylinder, we treat the cylindrical array as an equivalent
linear array whose elements are identical circular subarrays. Since the linear array and circular array are orthogonal to each other, 2D beam pattern of the array in elevation and azimuth direction
will be mainly affected by linear and circular arrays, respectively. Therefore, the 2D beamforming process can be realized by two individual 1D pattern syntheses of the circular and linear arrays
successively. Through this process, the new method can greatly decrease the amount of computation of 2D beamforming for cylindrical arrays.
2. Problem Formulation
Consider a cylindrical array consisting of MN elements; the sketch map can be seen in Figure 1. M and N are the number of elements contained in each line and circle. O-XYZ is the global Cartesian coordinate system, while o_n-x_n y_n z_n is the local system of the nth element, and φ_n is the central angle of the nth element. The far-field beam pattern in the generic direction (φ, θ) can be written as

F(φ, θ) = Σ_{n=1}^{MN} I_n g_n(φ, θ) exp(jk r_n · û(φ, θ)),   (1)

in which φ and θ are the azimuth and elevation, I_n is the complex excitation voltage of the nth element, k is the phase constant, and g_n(φ, θ) is the antenna gain of the nth element with its local direction (φ'_n, θ'_n) associated with the global direction (φ, θ). r_n is the position vector of the nth element, while û(φ, θ) is the unit vector in the direction (φ, θ).
Since the array is conformal to a curved surface, the antenna elements generally direct their radiation beams toward different directions. Therefore, the transformation between the global coordinate system and the element local coordinate system needs to be carried out to calculate the contribution of each element to the whole conformal array radiation. The Euler rotation matrix is a very useful tool for this spatial rotation transformation [18]: the rotation matrix is the product of three elementary rotations with Euler angles (α, β, γ) about the coordinate axes, and the spatial rotation transformation follows by applying this matrix to the direction vector. For a cylindrical array, the maximal radiation of an antenna is along the normal direction at most of the time, so the coordinate transformation reduces to a rotation about the cylinder axis. Without considering the coordinate shift, the only nonzero Euler rotation angle in this case is φ_n, the corresponding central angle of the nth element.
3. 2D Beamforming for Cylindrical Arrays
In a planar array, the array factor and the element pattern are separable, but this condition does not hold for a general conformal array. Considering the structural characteristic of the cylindrical array, namely that it consists of a series of identical circular arrays, the whole array can be seen as an equivalent linear array whose elements are these identical circular subarrays. Therefore, the array beam pattern can still be obtained by the principle of pattern multiplication, and the 2D beam pattern of the cylindrical array can be expressed by the following formula:
$$F(\theta,\phi)=F_l(\theta)\,F_c(\theta,\phi), \qquad (6)$$
where $F_l$ is the array factor of the linear array and $F_c$ is the beam pattern of the circular subarray, which plays the role of the element pattern here. Assume that the complex excitation vectors of the linear array and the circular array are $\mathbf{w}_l$ and $\mathbf{w}_c$, respectively. Then the beam patterns $F_l$ and $F_c$ can be written as
$$F_l(\theta)=\mathbf{w}_l^{H}\,\mathbf{a}_l(\theta), \qquad (7)$$
$$F_c(\theta,\phi)=\mathbf{w}_c^{H}\,\mathrm{diag}(g_1,\dots,g_N)\,\mathbf{a}_c(\theta,\phi), \qquad (8)$$
where $\mathbf{a}_l$ and $\mathbf{a}_c$ are the ideal steering vectors of the linear array and the circular array, and $\mathrm{diag}(g_1,\dots,g_N)$ denotes the diagonal matrix whose diagonal entries are the element gains toward the given direction. Define the equivalent steering vector $\mathbf{a}(\theta,\phi)=\mathbf{a}_l(\theta)\otimes\mathbf{a}_c(\theta,\phi)$; then, from (6), (7), and (8), the beam pattern in (1) can be rewritten as follows:
$$F(\theta,\phi)=\mathbf{w}^{H}\,\mathbf{a}(\theta,\phi), \qquad (9)$$
in which $\mathbf{a}$ is the steering vector of the whole cylindrical array, and its weight vector can be obtained as follows:
$$\mathbf{w}=\mathbf{w}_l\otimes\mathbf{w}_c, \qquad (10)$$
where "$\otimes$" denotes the Kronecker product.
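The factorization in (9) and (10) is easy to reproduce numerically. The following sketch (ours, not the paper's code) builds the cylindrical steering vector as a Kronecker product; the ring radius, the value N = 16, and the uniform placeholder weights are assumed for illustration:

import numpy as np

def cylinder_steering(theta, phi, M, N, d, radius, wavelength=1.0):
    # Steering vector of an M x N cylindrical array as the Kronecker
    # product of an axis (linear) part and a ring (circular) part.
    k = 2 * np.pi / wavelength
    z = np.arange(M) * d                       # element heights on the axis
    a_lin = np.exp(1j * k * z * np.cos(theta))
    phi_n = 2 * np.pi * np.arange(N) / N       # central angles on the ring
    a_circ = np.exp(1j * k * radius * np.sin(theta) * np.cos(phi - phi_n))
    return np.kron(a_lin, a_circ)

M, N = 8, 16                                   # N = 16 is an assumed value
w = np.kron(np.ones(M) / M, np.ones(N) / N)    # w = w_l (x) w_c, as in (10)
a = cylinder_steering(np.deg2rad(90.0), np.deg2rad(10.0),
                      M, N, d=0.5, radius=1.0)
pattern = np.vdot(w, a)                        # F = w^H a, as in (9)

Because (w_l (x) w_c)^H (a_l (x) a_c) = (w_l^H a_l)(w_c^H a_c), the value of `pattern` factors into the two 1D patterns, which is exactly what makes the decomposition below possible.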
Since the array factor of the linear array does not vary with azimuth, it can be simplified as $F_l(\theta)$. Moreover, since the linear array and the circular array are orthogonal to each other, the beam pattern in the azimuth direction is mainly determined by the circular subarrays. Therefore, the 2D beam pattern can be decomposed into two 1D beamforming processes, one in the azimuth direction and one in the elevation direction. In order to achieve the desired 2D beam pattern $F_d(\theta,\phi)$, we first carry out beamforming for the circular subarray to achieve the given SLL in azimuth, obtaining a weight vector $\mathbf{w}_c$ which satisfies the following equation:
$$F_c(\theta_0,\phi)=\mathbf{w}_c^{H}\,\mathrm{diag}(g_1,\dots,g_N)\,\mathbf{a}_c(\theta_0,\phi)\approx F_d(\theta_0,\phi). \qquad (11)$$
Then, $F_c$ is used as the element pattern in the beamforming for the linear array in the elevation direction, yielding a weight vector $\mathbf{w}_l$ which satisfies the following equation:
$$F(\theta,\phi_0)=\mathbf{w}_l^{H}\,\mathbf{a}_l(\theta)\,F_c(\theta,\phi_0)\approx F_d(\theta,\phi_0). \qquad (12)$$
The weight vector for the 1D pattern synthesis can be obtained based on an adaptive array method as follows [7]:
$$\mathbf{w}=R^{-1}\mathbf{a}(\theta_0),\qquad R=\sigma_n^{2}I+\sum_{i}\sigma_i^{2}\,\mathbf{a}(\theta_i)\mathbf{a}^{H}(\theta_i), \qquad (13)$$
where $\theta_0$ is the look direction, while $\sigma_n^{2}$ and $\sigma_i^{2}$, $i=1,2,\dots$, are the powers of the noise and the artificial interference signals, respectively. Through the following iteration, the optimal weight vector can be obtained to approach the desired beam pattern:
$$\sigma_i^{2}(m+1)=\sigma_i^{2}(m)+\mu\big[\,|F_m(\theta_i)|-|F_d(\theta_i)|\,\big], \qquad (14)$$
where $m$ denotes the $m$th iteration, $\mu$ is a step size, and $F_d$ is the desired beam pattern.
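A compact numerical sketch of this synthesis loop is given below; the exact power-update rule, the step size mu, and the unit noise power are our illustrative assumptions rather than the precise recursion of [7]:

import numpy as np

def adaptive_synthesis(a_look, a_interf, desired_db, iters=100, mu=1.0):
    # a_look: (n,) steering vector at the look direction.
    # a_interf: (n, q) steering vectors at the q artificial interferences.
    n = a_look.shape[0]
    sigma2 = np.ones(a_interf.shape[1])        # interference powers
    w = a_look.copy()
    for _ in range(iters):
        # Covariance of unit noise plus the artificial interferences.
        R = np.eye(n) + (a_interf * sigma2) @ a_interf.conj().T
        w = np.linalg.solve(R, a_look)         # w = R^{-1} a(theta_0)
        w = w / np.abs(np.vdot(w, a_look))     # unit look-direction gain
        level_db = 20 * np.log10(np.abs(w.conj() @ a_interf) + 1e-12)
        # Raise a power where the pattern exceeds the desired level,
        # lower it where the pattern is already below it.
        sigma2 = np.maximum(sigma2 + mu * (level_db - desired_db), 0.0)
    return w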
Since the radiation of an array element is not isotropic, some elements contribute little in the look direction; consequently, we ignore these elements in the beamforming process to further decrease the computational complexity. In most cases elements radiate outward around their normal directions, so we use only the elements in an area around the look direction, and the subarray becomes a circular arc array, as shown in Figure 2. The solid dots represent the elements used and the hollow ones represent the elements ignored.
From the above, the proposed algorithm can be summarized as follows.
(1) Consider a cylindrical array consisting of identical elements with identical pattern $g(\theta',\phi')$. Select the contributive elements as shown in Figure 2, and carry out the beamforming process for the circular arc array in azimuth with the elevation fixed at $\theta_0$. Through the iteration of (13) and (14), the weight vector $\mathbf{w}_c$ of the circular subarray and its 1D beam pattern are obtained.
(2) Calculate the beam pattern $F_c$ of the circular subarray by (8). Use it as the element pattern of the $M$-element linear array and similarly carry out the beamforming process in elevation with the azimuth fixed at $\phi_0$.
(3) After obtaining $\mathbf{w}_c$ and $\mathbf{w}_l$ for the circular and linear arrays, respectively, compute the weight vector of the cylindrical array by (10), and then the 2D beam pattern by (9) as well.
Note that the practically used array is a part of a complete cylindrical array. In fact, the proposed method can be applied to any conformal array with a similar structure, that is, any array that can be seen as an equivalent linear array whose elements are identical subarrays.
4. Results and Discussions
In this section, simulations are provided to illustrate the effectiveness of the proposed method. Consider a uniform cylindrical array with $M = 8$ elements in each line, as shown in Figure 1, with half-wavelength spacing between neighboring elements. Each element is assumed to have the same pattern function in its respective local coordinate system. Since the attenuation of this pattern exceeds 10 dB beyond a certain local angle, corresponding to a fixed angular sector in the global coordinate system, only the elements within that sector around the look direction are used in the uniform circular array beamforming, and the others are ignored. Without loss of generality, we fix a look direction; the number of elements actually used in the uniform circular array beamforming is then 11 in the following simulations. The desired beam pattern is specified by a main-lobe beamwidth between first nulls of 50° and a side lobe level of −40 dB. The angular region of interest is scanned in 1° steps.
First, we carry out beamforming for the 11-element arc array. The artificial interferences are spaced at 3° intervals (and likewise below). After 100 iterations, the weight vector $\mathbf{w}_c$ is obtained and its beam pattern is shown in Figure 3. The resulting $F_c$ is then used as the element pattern for the subsequent 8-element uniform linear array beamforming in the elevation direction. The process is similar to the above, with the iteration count again set to 100. Figure 4 shows the final 1D beam pattern in the elevation direction.
After obtaining the weight vectors $\mathbf{w}_c$ and $\mathbf{w}_l$, the whole weight vector of the practically used conformal array is calculated by (10), and the 2D beam pattern follows, as shown in Figure 5. Figures 6 and 7 show its projections in the azimuth and elevation directions, respectively. They coincide with the 1D beam patterns obtained above, which supports the preceding analysis. It can be seen from the figures that the final beam pattern successfully achieves the prescribed response.
The method in [9] is also applied to the 2D beamforming of the same cylindrical array under the same conditions. With the same 3° interval, a far larger number of artificial interferences is required, since they must now cover the full 2D angular grid. The result, shown in Figure 8, is similar to the above; however, the amount of computation is much greater than that of the proposed method. Figure 9 shows the normalized weights obtained by the two methods, where method 1 denotes the proposed method and method 2 denotes the method in [9].
In the pattern synthesis, the computation is dominated by the spatial rotation transformations, the updates of the artificial interference powers, and the matrix inversions. In our method, the whole process consists mainly of two 1D beamforming processes, with 61 interferences each for azimuth and for elevation. Assume that the number of iterations is L. Then, to obtain the final optimal weight vector, the rotation transformations, matrix inversions, and interference-power updates scale only with L and the small 1D problem sizes; moreover, the matrices to be inverted are small (11 × 11 and 8 × 8). Meanwhile, if the method in [9] is applied directly to the 2D pattern synthesis of this cylindrical array, the rotation transformations, matrix inversions, and interference-power updates must be carried out over the full 2D grid of interferences (on the order of 61 × 61 at the same interval), and the matrix to be inverted is 88 × 88. The number of artificial interferences and the size of the matrix are the main sources of the large amount of calculation; our method significantly reduces both and hence reduces the total cost. Moreover, as the number of array elements grows, the amount of computation increases rapidly for direct 2D beamforming algorithms, but not for our method. More scenarios were simulated to verify the proposed algorithm: with an Intel Core i5 CPU at 2.4 GHz and 4 GB of RAM, the elapsed times for obtaining the optimal weight vector with both the proposed method and the method in [9] are shown in Table 1.
The two columns of Table 1 give the elapsed times of the proposed method and of the method in [9], respectively. Although the elapsed time may be affected by the computing environment and by some parameters of the conformal array, there is no doubt that our method greatly reduces the amount of calculation without performance degradation.
5. Conclusion
In this paper, an efficient 2D beamforming method is proposed for cylindrical arrays. Exploiting its geometric characteristics, a cylindrical array can be seen as an equivalent linear array composed of identical circular subarrays, so that the principle of pattern multiplication can be applied and the 2D pattern synthesis can be realized by two 1D pattern syntheses, based on the linear and circular arrays, respectively. This method avoids the complicated direct calculation of the 2D beam pattern and significantly reduces the amount of computation. Furthermore, the proposed method can be applied to any similar conformal array which can be seen as a linear array whose elements are identical subarrays. The effectiveness of the algorithm has been illustrated by the above simulations.
This work is supported by China Postdoctoral Science Foundation Grant no. 20100480680 and the Natural Science Foundation of Anhui Province Grant no. 1208085QF105.
1. L. Josefsson and P. Persson, Conformal Array Antenna Theory and Design, IEEE Press, Piscataway, NJ, USA, 2006.
2. O. M. Bucci, G. Franceschetti, G. Mazzarella, and G. Panariello, "Intersection approach to array pattern synthesis," IEE Proceedings H, vol. 137, no. 6, pp. 349–357, 1990.
3. H. Steyskal, "Pattern synthesis for a conformal wing array," IEEE Aerospace Conference Proceedings, vol. 2, pp. 819–824, 2002.
4. C.-Y. Tseng and L. J. Griffiths, "A simple algorithm to achieve desired patterns for arbitrary arrays," IEEE Transactions on Signal Processing, vol. 40, no. 11, pp. 2737–2746, 1992.
5. L. I. Vaskelainen, "Iterative least-squares synthesis methods for conformal array antennas with optimized polarization and frequency properties," IEEE Transactions on Antennas and Propagation, vol. 45, no. 7, pp. 1179–1185, 1997.
6. L. I. Vaskelainen, "Constrained least-squares optimization in conformal array antenna synthesis," IEEE Transactions on Antennas and Propagation, vol. 55, no. 3, pp. 859–867, 2007.
7. C. A. Olen and R. T. Compton Jr., "A numerical pattern synthesis algorithm for arrays," IEEE Transactions on Antennas and Propagation, vol. 38, no. 10, pp. 1666–1676, 1990.
8. P. Y. Zhou and M. A. Ingram, "Pattern synthesis for arbitrary arrays using an adaptive array method," IEEE Transactions on Antennas and Propagation, vol. 47, no. 5, pp. 862–869, 1999.
9. Q. Guo, G. Liao, Y. Wu, and J. Li, "Pattern synthesis method for arbitrary arrays based on LCMV criterion," Electronics Letters, vol. 39, no. 23, pp. 1628–1630, 2003.
10. C. Dohmen, J. W. Odendaal, and J. Joubert, "Synthesis of conformal arrays with optimized polarization," IEEE Transactions on Antennas and Propagation, vol. 55, no. 10, pp. 2922–2925, 2007.
11. L. Zou, J. Lasenby, and Z. He, "Beamforming with distortionless co-polarisation for conformal arrays based on geometric algebra," IET Radar, Sonar and Navigation, vol. 5, no. 8, pp. 842–853, 2011.
12. T. Su and H. Ling, "Array beamforming in the presence of a mounting tower using genetic algorithms," IEEE Transactions on Antennas and Propagation, vol. 53, no. 6, pp. 2011–2019, 2005.
13. J. O. Yang, Q. R. Yuan, F. Yang, H. J. Zhou, Z. P. Nie, and Z. Q. Zhao, "Synthesis of conformal phased array with improved NSGA-II algorithm," IEEE Transactions on Antennas and Propagation, vol. 57, no. 12, pp. 4006–4009, 2009.
14. J. A. Ferreira and F. Ares, "Pattern synthesis of conformal arrays by the simulated annealing technique," Electronics Letters, vol. 33, no. 14, pp. 1187–1189, 1997.
15. Z. B. Lu, A. Zhang, and X. Y. Hou, "Pattern synthesis of cylindrical conformal array by the modified particle swarm optimization algorithm," Progress in Electromagnetics Research, vol. 79, pp. 415–426, 2008.
16. M. Comisso and R. Vescovo, "Fast iterative method of power synthesis for antenna arrays," IEEE Transactions on Antennas and Propagation, vol. 57, no. 7, pp. 1952–1962, 2009.
17. Z. Lin and H. Zishu, "A beamforming method for cylindrical array based on synthetic pattern of subarray," in Proceedings of the 2nd International Conference on Image Analysis and Signal Processing (IASP '10), pp. 667–670, April 2010.
18. T. Milligan, "More applications of Euler rotation angles," IEEE Antennas and Propagation Magazine, vol. 41, no. 4, pp. 78–83, 1999. | {"url":"http://www.hindawi.com/journals/ijap/2013/352843/","timestamp":"2014-04-16T12:30:31Z","content_type":null,"content_length":"224197","record_id":"<urn:uuid:b79c9bcf-a5b2-4635-97fe-5572419a8a0a>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00125-ip-10-147-4-33.ec2.internal.warc.gz"}
IB Physics/Relativity
H.1 Introduction to relativity
H.1.1 Describe what is meant by a frame of reference
A frame of reference is a point of view from which observations are made: what is heard, seen, touched, smelt, or tasted from that standpoint.
e.g. Sitting in a computer chair, you have a frame of reference in which the world appears stationary. A frame of reference in which the Sun is stationary would view you as moving around with the Earth.
H.1.2 Describe what is meant by a Galilean Transformation
A Galilean transformation relates two frames in the intuitive, everyday way: the equations do not involve relativistic effects.
H.1.3 Solve problems involving relative velocities using the Galilean transformation equations
This should be easy.
Position equation: x' = x - vt
Velocity equation: u' = u - v
e.g. You are sitting on a park bench. You see a bike and a car moving away from each other. The bike is moving at 5 m/s, the car at 20 m/s. How fast is the bike moving in the car's reference frame? (Ans: 25 m/s)
H.2 Concepts and postulates of special relativity
H.2.1 Describe what is meant by an inertial frame of reference
An inertial frame of reference is one moving with a constant velocity (i.e., not accelerating). This really isn't essential here, though you might notice a subtle circularity in this definition (just ignore it if you do).
H.2.2 State the two postulates of the special theory of relativity
• the speed of light in a vacuum is constant for all inertial observers.
• The laws of physics are the same for all inertial observers.
H.2.3 Discuss the concept of simultaneity
Consult a textbook (or search YouTube for "simultaneity"); this is best explained with diagrams.
In summary though, simultaneous events that take place at the same point in space will be simultaneous to all observers. However, events that take place at different points in space can be
simultaneous for one observer but not for another.
H.3 Relativistic Kinematics
H.3.1 Describe the concept of a light clock
Best described by diagram.
Imagine a clock in which light is bounced between two mirrors:

|mirror|   <-- light beam -->   |mirror|
Each time the light hits a mirror, a "tick" is registered. Since the speed of light is the same in all reference frames, this is the most accurate type of clock.
This seems simple for a stationary clock. However, in a reference frame where the clock moves, the light must travel a diagonal path that is longer than the straight path in the clock's rest frame. A "tick" therefore takes longer in one reference frame than in the other: time elapses at different rates for the two observers.
H.3.2 Define proper time interval
The proper time interval is the time between two events measured in the inertial frame in which both events occur at the same point in space.
H.3.3 Derive the time dilation formula
The time dilation formula can be derived using Pythagoras's theorem. Let the length of the light clock be l. In the clock's rest frame the light crosses it in time t = l/c. In a frame where the clock moves at speed v, the light travels the longer diagonal l' = ct' while the clock moves a horizontal distance vt'. Then:
l^2 + (vt')^2 = (ct')^2
Substituting l = ct and rearranging:
t' = t / sqrt(1 - v^2/c^2) = γt
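As a worked numerical illustration of this formula (ours, not from the original text; the 0.99c speed and the 2.2 μs muon lifetime are assumed example values), a short Python sketch:

from math import sqrt

def gamma(v, c=3.0e8):
    # Lorentz factor 1/sqrt(1 - v^2/c^2) for speed v in m/s
    return 1.0 / sqrt(1.0 - (v / c) ** 2)

v = 0.99 * 3.0e8            # an assumed example speed, 0.99c
print(gamma(v))             # ~7.09: the moving clock ticks ~7x slower
print(gamma(v) * 2.2e-6)    # a 2.2 us lifetime dilates to ~15.6 us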
H.3.4 Sketch and annotate a graph showing the variation with relative velocity of the Lorentz factor
H.3.5 Solve problems involving time dilation
H.3.6 Define proper length
H.3.7 Describe the phenomenon of length contraction
H.3.8 Solve problems involving length contraction
H.4 Some consequences of special relativity
H.4.1 Describe how the concept of time dilation leads to the "twin paradox"
H.4.2 Discuss the Hafele-Keating experiment
To test the theory of special relativity, scientists flew two atomic clocks around the world in opposite directions and compared them with a clock that remained stationary relative to the Earth's surface. One plane flew eastward and the other westward. When the planes returned, the eastward clock was found to be behind the surface clock (by 59 ns). Since the Earth spins to the east and the plane was travelling east relative to the Earth's surface, this clock had the greater velocity relative to the (approximately inertial) frame of the Earth's centre, and so it ticked a little more slowly. The westward clock ran faster than the surface clock, as it was moving more slowly relative to the Earth's centre. This experiment provided evidence of time dilation in excellent agreement with relativistic predictions.
H.4.3 Solve one-dimensional problems involving the relativistic addition of velocities
H.4.4 State the formula representing the equivalence of mass and energy
H.4.5 Define rest mass
The rest mass m_0 of an object is its mass measured in the frame in which it is at rest; equivalently, the rest energy E_0 = m_0c^2 is the energy required to create the object at rest.
H.4.6 Distinguish between the energy of a body at rest and its total energy when moving
H.4.7 Explain why no object can ever attain the speed of light in a vacuum
According to classical mechanics (F = ma), a force applied to an object constantly over a very long time would increase its speed without limit. However, this is not what happens: as the speed of the object increases, its relativistic mass increases, so the acceleration produced by a given force gradually decreases and the speed approaches, but never reaches, c. Only particles with no rest mass (such as the photon) can travel at the speed of light.
H.4.8 Determine the total energy of an accelerated particle
The total energy E, the momentum p, and the rest energy E_0 (= m_0c^2) are related by E^2 = p^2c^2 + E_0^2.
H.5 Evidence to support special relativity
H.5.1 Discuss muon decay as experimental evidence to support special relativity
H.5.2 Solve problems involving the muon decay experiment
H.5.3 Outline the Michelson-Morley experiment
H.5.4 Discuss the result of the Michelson-Morley experiment and its implication
H.5.5 Outline an experiment that indicates that the speed of light in vacuum is independent of its source
H.6 Relativistic momentum and energy
H.6.1 Apply the relation for the relativistic momentum p = γm_0u of particles
H.6.2 Apply the formula E_k = (γ-1)m_0c^2 for the kinetic energy of a particle
H.6.3 Solve problems involving relativistic momentum and energy
H.7 General relativity
H.7.1 Explain the difference between the terms gravitational mass and inertial mass
Gravitational mass is the mass that determines the force a body feels in a gravitational field; inertial mass is the mass that resists acceleration when any external force acts on a body (from F = ma). Experimentally the two are exactly equal, which is consistent with the fact that uniform acceleration is indistinguishable from a gravitational field.
H.7.2 Describe and discuss Einstein's principle of equivalence
The principle of equivalence states that there is no difference between an accelerating observer and an observer in a gravitational field. This principle underlies general relativity, and the IB exams focus on light being bent by a gravitational field; Einstein illustrated the idea with his closed-elevator thought experiment, in which light bends the same way in an elevator at rest in a gravitational field as in one accelerating in gravity-free space. The principle also predicts that time slows down near a massive body such as a black hole (see H.7.4).
H.7.3 Deduce that the principle of equivalence predicts bending of light rays in a gravitational field
H.7.4 Deduce that the principle of equivalence predicts that time slows down near a massive body
H.7.5 Describe the concept of spacetime
Spacetime is the four-dimensional world with three space coordinates and one time coordinate.
H.7.6 State that moving objects follow the shortest path between two points in spacetime
In the absence of any forces, a moving object follows the path of shortest length between two points in spacetime. Such a path is called a geodesic.
H.7.7 Explain gravitational attraction in terms of the warping of spacetime by matter
Large masses will warp space-time in such a way that the shortest distance to be travelled between point A and B for a particle is now a curve around the large mass. As such, this curved path that
the particle follows can be thought of as the gravitational attraction.
H.7.8 Describe black holes
A black hole contains a spacetime singularity, a point of infinite density, and it causes extreme curvature of the spacetime around it.
H.7.9 Define the term Schwarzschild radius
The Schwarzschild radius (R_s) is sometimes called the gravitational radius, or the event horizon. Within R_s, no object can escape from the gravitational field (because the escape velocity exceeds c within R_s).
H.7.10 Calculate the Schwarzschild radius
R_s = 2GMc^-2, where G is the gravitational constant and M is the mass of the black hole (or the star).
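As a quick numerical check (an illustrative sketch of ours, not from the original text; the solar and terrestrial masses are standard values), the formula gives about 3 km for the Sun and about 9 mm for the Earth:

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 3.0e8            # speed of light, m/s

def schwarzschild_radius(M):
    # R_s = 2GM/c^2 for a body of mass M in kg
    return 2 * G * M / c ** 2

print(schwarzschild_radius(1.989e30))   # the Sun: ~2.95e3 m
print(schwarzschild_radius(5.972e24))   # the Earth: ~8.9e-3 m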
H.7.11 Solve problems involving time dilation close to a black hole
H.7.12 Describe the concept of gravitational red-shift
H.7.13 Solve problems involving frequency shifts between different points in a uniform gravitational field
H.7.14 Solve problems using the gravitational time dilation formula
H.8 Evidence to support general relativity
H.8.1 Outline an experiment for the bending of EM waves by a massive object
H.8.2 Describe gravitational lensing
A massive galaxy's gravitational field bends the light from a more distant quasar behind it, so that two (or more) images of the quasar appear when viewed through a telescope. The galaxy acts as a lens, bending the incoming light from the quasar.
H.8.3 Outline an experiment that provides evidence for gravitational red-shift
In the Pound-Rebka experiment, a photon was sent from the ground floor of a building up to the attic. The photon's frequency measured at the attic was lower than at the ground floor (the photon is red-shifted as it climbs out of the gravitational field), providing evidence for gravitational red-shift.
| {"url":"http://en.m.wikibooks.org/wiki/IB_Physics/Relativity","timestamp":"2014-04-17T01:21:12Z","content_type":null,"content_length":"32692","record_id":"<urn:uuid:5e3549e4-c7d7-450a-888a-f979931dddbb>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00040-ip-10-147-4-33.ec2.internal.warc.gz"}
The First Norman Invasion
August 2005
History tells us that the first Norman invasion of Ireland commenced in 1169. It seems unlikely that any of the invaders took the time to dwell on the possibilities inherent in riffle shuffling a
pre-arranged deck of cards. That came 789 years later, a continent away, in more peaceful times. But that too was a kind of Norman invasion — in this case, of a stimulating new idea into the world of
card magic.
Riffle shuffling a deck of cards refers to dividing it into two (not necessarily equal) packets, and then dovetailing those together — perhaps using the thumbs to release the cards — with no
particular regularity. This type of shuffling is often thought to help randomize a deck, yet a single riffle shuffle can lead to some quite predictable results. It can be used to "reveal" some
surprising mathematics, which we explore below.
The Density of Primes
"For this trick I need three volunteers, which is quite a coincidence, as 3 is the first odd prime, and this is all about prime numbers, oddly enough." As you speak, you should casually do several
overhand shuffles of the type which merely cycle the cards around. The first volunteer is handed the deck and asked to perform a riffle shuffle. Take the cards back and fan them towards the audience,
stressing the random ordering present, commenting, "I forgot to mention that it's important that all volunteers can tell at a glance if a number is prime or not, I hope that's not a problem." Place
the deck out of view, either under a table (if you are seated) or behind your back.
"In mathematics we often prove that two sets have the same size by establishing a one-to-one correspondence between them, that's exactly what we are about to do now. It is popularly believed that
there are more composite numbers than primes, but we claim that on the contrary, exactly half of the numbers are primes. Jacks and Kings are primes of course, having values 11 and 13 respectively,
and Queens are composites, since they count as 12."
Slowly bring forward pairs of cards, one by one, as if you were doing some difficult mental (and/or physical) gymnastics before making each selection. Hand them to a second volunteer, and ask that
person to confirm that each pair consists of one prime and one composite value. Bring forward several pairs like this, as you build up confidence. Nothing can go wrong. Stop when you have made your
point; few audiences will want to see you run through all of the cards.
Conclude, "As you can see, we have found a natural one-to-one correspondence between the set of primes and the set of non-primes. Hence, the density of primes is exactly one half."
Finally turn to the third volunteer, "I know what you're thinking: there's some potential here. Perhaps some corollaries? Maybe, `The set of primes is finite'? Or a deep connection to the Riemann
Hypothesis? Good luck!"
This trick is actually quite easy to do, thanks to two key facts. The first, which few realize, is that in our context the density of primes is one half: in a standard deck, the prime values are 2,
3, 5, 7, J, K, and the composites are 4, 6, 8, 9, 10, Q. Six of one, half a dozen of the other! Omit the Aces to cut down on arguments over whether 1 is prime or not...
The second key ingredient is that riffle shuffling isn't all it's cracked up to be. Karl Fulves has pointed out that early in the 20th century, O.C. Williams went public with the basic fact that a
single irregular riffle shuffle falls far short of randomizing a deck of cards, contrary to most people's intuition. In the 1920s and 30s, this observation was expanded on by Charles Jordan, and in
the late 1950s Norman Gilbreath and others rediscovered the principle and took it to new heights. It's the 1958 incarnation that we refer to as the First Norman Invasion; a more common appellation is
the (First) Gilbreath Shuffle Principle. (Yes, there was a second one a few years later, which is actually a generalization of the first one. We'll explore that one here in 2006.)
Arrange the deck at the outset, to consist of alternating prime values and composite values, skipping the Aces. (This effect could, for instance, be done as the follow up to a four Aces trick).
Casually fanned to the audience, no pattern will be obvious. Proceed as above, first overhand shuffling as mentioned to give the illusion of card mixing, before having a volunteer do a single riffle shuffle.
Take the cards back and fan once more, pausing at two adjacent cards which are either both prime or both composite, as you comment on how jumbled the deck now is. Here's the secret: cut the deck
between these two cards. It is essential that you end up with either primes at the top and bottom, or composites at the top and bottom; if this is the case when you get the cards back, no cutting is needed.
Place the deck out of sight. If you merely take pairs off the top (or bottom) every time, then each will automatically consist of a prime value and a composite value. That's all there is to it!
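Readers who prefer empirical evidence can simulate the whole routine. The following Python sketch is ours, not part of the original trick description: card suits are ignored and the riffle is modeled by the usual proportional-drop rule. It deals an alternating prime/composite deck, riffles it once, cuts so the top and bottom cards match in type, and confirms that every pair contains exactly one prime:

import random

PRIMES = {2, 3, 5, 7, 11, 13}     # J = 11, K = 13; Aces omitted

def riffle(deck, cut):
    # Split at `cut`, then interleave the packets in a random order
    # while preserving each packet's internal order.
    left, right = deck[:cut], deck[cut:]
    out = []
    while left or right:
        take_left = left and (not right or
                              random.random() < len(left) / (len(left) + len(right)))
        out.append(left.pop(0) if take_left else right.pop(0))
    return out

# 48 card values (suits ignored), strictly alternating prime/composite.
deck = [2, 4, 3, 6, 5, 8, 7, 9, 11, 10, 13, 12] * 4
s = riffle(deck, random.randint(1, 47))

# Cut so that the top and bottom cards are of the same type; if the
# shuffled deck happens to alternate perfectly, no cut is needed.
for _ in range(len(s)):
    if (s[0] in PRIMES) == (s[-1] in PRIMES):
        break
    s = s[1:] + s[:1]             # a single-card cut

pairs = [(s[i], s[i + 1]) for i in range(0, len(s), 2)]
assert all((a in PRIMES) != (b in PRIMES) for a, b in pairs)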
This was initially conceived and performed using a more well-known division of the cards, into two like-sized packets: reds and blacks. Karl Fulves has it in his More Self-Working Card Tricks (Dover,
1984), where he remarks: "This routine was independently devised by Gene Finnell, Norman Gilbreath and others." Gilbreath's discovery was essentially this trick — and the title "Magnetic Colors" is his.
Why does it work every time? We provide an explanation at the end below, but recommend that readers first try to think it through for themselves. Simple, elegant expositions can be found in Chapter 9
of Martin Gardner's New Mathematical Diversions from Scientific American, which was originally published in 1966 (and is now available from the MAA), and in NG de Bruijn's "A riffle shuffle card
trick and its relation to quasicrystal theory," Nieuw Archief Wiskunde (1987). (Gardner had first brought "Magnetic Colors" to the public's attention in his June 1960 Scientific American column.)
Easy As π
"We're going to do an experiment here, and see if this deck of cards knows any mathematics. Can somebody get a pen and paper, please?" Overhand shuffle, and fan the cards, commenting that they are
well mixed, some face up, others face down. Give them to a volunteer, who is invited to riffle shuffle. Take the cards back, fan them towards the audience again so that all can see how "random" they
are, and then place them out of view.
Bring forward three pairs, in quick succession, and throw them on the table, noting, "Look, three pairs all facing the same way, amazing isn't it?" Then drop some cards on the floor, clumsily.
Reprimand yourself, saying, "Too late now, we'll never know which way those were facing. Let's try again."
The next pair proves to consist of two cards facing different directions, which you comment on. Then you produce four pairs facing the same way. It's time to suggest that the second person keep track
of these details on paper. Recap, "We started with three pairs facing the same way, then we got a pair facing opposite ways, then four more facing the same way. Write down 314 please." Continue,
until three more numbers have been generated, namely one, five and nine. Have 159 written down beside the 314. Then step back, and gasp, "I don't believe it! Remember I dropped some cards after the
first string of three pairs facing the same way? Let's put a marker, a period, after the 3 you wrote to denote that. Do you notice anything? Three point one four one five nine! It's as easy as π."
Actually, any numerical sequence, such as a house number or telephone number, can be spelled out in this trick. At the outset, the deck is arranged so that it alternates face-up and face-down cards.
Casually fanned to the audience, clumps are likely to occur, and the arrangement will not be obvious. Proceed as above, this time cutting (if necessary), after the second fanning, to ensure that the
top and bottom card are facing the same way.
With the cards hidden, take pairs off the top, one at a time. If brought forward as they are, all such pairs would consist of one face-up and one face-down card (in some order). By silently turning
one of the cards over, you can convert any such pair to one whose cards face the same way. In this manner you can control the outcome for all pairs produced, to generate the digits of π or any
desired number. The decimal point gag depends on your "clumsily" dropping an even number of cards, so as not to disturb the order that remains after the riffle shuffle.
This was inspired by "The Hustler" from Peter Duffie & Robin Robertson's Card Conspiracy Vol. 1 (Duffie & Robertson, 2003). The idea of using face-up and face-down cards in place of red and black in
the Gilbreath context goes back at least as far as Nick Trost in 1964.
Prime Locations
We wrap up with a simple item based on a principle which the Norman invaders of 1169 may well have conceived of, before they started marrying the locals and assimilating into the general population.
The deck is shown to be well mixed, and is split between two people, who are encouraged to shuffle further. Each person now peeks at the top card of their pile and memorizes it, before exchanging
cards and shuffling again. Finally, the two piles are recombined, and the cards cut several times. You take the deck back and fan it publicly, rapidly pulling out two cards and placing them face down
on the table. Have the two selections named and the cards turned over.
This trick would be very easy to do if one person got all red cards and the other all black cards. The selected cards would break these runs in the reassembled-and-cut deck you look through. However,
this would necessitate your being the only person who got to see the card faces. The prime and composite division used earlier, perhaps lumping in the Aces with one group just for the fun of it,
removes the REDS factor (Risk of Embarrassment, Detection and Shame), should anyone glance at faces (yours or the cards').
This is based on "Double Location" from Scarne on Card Tricks (Crown, 1950) which uses even/odd cards. Like the separation we suggest, this yields slightly lopsided packets, of twenty-four and
twenty-eight cards, respectively. Scarne credits the idea of proceeding from a colour-separated deck to a (single) location to Martin Gardner.
Why it all works
(Based on de Bruijn's treatment of the first Gilbreath principle.) Let's go back to the original concept where the cards alternate red and black. There are two cases to consider, depending on whether the bottom cards of the two packets being shuffled have opposite colours or not. In the second case, where both cards have the same colour, simply ignore one of them entirely, which means that the finished shuffled deck is "out of sync by one": taking off the bottom card of this shuffled deck reduces to the first case, and cutting the deck between two like-coloured cards has an equivalent effect.
There are three things to keep track of: the initial two packets, held in the left and right hands, which we assume consist of alternating cards with bottom cards of different colours, and the stack
of shuffled cards, which starts off empty. Consider the situation after the first two cards have fallen. If they both came from the same hand, then the stack of shuffled cards consist of a pair of
oppositely coloured cards (in what order we cannot say), and the remaining left and right packets still alternate in colour with bottom cards of different colours. But the very same observation is
true if the two fallen cards came from different hands! This argument continues to hold for each successive pair of fallen cards; hence the shuffled cards consists of unmatched pairs as claimed.
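The two cases above are easy to check by brute force. In this Python sketch (ours, not part of the column), 0 and 1 stand for red and black; for every cut point we riffle an alternating 48-card deck many times, apply the one-card correction exactly when the packet bottoms match, and verify that all pairs are unmatched:

import random

def interleave(a, b):
    # A random riffle: merge two sequences, preserving each one's order.
    out, a, b = [], list(a), list(b)
    while a or b:
        src = a if a and (not b or random.random() < 0.5) else b
        out.append(src.pop(0))
    return out

deck = [0, 1] * 24                   # alternating red/black
for cut in range(1, 48):
    for _ in range(200):
        s = interleave(deck[:cut], deck[cut:])
        if deck[0] == deck[cut]:     # like-coloured packet bottoms:
            s = s[1:] + s[:1]        # "out of sync by one" -- cut once
        assert all(s[i] != s[i + 1] for i in range(0, 48, 2))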
Norman Gilbreath lives in Los Angeles with his family and may be contacted through his web site yourmindtrip.com. On most Friday evenings, he may be found at Hollywood's Magic Castle, giving
impromptu performances of his original creations. | {"url":"http://www.maa.org/community/maa-columns/past-columns-card-colm/the-first-norman-invasion","timestamp":"2014-04-19T10:18:39Z","content_type":null,"content_length":"103766","record_id":"<urn:uuid:a9c34879-a040-48fe-9ada-bd6ecf169b86>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00284-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hedwig Village, TX ACT Tutor
Find a Hedwig Village, TX ACT Tutor
...I then did an interdisciplinary Master of Arts in Arid and Semi-arid Land Studies at Texas Tech. At the time, my GRE score was the highest they had ever seen. To relax, I solve logic puzzles,
but also like classical music and film.I am a teacher in the state of Texas, certified to teach grades 4-8, since 2003.
41 Subjects: including ACT Math, Spanish, reading, English
...Regardless of what subject I am working with someone on, I will strive to make sure the student understands. Here is a list of the subjects I have taught or am capable of teaching: Math-
Pre-Algebra High school, Linear and College Algebra, Geometry, Pre-Calculus, Trigono...
38 Subjects: including ACT Math, chemistry, reading, physics
...The skills I’ve acquired while teaching also enable me to better understand the dynamics of the student/teacher/parent relationship. I can use my expertise to not only help with improving math
comprehension, but with anything else associated with math class.I have been teaching/tutoring Algebra ...
6 Subjects: including ACT Math, geometry, algebra 2, precalculus
...When I'm not tutoring, I spend time working as an underwater videographer, doing yoga, and trying to satisfy my insatiable wanderlust. In my opinion, a standardized test is a poor proxy for
college preparedness (and intelligence), and I find it incredibly fulfilling to help students beat what of...
16 Subjects: including ACT Math, reading, writing, biology
My name is Philip, I graduated from Boston College, where I majored in both biology and philosophy. I have been in the classroom for 3 years and am a certified teacher in 8-12 Life Science and
Early Childhood - 6th Grade in the state of Texas. I try to foster an environment that encourages active ...
13 Subjects: including ACT Math, biology, algebra 1, algebra 2 | {"url":"http://www.purplemath.com/hedwig_village_tx_act_tutors.php","timestamp":"2014-04-21T14:46:11Z","content_type":null,"content_length":"24339","record_id":"<urn:uuid:9670a2f5-61f2-4b80-ac69-316b6dbf6cd5>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00139-ip-10-147-4-33.ec2.internal.warc.gz"} |