Prasad Raghavendra
Assistant Professor

Research Areas
• Theory (THY)

Teaching Schedule (Spring 2014)

Selected Publications
• A. Louis, P. Raghavendra, P. Tetali, and S. Vempala, "Many sparse cuts via higher eigenvalues," in STOC, 2012, pp. 1131-1140.
• V. Guruswami, P. Raghavendra, R. Saket, and Y. Wu, "Bypassing UGC from some optimal geometric inapproximability results," in SODA, 2012, pp. 699-717.
• P. Raghavendra and N. Tan, "Approximating CSPs with global cardinality constraints using SDP hierarchies," in SODA, 2012, pp. 373-387.
• B. Barak, P. Raghavendra, and D. Steurer, "Rounding Semidefinite Programming Hierarchies via Global Correlation," in FOCS, 2011, pp. 472-481.
• V. Guruswami, J. Håstad, R. Manokaran, P. Raghavendra, and M. Charikar, "Beating the Random Ordering Is Hard: Every Ordering CSP Is Approximation Resistant," SIAM J. Comput., vol. 40, no. 3, pp. 878-914, 2011.
• P. Gopalan, V. Guruswami, and P. Raghavendra, "List Decoding Tensor Products and Interleaved Codes," SIAM J. Comput., vol. 40, no. 5, pp. 1432-1462, 2011.
• P. Raghavendra and D. Steurer, "Graph expansion and the unique games conjecture," in STOC, 2010, pp. 755-764.
• J. R. Lee and P. Raghavendra, "Coarse Differentiation and Multi-flows in Planar Graphs," Discrete & Computational Geometry, vol. 43, no. 2, pp. 346-362, 2010.
• P. Raghavendra and D. Steurer, "Towards computing the Grothendieck constant," in SODA, 2009, pp. 525-534.
• P. Raghavendra and D. Steurer, "How to Round Any CSP," in FOCS, 2009, pp. 586-594.
• P. Raghavendra and D. Steurer, "Integrality Gaps for Strong SDP Relaxations of UNIQUE GAMES," in FOCS, 2009, pp. 575-585.
• V. Feldman, V. Guruswami, P. Raghavendra, and Y. Wu, "Agnostic Learning of Monomials by Halfspaces Is Hard," in FOCS, 2009, pp. 385-394.
• V. Guruswami and P. Raghavendra, "Hardness of Solving Sparse Overdetermined Linear Systems: A 3-Query PCP over Integers," TOCT, vol. 1, no. 2, 2009.
• V. Guruswami and P. Raghavendra, "Hardness of Learning Halfspaces with Noise," SIAM J. Comput., vol. 39, no. 2, pp. 742-765, 2009.
• P. Raghavendra, "Optimal algorithms and inapproximability results for every CSP?," in STOC, 2008, pp. 245-254.
• R. Manokaran, J. Naor, P. Raghavendra, and R. Schwartz, "SDP gaps and UGC hardness for multiway cut, 0-extension, and metric labeling," in STOC, 2008, pp. 11-20.
• P. Raghavendra, "A Note on Yekhanin's Locally Decodable Codes," Electronic Colloquium on Computational Complexity (ECCC), vol. 14, no. 016, 2007.
{"url":"https://www.eecs.berkeley.edu/Faculty/Homepages/praghavendra.html","timestamp":"2014-04-17T09:37:50Z","content_type":null,"content_length":"9668","record_id":"<urn:uuid:fe792e59-ea38-4256-a831-05427121bc62>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00398-ip-10-147-4-33.ec2.internal.warc.gz"}
Numerical Estimation of Pi Using Message Passing

We use the fact that pi = ∫₀¹ 4/(1 + x²) dx to approximate pi by computing the integral on the left numerically. We have the parallel pool perform the calculations in parallel, and use the spmd keyword to mark the parallel blocks of code.

We first look at the size of the parallel pool that is currently open:

p = gcp;

We approximate pi by the numerical integral of 4/(1 + x^2) from 0 to 1:

type pctdemo_aux_quadpi.m

function y = pctdemo_aux_quadpi(x)
%PCTDEMO_AUX_QUADPI Return data to approximate pi.
%   Helper function used to approximate pi. This is the derivative
%   of 4*atan(x).
%   Copyright 2008 The MathWorks, Inc.
y = 4./(1 + x.^2);

We divide the work between the workers (labs) by having each worker calculate the integral of the function over a subinterval of [0, 1], as shown in the figure. We define the variables a and b on all the workers, but let their values depend on labindex so that the intervals [a, b] correspond to the subintervals shown in the figure. We then verify that the intervals are correct. Note that the code in the body of the spmd statement is executed in parallel on all the workers in the parallel pool.

a = (labindex - 1)/numlabs;
b = labindex/numlabs;
fprintf('Subinterval: [%-4g, %-4g]\n', a, b);

Lab 1: Subinterval: [0   , 0.25]
Lab 2: Subinterval: [0.25, 0.5 ]
Lab 3: Subinterval: [0.5 , 0.75]
Lab 4: Subinterval: [0.75, 1   ]

All the workers now use a MATLAB quadrature method to approximate their integral. They all operate on the same function, but on the different subintervals of [0, 1] shown above:

myIntegral = integral(@pctdemo_aux_quadpi, a, b);
fprintf('Subinterval: [%-4g, %-4g]   Integral: %4g\n', a, b, myIntegral);

Lab 1: Subinterval: [0   , 0.25]   Integral: 0.979915
Lab 2: Subinterval: [0.25, 0.5 ]   Integral: 0.874676
Lab 3: Subinterval: [0.5 , 0.75]   Integral: 0.719414
Lab 4: Subinterval: [0.75, 1   ]   Integral: 0.567588

The workers have each calculated their portion of the integral, and we add the results together to form the entire integral over [0, 1]. We use the gplus function to add myIntegral across all the workers and return the sum on all the workers:

piApprox = gplus(myIntegral);

Since the variable piApprox was assigned inside an spmd statement, it is accessible on the client as a Composite. Composite objects resemble cell arrays with one element for each worker; indexing into a Composite brings the corresponding value back from the worker to the client:

approx1 = piApprox{1};   % 1st element holds value on worker 1.
fprintf('pi           : %.18f\n', pi);
fprintf('Approximation: %.18f\n', approx1);
fprintf('Error        : %g\n', abs(pi - approx1))

pi           : 3.141592653589793116
Approximation: 3.141592653589793560
Error        : 4.44089e-16
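For readers without the Parallel Computing Toolbox, the same decomposition can be sketched in plain Python. This is our own hedged analogue, not MathWorks code: `quad_pi` and its midpoint rule stand in for `pctdemo_aux_quadpi` and `integral`, and `Pool.map` plus a plain `sum` play the roles of the `spmd` block and `gplus`.

```python
from multiprocessing import Pool
import math

def quad_pi(bounds, steps=10_000):
    # Composite midpoint rule for 4/(1 + x^2) over one subinterval [a, b].
    a, b = bounds
    h = (b - a) / steps
    return h * sum(4.0 / (1.0 + (a + (i + 0.5) * h) ** 2) for i in range(steps))

if __name__ == "__main__":
    n = 4  # number of workers, mirroring numlabs above
    subintervals = [(i / n, (i + 1) / n) for i in range(n)]
    with Pool(n) as pool:              # spmd-style data-parallel block
        parts = pool.map(quad_pi, subintervals)
    pi_approx = sum(parts)             # the gplus reduction step
    print(f"Error: {abs(math.pi - pi_approx):.3g}")
```

The design mirrors the MATLAB demo: each worker sees only its own [a, b], and a single reduction recombines the partial integrals.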
{"url":"http://www.mathworks.com/help/distcomp/examples/numerical-estimation-of-pi-using-message-passing.html?nocookie=true","timestamp":"2014-04-21T04:51:12Z","content_type":null,"content_length":"37687","record_id":"<urn:uuid:df5d3975-3f51-4e0b-929d-990bf02ae618>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
Write down the equation in spherical and cylindrical coordinates.

March 20th 2012, 07:57 PM #1 Sep 2010
Write down the equation in spherical and cylindrical coordinates: x^2 + y^2 + z^2 = 2. Not sure how to start this problem. If someone could point me in the right direction, that would be great.

March 21st 2012, 02:33 AM #2
Re: Write down the equation in spherical and cylindrical coordinates.

March 27th 2012, 08:49 PM #3 Sep 2010
Re: Write down the equation in spherical and cylindrical coordinates.
I know that θ = tan^-1(y/x) and ϕ = cos^-1(z/r), but how do I know what x, y, z are?

March 28th 2012, 04:47 AM #4 Sep 2010
Re: Write down the equation in spherical and cylindrical coordinates.
Any further help on this question would be much appreciated; I can't find it in the textbook or on the internet anywhere.

March 29th 2012, 09:28 AM #5 MHF Contributor Apr 2005
Re: Write down the equation in spherical and cylindrical coordinates.
Then you must be completely misunderstanding everything, because any text I have ever seen defines spherical coordinates using $\rho= \sqrt{x^2+ y^2+ z^2}$. So what formulas does your text use to define "polar" and "spherical" coordinates? (The two you post are for spherical coordinates, but since there are three coordinates, you should have three formulas, not two.)

March 30th 2012, 05:01 PM #6 MHF Contributor Apr 2005
Re: Write down the equation in spherical and cylindrical coordinates.
Since linalg123 hasn't got back to us, I will continue myself. Every Calculus text I have ever seen defines "polar coordinates" with the formulas $x= r\cos(\theta)$, $y= r\sin(\theta)$. Strictly speaking, "polar coordinates" are defined only in two dimensions, but an immediate extension is "cylindrical coordinates", using z as the third variable. Spherical coordinates are defined by the formulas $x= \rho\cos(\theta)\sin(\phi)$, $y= \rho\sin(\theta)\sin(\phi)$, $z= \rho\cos(\phi)$.

It is then easy to see that, in cylindrical coordinates, $x^2+ y^2+ z^2= r^2\cos^2(\theta)+ r^2\sin^2(\theta)+ z^2= r^2(\cos^2(\theta)+ \sin^2(\theta))+ z^2= r^2+ z^2$, so that $x^2+ y^2+ z^2= 2$ becomes $r^2+ z^2= 2$.

And in spherical coordinates, $x^2+ y^2+ z^2= \rho^2\cos^2(\theta)\sin^2(\phi)+ \rho^2\sin^2(\theta)\sin^2(\phi)+ \rho^2\cos^2(\phi)= \rho^2\sin^2(\phi)(\cos^2(\theta)+ \sin^2(\theta))+ \rho^2\cos^2(\phi)= \rho^2(\sin^2(\phi)+ \cos^2(\phi))= \rho^2$. So in spherical coordinates $x^2+ y^2+ z^2= 2$ becomes $\rho^2= 2$ or, since $\rho$, the distance from the origin to the point, is never negative, $\rho= \sqrt{2}$.
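The conversions above are easy to sanity-check numerically. This short Python snippet (ours, not from the thread) picks an arbitrary point on the sphere with ρ = √2 and confirms both the spherical form ρ² = 2 and the cylindrical form r² + z² = 2:

```python
import math

def spherical_to_cartesian(rho, theta, phi):
    # Convention used in the thread: x = rho*cos(theta)*sin(phi),
    # y = rho*sin(theta)*sin(phi), z = rho*cos(phi).
    return (rho * math.cos(theta) * math.sin(phi),
            rho * math.sin(theta) * math.sin(phi),
            rho * math.cos(phi))

rho = math.sqrt(2)            # the sphere x^2 + y^2 + z^2 = 2
theta, phi = 1.1, 2.3         # arbitrary angles
x, y, z = spherical_to_cartesian(rho, theta, phi)

r = math.hypot(x, y)          # cylindrical radius
assert math.isclose(x**2 + y**2 + z**2, 2.0)   # spherical form: rho^2 = 2
assert math.isclose(r**2 + z**2, 2.0)          # cylindrical form: r^2 + z^2 = 2
```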
{"url":"http://mathhelpforum.com/geometry/196203-write-down-equation-spherical-cylindrical-coordinates.html","timestamp":"2014-04-18T03:11:29Z","content_type":null,"content_length":"50620","record_id":"<urn:uuid:8121a77e-b8a9-4ff4-8b02-c65dd24b5c92>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
Large numbers

How would I print out the result of one number being repeatedly increased by two numbers 999,999 times, where both numbers are about 16,500 digits long? It must display ALL digits of the final result. I know I can't use normal data types, so what do I do? It looks kinda like this:

for (int i = 0; i < 999999; ++i)
{
    total += a;
    total += b;
}

a and b being 2 different numbers with about 16,500 digits.
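In C++ the usual answer is an arbitrary-precision library such as GMP or Boost.Multiprecision, since built-in integer types top out at 64 bits. As a quick illustration of what the loop actually computes, here is the same calculation in Python, whose integers are arbitrary precision natively; the operand values are hypothetical stand-ins for the ~16,500-digit numbers described in the post.

```python
a = 10 ** 16499 + 7     # hypothetical ~16,500-digit operands
b = 10 ** 16499 + 11

# Adding a and b to the total 999,999 times each is just:
total = 999_999 * (a + b)

# A short loop confirms the closed form on a smaller repeat count.
check = 0
for _ in range(1_000):
    check += a
    check += b
assert check == 1_000 * (a + b)

print(len(str(total)), "digits")   # str(total) prints every digit
```

Converting to a string is all it takes to "display ALL digits"; with a bignum library in C++ the structure of the loop is identical.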
{"url":"http://www.cplusplus.com/forum/general/92959/","timestamp":"2014-04-18T15:48:39Z","content_type":null,"content_length":"7779","record_id":"<urn:uuid:b72402ad-7fe9-42f7-8295-37345d5eb06a>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
Shmoop Online Course - Algebra I—Semester B

Algebra I—Semester B
Double the equations, double the fun.

Course Description

It doesn't matter whether you love it or hate it. The fact remains that Algebra is around and, by golly, it's here to stay. What's not to love about it, though? We'll admit that it might get a bit irrational from time to time, and there's no denying a few of its radical tendencies, but it can simplify your life in more ways than the square root of one. Besides, its graphing skills are off the charts. Why not give it a chance? Take it from us: there's a high probability of it working out.

Semester B is chock-full of stuff that we haven't come across in other math classes. In this course, we'll
• start out in familiar territory with systems of equations.
• move quickly onto radicals and quadratics, lines' curvy cousins.
• open the gate to polynomials and rational expressions, which are all about factoring. (And plaid! They're pretty stylish.)
• finish up with probability and statistics. (Well, maybe. There's a 99% chance we'll get there.)

Get ready for interactive readings, activities, and problem sets galore.

P.S. Algebra I is a two-semester course. You're looking at Semester B, but you can check out Semester A here.

Technology Requirements
• Microsoft Office, Google Docs, or another word processing program
• A scanner (or access to one)
• A camera (a camera phone is sufficient)
• All other work can be done via the Shmoop website.

Required Skills
Knowledge of pre-algebra concepts and the material from Semester A of this course

Course Breakdown
Purchase units individually

Unit 7. Systems of Equations
After having graphed linear equations and learned the bare bones of functions, we'll solve and graph systems of linear equations. Whether they're given as equations or word problems, we'll be able to tackle these problems in their many forms. Wear a helmet, though. The last thing you want is a concussion.

Unit 8. Radicals and Quadratic Equations
We'll start by performing major arithmetic operations on square roots, and by the time this unit's over, we'll be able to wrangle radicals just about anywhere. That'll lead us into quadratic equations, which are like linear equations with more twists and turns. (We're not kidding. Have you seen them behind the wheel?)

Unit 9. Polynomials
Polynomials are all about factoring. They also have a fondness for Bocce ball, but that's not as relevant. Once we learn a thing or five about factoring, we'll be able to understand polynomials on the equation level. And if you really want to connect with polynomials, consider joining in a round of Bocce ball.

Unit 10. Division of Polynomials
Right when you thought you knew all there was to know about polynomials, we go and start dividing them. Luckily, all our old factoring tricks will still apply, and they'll be even more useful as we start dealing with rational expressions. Too bad they aren't nearly as rational as their name suggests!

Unit 11. Probability and Statistics
We'll start out with the beautification of data, because data alone is just boring. Get ready for stem and leaf plots, bar graphs, histograms, pie charts, box and whisker plots, and finally scatter plots. Then, we'll move on to probability, which is probably going to be fun. Actually, scratch that. It's definitely going to be fun.

• Course Length: 18 weeks
• Course Number: 210
• Grade Levels: 8, 9, 10
• Course Type: Basic
• Category:
• Prerequisites: Algebra I—Semester A

Just what the heck is a Shmoop Online Course?
{"url":"http://www.shmoop.com/courses/algebra-1-semester-b/","timestamp":"2014-04-17T21:58:02Z","content_type":null,"content_length":"45436","record_id":"<urn:uuid:826e9a27-bb68-400d-8524-f5ca5fc89446>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
Quotations by Carl Jacobi

It is true that Fourier had the opinion that the principal aim of mathematics was public utility and explanation of natural phenomena; but a philosopher like him should have known that the sole end of science is the honor of the human mind, and that under this title a question about numbers is worth as much as a question about the system of the world.
Quoted in N Rose, Mathematical Maxims and Minims (Raleigh NC, 1988).

God ever arithmetizes.
Quoted in H Eves, Mathematical Circles Revisited (Boston, 1971).

Man muss immer generalisieren. (One should always generalize.)
Quoted in P Davis and R Hersh, The Mathematical Experience (Boston, 1981).

The real end of science is the honour of the human mind.
Quoted in H Eves, In Mathematical Circles (Boston, 1969).

It is often more convenient to possess the ashes of great men than to possess the men themselves during their lifetime. [Commenting on the return of Descartes' remains to France]
Quoted in H Eves, Mathematical Circles Adieu (Boston, 1977).

Mathematics is the science of what is clear by itself.
Quoted in J R Newman, The World of Mathematics (New York, 1956).

The God that reigns in Olympus is Number Eternal.
Quoted in T Dantzig, Number: the Language of Science.

Mathematics exists solely for the honour of the human mind.
Quoted in W R Fuchs, Mathematics for the Modern Mind.

Dirichlet alone, not I, nor Cauchy, nor Gauss knows what a completely rigorous mathematical proof is. Rather we learn it first from him. When Gauss says that he has proved something, it is very clear; when Cauchy says it, one can wager as much pro as con; when Dirichlet says it, it is certain ...
Quoted in G Schubring, Zur Modernisierung des Studiums der Mathematik in Berlin, 1820-1840.

JOC/EFR February 2006
{"url":"http://www-gap.dcs.st-and.ac.uk/~history/Quotations/Jacobi.html","timestamp":"2014-04-17T04:05:37Z","content_type":null,"content_length":"2741","record_id":"<urn:uuid:85df6e72-ab40-4f1d-9dd4-7eae7cfd5f0f>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
Relating the Bipolar Spectrum to Dysregulation of Behavioural Activation: A Perspective from Dynamical Modelling Bipolar Disorders affect a substantial minority of the population and result in significant personal, social and economic costs. Understanding of the causes of, and consequently the most effective interventions for, this condition is an area requiring development. Drawing upon theories of Bipolar Disorder that propose the condition to be underpinned by dysregulation of systems governing behavioural activation or approach motivation, we present a mathematical model of the regulation of behavioural activation. The model is informed by non-linear, dynamical principles and as such proposes that the transition from “non-bipolar” to “bipolar” diagnostic status corresponds to a switch from mono- to multistability of behavioural activation level, rather than an increase in oscillation of mood. Consistent with descriptions of the behavioural activation or approach system in the literature, auto-activation and auto-inhibitory feedback is inherent within our model. Comparison between our model and empirical, observational data reveals that by increasing the non-linearity dimension in our model, important features of Bipolar Spectrum disorders are reproduced. Analysis from stochastic simulation of the system reveals the role of noise in behavioural activation regulation and indicates that an increase of nonlinearity promotes noise to jump scales from small fluctuations of activation levels to longer lasting, but less variable episodes. We conclude that further research is required to relate parameters of our model to key behavioural and biological variables observed in Bipolar Disorder. Citation: Steinacher A, Wright KA (2013) Relating the Bipolar Spectrum to Dysregulation of Behavioural Activation: A Perspective from Dynamical Modelling. PLoS ONE 8(5): e63345. 
doi:10.1371/

Editor: Xiaoxi Zhuang, University of Chicago, United States of America

Received: January 10, 2013; Accepted: March 31, 2013; Published: May 14, 2013

Copyright: © 2013 Steinacher, Wright. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: This work was not specifically funded by a grant. AS's main research is funded by an Engineering and Physical Sciences Research Council (EPSRC) Grant (EP/I017445/1) in the UK. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Mood disorders such as bipolar disorder have not yet attracted substantial interest in the community of dynamical modelling. This is surprising, since bipolar disorder is one type of affective disorder exhibiting strikingly complex switch-like dynamics between normal, depressive and manic or hypomanic states. These state transitions may be regular, but may also lead to chaotic behaviour. Diagnostically, several forms of Bipolar Disorder exist, including: Bipolar I Disorder (BD-I), often considered to be the most severe form of the disorder and the only one to include presence of full manic episodes; Bipolar II Disorder (BD-II), which comprises both hypomanic and major depressive episodes; Cyclothymia, which involves periods of hypomania and minor depression over at least two years with little time spent in a euthymic state; and variants classified as Bipolar Disorder not otherwise specified, which involve fluctuations in levels of depressive and hypomanic symptoms that are not sufficiently severe or prolonged to represent full affective episodes, yet fall outside the person’s normal range of behaviour [1].
Bipolar disorder is a considerable public health problem. Worldwide, the prevalence of illnesses from the bipolar spectrum is an estimated 2.4% [2]. Compared to the population average, patients suffering from bipolar disorder have a 12.3 times higher rate of suicide [3]. Moreover, bipolar disorder is also associated with increased risks of other illnesses, such as coronary heart disease or cancer [4]. Given that the precise cause of this illness is not yet known, and there is considerable room for improvement in terms of treatment [5], dynamical modelling seems to be a useful way to integrate knowledge from many levels of research and rigorously test hypotheses of our current understanding of this illness. The pursuit of such an approach has so far been hindered by a lack of suitable data, and only a few attempts have been made in this direction. In one mathematical modelling study, bipolar disorder is described in terms of oscillatory behaviour of emotional states using a van der Pol oscillator [6]. In this model, a stable limit cycle is reached at the onset of the illness, which can be reduced in amplitude upon treatment. Aside from this deterministic approach, there are also a few models dealing with stochasticity in affective disorders and the role of noise in episode sensitisation [7]–[9]. These studies implement aspects of the kindling model, which is used in several neuropsychiatric contexts [10], describing the progression from externally induced disease episodes to autonomously occurring episodes following sensitisation. Modelling a positive feedback between sensitisation and an unspecific disease system, these studies provide a conceptual understanding and are able to reproduce some phenomena known from bipolar disorder, such as transient events and chaotic behaviour. One recently published mathematical model describes bipolar disorder by means of a dynamical system of a double negative feedback loop which gives rise to bistability [11].
Similar to previous deterministic approaches, it limits itself to the description of sustained oscillations between extreme mood states. However, while mood often alternates in bipolar disorder, there is evidence from longitudinal studies that this alternation is not truly oscillatory but is instead directed by chaotic attractors [12], [13]. Moreover, patients typically exhibit quite extended episodes of normal mood between manic and depressive episodes. These intermediate phases, and the events triggering a transition to depressive or hypomanic/manic episodes, are of high interest for clinical treatment. In this paper, we present a minimal mathematical model of mood regulation in bipolar disorder, implementing hypotheses regarding the auto-regulatory nature of the Behavioural Activation/Approach System (BAS), and predicting that increasing nonlinearity in this system leads to multistability and switch-like transitions between activation or engagement levels in an individual. This model is informed by data on BAS activity in bipolar patients and is able to reproduce some typical dynamics of bipolar disorder, such as a slower recovery time after frustrating or rewarding events in bipolar patients. This effect has been associated with the number of previous episodes in empirical studies [14]. In our model this can be reproduced by increasing the nonlinearity parameter . The BAS promotes active engagement with the environment following signals of reward [15]–[18]. With increasing BAS activity, an individual experiences increased cognitive activity that aims towards achieving goals and approach behaviours, corresponding to positive emotions (such as motivation or elevated mood), but also potentially irritability and anger if goal progress is thwarted [15], [19], [20]. The BAS is mainly activated by rewarding stimuli, such as food, social contact, sex or novelty.
Following such stimuli, it increases further engagement with the environment. Typical behaviours associated with high BAS levels are high energy, locomotion and motivation [15]. Neurobiologically, the BAS is thought to be related to the dopaminergic reward pathways [15], [16]. There is evidence from electroencephalographic and neuroimaging data that higher relative activity in the left prefrontal cerebral cortex is associated with approach-related motivation [21], possibly implicating it in the BAS circuitry. A number of studies support the idea that hypomanic/manic and depressive symptoms in bipolar spectrum disorders are both related to hypersensitivity of the BAS [15], [22]–[25]; however, the mechanistic basis of such hypersensitivity is still unclear. Here, we define the BAS to act as an auto-activation system, controlling behavioural activity or engagement and exhibiting inherent properties of nonlinearity. Our minimal model shows that an increase in the nonlinearity of this system alone is sufficient to model the transition from normal-type activity dynamics to those found in bipolar patients. With the addition of an auto-inhibitory component, this system is multistable at higher degrees of nonlinearity. We can show that the continuous transition between normal-type and bipolar-type dynamics is due only to variation in the nonlinearity parameter in the system. This hypothesis is directly informed by an empirical study on behavioural dynamics in bipolar patients which shows that the time taken for behavioural activation levels to recover from rewarding or frustrating events increases with the number of previous manic and depressive episodes, respectively [14]. Data from this study have been re-analysed and the model has been scaled to fit these empirical findings. Deterministic and stochastic simulations of the model have been implemented using numerical integration of the system.
Results from the stochastic simulations elucidate the roles of extrinsic and intrinsic noise in an individual’s development of bipolar disorder. As known from other control systems based on positive feedback, nonlinearity tends to increase noise in the system. Our results indicate that increasing noise levels lead to a transition from lower-scale variability (local fluctuations in activation levels around a steady state) to higher-scale variability (occurrence of episodes with lower local fluctuations).

Model Formulation

The model has one variable, , which stands for the engagement or level of behavioural activation and describes the propensity of an individual to interact with its environment. It should be noted at this point that while high activation levels might correlate with positive or elevated mood and low activation levels correlate with low moods, our intention is not to model mood states, but to specifically model behavioural activation/approach levels, which have been shown to correlate with manic, hypomanic and depressive episodes in bipolar disorder [22], [23], [26], [27]. By doing so, we allow for the occurrence of so-called mixed states [28], and moreover account for the finding that manic or hypomanic episodes can manifest themselves as an increase in irritability or anger, which can be an output of high BAS activity [20]. The regulation of is here understood as being governed by two feedbacks: one auto-activation feedback, mirroring the BAS and incorporating the function of self-activation, expressed as the up-regulation of by itself; and an additional auto-regulatory negative feedback, which stands for a tendency to keep at normal levels. Here, is treated analogously to a substance being produced and degraded in certain processes. This mathematical formalism is very common in theoretical approaches describing regulatory networks on molecular or cellular levels, such as gene expression or metabolic networks [29].
Since we are interested only in the dynamical relationship of the variables of interest at a very high organisational level, without attempting to capture the exact biological underpinnings of these phenomena, we borrow the terminology of systems biology to describe BAS regulation. This approach allows us to minimise the level of complexity in the system and focus on specific dynamics of interest, which in this case is the role of nonlinearity in mood regulation. The system is open, such that there is a constant influx of at the rate , and a decay of with the rate . The constant influx rate determines the baseline level of the fixed point, especially for the lower fixed point in the region of multistability at high . Thus, if , the lower fixed point at high would be , which corresponds to a non-existent level of activation. While this would not change the system behaviour qualitatively, we chose to be a positive rate to ensure a less extreme steady-state level for . Overall, this can be written as (1), where stands for the positive feedback and stands for the negative feedback in which is involved. Auto-activation of is expressed as a Hill function, such that (2), with being the maximum rate of ‘production’, being the rate at which production is half of the maximum rate, and being the nonlinearity parameter. In molecular systems, usually corresponds to the cooperativity of an enzyme. In our case, it can be interpreted as the effectiveness of the system to activate , with defining the shape of its activation curve . Thus, low levels of nonlinearity lead to hyperbolic response dynamics of , whereas high levels of nonlinearity lead to a sigmoidal-shaped, ultrasensitive response (see fig. S1) [30]. As a consequence, at certain levels of the responsiveness of the activation is low, whereas at a given level determined by the parameter K the responsiveness is high. This renders certain parts of the mood regulation system more sensitive than others.
This is in line with the idea that the BAS is poorly regulated in individuals with bipolar disorder, linking depression to an inactive BAS and mania to an overactive BAS [15], [22], [31]. We also propose the existence of an additional negative feedback mechanism, stabilising the system around normal activity levels. This negative feedback loop is expressed as (3), where is the maximum rate of the feedback, is the nonlinearity parameter for the negative feedback, and is the level of at which half of the maximum rate is reached, which in our system is defined to be the normal activity level , in the following referred to as . The nonlinearity parameter in the negative feedback loop has a similar role for the shape of this response function as has for , such that higher values for lead to sigmoidal response functions, whereas lower values approach a linear response function. We choose and to have the same value. Moreover, the decay parameter is defined to be at a value which allows the positive feedback to be at equilibrium at (no nonlinearity). If the nonlinearity parameter is increased to a level which allows for bistability in the auto-regulatory feedback system, the unstable fixed point lies at . By introducing the negative feedback, this instability is locally stabilised. This leads to tristability, with a high, low and medium activity level (see figure 1 and figure S2). Speculatively, this negative feedback could correspond to behavioural attempts by individuals to avoid extreme levels of behavioural activation (in other words, implementation of active coping strategies [32]). The limited effectiveness of such a mechanism is accounted for by saturating this feedback at its upper and lower bounds, such that it is not sufficient to equilibrate activation levels beyond a threshold value defined by the dynamical properties of the system (figures 1 and 2).
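The qualitative effect of the nonlinearity parameter can be illustrated with a minimal numerical sketch. The paper's own symbols and fitted parameter values were not preserved in this copy, so every name and number below is our own illustrative assumption (x for the activation level, c for the constant influx, v and K for the Hill function, d for linear decay); only the structure described above (constant influx plus Hill-type auto-activation minus decay) is taken from the text. The sketch counts steady states as the Hill coefficient n grows:

```python
import numpy as np

def dxdt(x, n, c=1.0, v=10.0, K=5.0, d=1.0):
    # Constant influx + Hill-type auto-activation - linear decay.
    # Names and values are illustrative, not the authors' fit.
    return c + v * x**n / (K**n + x**n) - d * x

def fixed_points(n, lo=0.0, hi=15.0, points=3001):
    # Locate steady states as sign changes of dx/dt on a grid.
    grid = np.linspace(lo, hi, points)
    f = dxdt(grid, n)
    crossings = np.nonzero(np.sign(f[:-1]) != np.sign(f[1:]))[0]
    return grid[crossings]

print(len(fixed_points(n=1)))  # hyperbolic response: one steady state
print(len(fixed_points(n=4)))  # sigmoidal response: three (two stable, one unstable)
```

With these toy parameters the system is monostable at n = 1 and acquires two extra fixed points at n = 4, mirroring the mono- to multistability transition the model attributes to increasing nonlinearity.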
The susceptibility of the BAS system to events of reward or frustration is accounted for by introducing the variable , which feeds in and is consumed by levels of with the rate . Note in this context that frustrating events do not need to be associated with negative directions in , such that frustrations could also increase . We however tested negative and positive directions in to ensure that all potential external effects on the regulation of in our system could be accounted for. Therefore, in the following we refer to the effect of on levels of as inhibiting or activating events rather than as events of reward or frustration. The complete system is thus written as. Figure 1. Bifurcation diagram of the system, showing a saddle-node bifurcation; and susceptibility to parameter changes. Thick squared markers signify stable branches of the system, smaller dots signify unstable branches. A: increasing values of are corresponding to darker colors. B: Change in model parameters and their effects on shifting branches in the bifurcation diagram. Increasing a parameter is indicated by arrow direction. Figure 2. Return of mood to baseline after activating and inhibiting events. A: return to baseline levels after activating events. B: return to baseline levels after inhibiting events. Empirical data (thick black lines) are compared to simulation data (coloured lines). BES (Behavioural Engagement Scale) levels are normalized to maximum value after disturbance of reward and frustration. Zero on the y axis corresponds to the baseline BES level. Solid black line: Control group (N = 18 for reward, N = 51 for frustration plot) Dashed black line: Patients with less than 10 previous episodes. (N = 8 for reward, N = 26 for frustration plot) Dotted black line: Patients with 10 previous episodes or more. (N = 7 for reward, N = 14 for frustration plot). For the simulation data: Black and gray lines correspond to and , respectively. N represents the sample size. 
All typical parameter values employed in the simulations are given in table 1. Table 1. Parameter values in the model. The system of differential equations was numerically solved using Scientific Python [33] and its implementation of the LSODA solver (scipy.odeint). For stochastic simulations, the Euler-Maruyama scheme [34] was applied by adding a noise term to each equation, defined by the Gaussian distribution and scaled by the square root of the timestep size, . Inhibiting and activating events are chosen to occur spontaneously with the probability , and the amplitude of these events is defined to take a value from a continuous random distribution between 0 and the maximum event amplitude . For an analytic description of the time-series data generated by simulations, we introduce the measure , which is an indicator for the “episodicity” of the activity dynamics. By moving an averaging window with the length of 7 days over the time series data, the number of times in which the average activation level in this window is within defined bounds is recorded. Subsequently, the “episodicity” of a solution is defined as(6) where is the number of averaging windows, corresponding to the total duration of the simulation in days minus the size of the window , and is the amount of occasions at which one of three activity levels is detected: low activity or depression is defined for window averages below 80, high activity or mania is defined for window averages above 120, and medium activity is defined for values between 80 and 120. These values are arbitrary units on our activation level scale. By doing so, we are able to distinguish between solutions that show a high variance, captured by the global signal to noise ratio SNR of the time-course data, and which still stay around one steady state of the system; and solutions that frequently switch between steady states. For time-series data of single individuals, single stochastic simulation runs were recorded. 
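The Euler-Maruyama scheme described above can be sketched as follows; this is a hedged illustration, not the authors' code, and a simple mean-reverting placeholder drift stands in for the model's actual right-hand side. The Gaussian noise term is scaled by the square root of the timestep, as in the main text:

```python
import numpy as np

def euler_maruyama(drift, x0, sigma, dt, n_steps, rng):
    """Integrate dx = drift(x) dt + sigma dW with the Euler-Maruyama scheme:
    each Gaussian increment is scaled by sqrt(dt)."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        noise = rng.normal(0.0, 1.0)
        x[i + 1] = x[i] + drift(x[i]) * dt + sigma * np.sqrt(dt) * noise
    return x

rng = np.random.default_rng(1)
# Placeholder drift (relaxation towards an activity level of 100); the
# model's actual right-hand side combines influx, decay and both feedbacks.
traj = euler_maruyama(lambda x: 0.5 * (100.0 - x), x0=100.0,
                      sigma=0.1, dt=0.1, n_steps=1000, rng=rng)
```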
For analysis of dynamical features, such as return to baseline levels after activating or inhibiting external events, or measures such as episodicity, the signal-to-noise ratio or the number of switching events per simulation, multiple simulations were performed for each parameter set and the average outcome for each respective analysis was recorded. If not explicitly stated otherwise, the parametric noise value was set to 0.1 and the probability for external events to occur, was set to 0.015. The maximum amplitude of external events occurring, , was set to vary between [−20, 20]. Our main hypothesis is that the degree of nonlinearity in the auto-activation feedback system (the BAS) is a proxy for the stage of illness, such that a value of 1 for the nonlinearity parameter corresponds to an unaffected individual and higher values of stand for an increased propensity for developing the illness. Once the individual is already bipolar, even higher values of may correspond to the number of previous depressive or manic episodes. It has to be noted, however, that our model does not make predictions about how this progression in nonlinearity occurs, and therefore only captures the behaviour of the modelled system at a set level of . In the bifurcation analysis of our model the critical values of are shown to correspond to mono- and multistable solutions of the system as a function of (see figure 1A). At low values of , the system is monostable, whereas with increasing bifurcation into bistability and further into tristability happens. The influence of changes in parameter values other than is shown in figure 1B. Parameters , and determine the level of at which bifurcation occurs, and the activity maxima that the system allows for.
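The episodicity measure can be sketched as follows. The exact counting convention is an assumption (here: the fraction of 7-day moving-average windows whose mean falls in the low band below 80 or the high band above 120), since the symbols of the original definition were not fully recoverable:

```python
import numpy as np

def episodicity(activity, window=7, low=80.0, high=120.0):
    """Fraction of moving-average windows whose mean lies in the low
    (< 80) or high (> 120) band; means in between count as medium activity."""
    means = np.convolve(activity, np.ones(window) / window, mode="valid")
    extreme = np.count_nonzero((means < low) | (means > high))
    return extreme / len(means)

flat = np.full(365, 100.0)    # stays at the medium steady state all year
manic = np.full(365, 150.0)   # a persistent high-activity episode
```

A trajectory that only fluctuates around the medium steady state scores 0, while one locked into an extreme state scores 1, which is the distinction between noisy-but-stable and episodic solutions drawn in the text.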
Parameter , the nonlinearity parameter in the negative feedback standing for behavioural counteraction of mood swings, determines the range of stability for the branch at medium activity levels: increasing it extends this range to higher , but also decreases the range between this middle stable branch and the neighbouring unstable branches at the onset of tristability, compared with lower values. We expect our model to be robust against parameter changes insofar as the potential for multistability is given and the middle stable branch in the bifurcation diagram remains stable. For example, if the parameter were decreased, bifurcation of the system would happen at higher levels of , as can be seen from the outcome of our bifurcation analysis in figure 1, and thus higher levels of would be needed to compensate for this effect and maintain a qualitatively similar simulation output. While auto-regulation of behavioural activity by the negative feedback loop is effective for all values of , the attractor for this regulator is identical to the attractor for the positive feedback loop prior to bifurcation. This can be interpreted as a tendency to avoid fluctuating activity levels related to mood swings that is only observable in bipolar patients who have already experienced extreme changes in behavioural activity.

Number of Previous Bipolar Episodes is Associated with a Slower Return to Baseline Mood After Disturbing Events

Our model is able to reproduce the result that as the number of previous bipolar episodes increases, so does the time taken to recover from frustrating or rewarding events, solely by increasing the nonlinearity parameter (see figures 2 and 3). This indicates that, by changing only the nonlinearity in the system, we are able to capture typical response behaviours for several stages of bipolarity, as found in a previous empirical study [14].
Time series data from this study were re-analysed to infer rate constants for the decay of activation levels employed by our model, which is expressed by the parameter , justifying exponential decay of activation levels in the time domain after disturbing events (figure 2). For this, only the top 20 percent of frustrating or rewarding events recorded in the data were taken from empirical data based on self-report questionnaires (for methods of data collection see [14]). Of these data, the BES (Behavioural Engagement Scale, expressing a scale of activation level) of the actual and following days of the event was traced until the return to the median BES, to give as the time of return to baseline, in terms of number of days. The empirical data points shown in figure 2 represent a normalised curve of the number of individuals still above their median BES after a rewarding or frustrating event. The time trajectory of these data shows a clear exponential decay, justifying our use of which mirrors exponential decay of perturbed levels after external activating or inhibiting events. In the simulations, return to baseline was defined as the time taken after disturbance of activation levels by an inhibiting or activating event to fall below the threshold of mean BES %. Disturbance in behavioural activity was 10 units on our scale of activity levels in these simulations. The time course data of our simulation show a similar behaviour in activation level decay, compared to the empirical data (figure 2). In general, for a larger range of , we find the trend of increasing time to return to baseline after disturbing events, both for events of inhibition and activation (figure 3). This value increases exponentially towards values of that lead the system to bifurcation, at which point disturbances can kick the solution to a higher or lower stable branch.
A further set of simulations using the stochastic version of our model yielded similar results, confirming the trend of slower return to baseline for higher degrees of nonlinearity in the auto-activation feedback loop (see figure 4). Figure 3. Return to base engagement levels after inhibiting and activating events in deterministic simulations with different values for the nonlinearity parameter . Disturbances were of magnitude (activating events) or (inhibiting events). The solution is defined to have returned if it has fallen under the 1% threshold difference to the base level BES. Figure 4. Return to base levels after activating and inhibiting events in stochastic simulations with different values for the nonlinear parameter n. A: return to base levels after activating events, B: return to base levels after inhibiting events. Boxplots show the outcome of 50 individual runs per . Disturbances were of magnitude (activating events) or (inhibiting events). The solution is defined to have returned if it has fallen under the 1% threshold difference to the base activity levels.

Stochastic Simulations of Behavioural Activity Regulation Reveal Realistic Time-course Data

Given different settings of parametric noise , event noise (the maximum amplitude of inhibiting and activating events externally influencing the system) and nonlinearity, our stochastic simulations are able to reproduce realistic time course data of mood dynamics for normal individuals and bipolar patients (figure 5). For lower levels of , activity remains stable around a medium level, despite regular changes due to parametric and event noise. At levels which lead the system to bifurcation, events are most likely to drive jumps in behavioural activity, whereas parametric noise is sufficient to achieve this at higher . Our model is able to reproduce mood patterns that are not truly oscillatory, yet show cyclic and regular shifting between extreme mood levels and intermediate mood. Figure 5.
Typical outcomes of stochastic simulations for different settings of with different settings for . From A to D the "episodicity" value increases (), an indicator that the time course captures real episodes and not just fluctuations in behavioural activity. Gray lines are simulated activity levels, black lines are moving averages with a window of 7 days, dotted lines signify activating and inhibiting events. The time course spans 2 years.

With Increasing Nonlinearity, Noise Moves between Scales

Stochastic simulations under different ranges of parametric noise , and event noise, expressed as variations of the maximum inhibiting or activating amplitude of events, show that both types of noise are able to account for the increase in intrinsic system noise levels (figure 6). Also, both are able to lead to episodes, as indicated by an increase of our measure for episodicity, . However, while intrinsic noise alone is able to generate episodes at higher , lower requires event noise to be high to generate switches into episodes (figure 7). Nonlinearity increases noise at a given behavioural activity level, which decreases the global signal-to-noise ratio (SNR). We calculated the ratio of this global SNR () against the summed SNR on the moving average windows (). This ratio is close to zero at low and increases rapidly as leads the system to bifurcation (see figure 8). This indicates that the level on which noise occurs switches from a smaller scale to larger scales. At below-bifurcation values for , small-scale noise occurs, causing various degrees of instability in activity, which seen from a dynamical viewpoint are fluctuations around one steady state. As grows sufficiently large to allow for switches between several steady states, this leads to longer lasting episodes, and noise is expressed as flipping between steady state solutions, rather than as fluctuations around each of these solutions. Interestingly, this ratio drops back again for even higher values of .
We propose that this is due to decreased global noise, since switching between states gets less frequent and episodes are longer and steadier. This is corroborated by an additional analysis that counts the switches between states during an individual stochastic simulation, following the definition of state boundaries in the episodicity measure. The distributions of switching events and episodicity for different degrees of nonlinearity are shown in figures S3 and S4. Figure 6. The signal to noise ratio of the solutions as a function of different settings for intrinsic and extrinsic noise in the stochastic simulations. A: The situation for , B: the situation for . Parametric noise is the value of for the added Gaussian noise in the numerical integration of the system, with varying in the interval . Event noise is defined as the maximum amplitude of activating and inhibiting events hitting the system, with varying in [0,25]. Signal to noise ratios were averaged for 10 simulations at each parameter setting. Figure 7. The “episodicity” as a function of intrinsic and extrinsic noise in the stochastic simulations. A: The situation for , B: the situation for . Parametric noise is the value of for the added Gaussian noise in the numerical integration of the system, with varying in the interval . Event noise is defined as the maximum amplitude of activating and inhibiting events hitting the system, with varying in [0,25]. Episodicity was averaged for 10 simulations at each parameter setting. Figure 8. Signal-to-noise ratios on the global scale (, analysed on the full time scale data) versus the signal-to-noise ratios on the scale of averaging windows (, analysed on moving windows with width 7 days applied successively on the full time scale data) for given values of n with event noise . For lower , noise is similar on the global and on the smaller scales. For values of , this ratio changes drastically towards lower noise on the smaller scale. 
This indicates that noise in the system is able to generate episodes (switches between behavioural activity states) rather than only fluctuations around one fixed point. Also, solutions are less noisy around a given steady state for values close below bifurcation than for values further below bifurcation. Data points refer to the average value of SNRs, obtained by 25 stochastic simulations per value of , giving a duration of 2 years per time course.

Our model captures important features known from bipolar disorder, such as transition between states of normal, high and low behavioural activation, and traces these back to nonlinear auto-regulation feedback in underlying control systems of behavioural activity and emotional regulation. Without exact knowledge as to what causes such behaviour mechanistically, we expect nonlinearity to be an inherent property of the control system, as is the case in most biological systems. Our model is able to reproduce typical time course evolution of normal and bipolar-type activity levels under the assumption that an increase in nonlinearity in the BAS renders an individual more prone to develop bipolar disorder. This hypothesis is further corroborated by showing that such an increase in nonlinearity is sufficient to explain slower recovery from rewarding or frustrating events in individuals with a large number of previous episodes. Memory of previous states is not inherent in our model; thus progression in nonlinearity does not appear automatically during simulations. Rather, the nonlinearity parameter is set as a fixed parameter value for every simulation. Our analysis of intrinsic noise with respect to parametric and event noise indicates that the positive auto-regulation of the BAS leads to an increase of noise in the system with increasing nonlinearity.
Further, our analysis indicates that there are distinct scales on which the effect of noise manifests itself, and, more importantly, that nonlinearity leads to shifts between these scales. Thus, while lower nonlinearity leads to small-scale fluctuations around steady states of behavioural activation, higher nonlinearity leads to noise on the larger scale of fluctuations between extreme and intermediate activation states, together with a decreased level of fluctuations around the respective steady states. In our stochastic simulations, we distinguished between parametric noise and event noise and find that our results are not critically dependent on the source of noise insofar as our measures of episodicity and SNR are concerned. While some mathematical models of positive feedback and the effects of noise on occurrences of episodes in recurrent affective disorders have already been undertaken [7]–[9], the role of nonlinearity has not been elucidated specifically in these studies. In our model, nonlinearity is inherent in the system, expressed by auto-activation of the BAS, which is a simplification in terms of the modelling process and reduces the number of involved parameters substantially. Nonlinearity not only triggers the onset of episodes by an increase of noise in the system, but also drives the system into multistability. In contrast to the model of Huber et al., illness progression does not inevitably lead to rapid cycling and chaotic behaviour. Rather, the increase of system noise could either lead to unstable behaviour by keeping the system close to bifurcation with rapid fluctuations of behavioural activation levels, mirroring typical dynamics of cyclothymia or rapid cycling; or to a more stable behaviour and temporal fixation of activation levels in any of the three attracting steady states with a remaining susceptibility to slower variation between the three fixed points, potentially corresponding to BD-I or BD-II.
This is in accordance with findings from empirical research that episode cycle lengths vary inversely with the total number of cycles [35] and that there is less rapid cycling in BD-I than in BD-II [36]. Although our model allows for the finding that a substantial proportion of individuals with cyclothymia progress to experience BD-I or BD-II [37], we are cautious about postulating that nonlinearity increases in the time domain for every individual, thereby leading to an orderly succession along the spectrum from cyclothymia via BD-II to BD-I. In fact, our model is limited to providing a description of the system dynamics at certain levels of nonlinearity, but does not make assumptions about how the evolution of nonlinearity is structured, whether it increases or decreases continuously or in jumps, such as in the manner of a biased or unbiased random walk. Further investigations of potential candidates for the functional nonlinear relationship within the BAS will be needed to elucidate this question. Most other mathematical models brought forward so far deal with oscillatory dynamics and therefore lack a description of how intermediate episodes or intermediate periods between bipolar episodes are possible, which is of importance for clinical research [38], [39]. Our model is able to reproduce this common phenomenon and also accounts for the finding that, while apparently cyclic in nature, episodes are not oscillatory [12], [13]. While some parameters in our model have been estimated due to lack of time-course data, some potential underpinnings of their meanings can be elucidated in the context of empirical findings. The parameter , mirroring the half activation of the BAS auto-activation feedback, relates to the relative onsets of the upper versus the lower branch in our bifurcation analysis with respect to the bifurcation parameter .
Thus, at higher levels of , increasing nonlinearity will introduce a switch from normal to low activity levels at bifurcation, whereas at lower a switch to high activity levels is more likely. This seems to relate to the finding that the type of the first episode (manic/hypomanic or depressive) predicts the predominant course of following episodes [35], [36], and suggests that variations in between individuals are able to capture these predominant trajectories. Our model incorporates and predicts varying degrees of nonlinearity in BAS regulation; however, its mechanistic basis remains unclear and needs further investigation. High levels of noise at degrees of nonlinearity that allow for multistability, yet also for the system to remain close to bifurcation, might correspond to a finding from BP-II individuals that inter-episode lability was higher than for unipolar depression samples [40]. Our model is also in line with findings which suggest that regulation of the BAS is impaired in individuals with Bipolar Disorder [15], [22]–[25]. Recent developments in the understanding of the pathophysiology of bipolar disorder, based on neuroimaging studies, point towards potential roles of feedback pathways in prefrontal cortical neural regions implicated in emotion regulation [41]. Among other subregions within this area, the medial prefrontal cortex (MPFC) and the orbitofrontal cortex (OFC) might play a role in the reward processing activities attributed to the BAS [42]–[44]. While we currently lack sufficiently detailed knowledge about functional connectivities and dynamics of such connectivities between respective pathways to allow for conclusions about the mechanistic basis of the postulated nonlinearity in our model, we expect this nonlinearity to be dependent on the neurophysiological basis of emotional regulation and dysregulation, which are commonly associated with bipolar disorder.
In a similar manner, the role of dopaminergic pathways that have been associated with the BAS [15], and the question of whether dopamine plays a role in putative increased ultrasensitivity due to high nonlinearity in the regulation of the BAS, lie outside the scope of our model and would require further research. In conclusion, we present a mathematical model to describe a spectrum of variation in behavioural activation regulation, parts of this spectrum corresponding to the presence of clinically-diagnosable Bipolar Disorder. A strength of this model is its ability to reflect patterns revealed by observational studies of Bipolar Disorder, including the apparent non-oscillatory nature of mood swings, increasing episodicity for subtypes of Bipolar Disorder that are further along the putative spectrum, and an association between initial episode type and subsequent course of the disorder. Furthermore, the model was developed and refined with direct reference to an existing set of data concerning behavioural engagement functioning amongst individuals with and without Bipolar Disorder. A further strength of the model could also be considered a limitation: at this point the precise biological and behavioural variables corresponding to its parameters are not determined, meaning that whilst the model has considerable potential for application to multiple levels of organisation, its explanatory power is limited. Future research should seek to test predictions about the behaviour of candidate variables corresponding to the parameters of the model. This could be in terms of fluctuations in symptoms in relation to dimensions such as BAS sensitivity, implementation of coping strategies, and the action of medications, but could also be in terms of the functioning of brain areas involved in heightening approach motivation in response to signals of potential reward, and in the inhibition of such activity.
Supporting Information

A steady state analysis showing production and decay terms in the system. Where production (solid lines) crosses degradation terms (dotted line), the system is at steady state. At higher degrees of nonlinearity (here at ), the system is tristable with three stable and two unstable steady states.

The terms of positive feedback (dashed lines) and negative feedback (solid line) as functions of behavioural engagement levels. Negative feedback gives a saturated function with direction towards a medium engagement ().

The distribution of episodicity for a set of 100 simulations for every value of the nonlinearity parameter . With increasing , median episodicity and episodicity variance first increase, then median episodicity falls again with variance remaining high for even higher .

The distribution of switching events per simulation for sets of 100 simulations at different values of the nonlinearity parameter . Switching events are defined as the times the average level of on the moving averaging window of 7 days shifts between high, medium and low states, defined as for the low state, for the medium state and for the high state. These state boundaries also form the basis of our episodicity measurement (see main text). Our analysis shows that as increases, the number of switching events first goes up and decreases again at values that lead deeper into the multistable regime.

Acknowledgments

We would like to thank Ozgur E. Akman, Natalia Lawrence, Mary Phillips and Orkun S. Soyer for fruitful discussions and useful comments on this manuscript. Also, our thanks go to the two anonymous reviewers for their helpful feedback.

Author Contributions

Conceived and designed the experiments: AS KAW. Performed the experiments: AS. Analyzed the data: AS. Contributed reagents/materials/analysis tools: AS KAW. Wrote the paper: AS KAW.
Length of the last edge when visiting points by nearest neighbor order

Take $n$ points uniformly in $[0,1] \times [0,1]$. Then pick uniformly $X_0$, one of these points, as your starting point. Then let $X_1$ be the nearest neighbor of $X_0$, let $X_2$ be the nearest neighbor (not yet visited) of $X_1$, and so on. What can be said of the asymptotics of $X_n - X_{n-1}$, the length of the last crossed edge? What about the length of the longest crossed edge? I stumbled upon this kind of model in the area of environmental statistics, where one tries to find clusters in a geographical dataset, but I am not sure if this question is interesting.

Edit: Many more questions could be asked about this model. If one makes the model dynamic by adding the points in $[0,1]^2$ sequentially, most of the time the addition of an extra point changes the path only locally, but from time to time it will have a big impact. Does the path converge locally? (I guess not.) How often do you see catastrophic modifications of the path? Finally, is there a way to find an interesting local spectrum in this object by renormalizing it and looking at the sizes of the edges? It is probably related to the local dimension of the counting measure of the uniform PPP on $[0,1]^2$. But as pointed out in the answer below, the first step would be to obtain the asymptotics of $L_n$, the total length.

Accepted answer: This is closely related to a nice open problem of David Aldous, from the list of open problems on his web site, some version of which in fact has quite a long history in the combinatorial optimization community. At the above link Aldous has references to existing knowledge about the problem. The state of the art is that the sum of all edge lengths is $O(n^{1/2})$ in expectation.

Comments:
- Thanks for pointing me to this! I had in the back of my mind that Aldous had surely considered a version of this problem but didn't think to check his open problem list. By the way, I imagine that $L_n$ is of order $n^{1/2}$ and not $n^{-1/2}$ ($L_n$ can't be smaller than the diameter of the cloud of points and therefore cannot tend to 0). I was also made aware of this work projecteuclid.org/… which seems relevant. – Julien Berestycki Nov 1 '11 at 16:40
- Sorry, you're right. That's what I get for commenting before my first coffee. I've corrected my response. Welcome to MO. – Louigi Addario-Berry Nov 1 '11 at 23:29
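As an aside to the thread, a small simulation makes the objects concrete. This is a hedged sketch (not from the answer): it builds the greedy nearest-neighbour tour on uniform points, so the last edge, the longest edge and the total length $L_n$ can be examined empirically.

```python
import numpy as np

rng = np.random.default_rng(0)

def nn_path_edges(points):
    """Greedy nearest-neighbour tour: start from point 0, repeatedly jump
    to the nearest unvisited point; return the list of edge lengths."""
    n = len(points)
    visited = np.zeros(n, dtype=bool)
    current = 0
    visited[current] = True
    edge_lengths = []
    for _ in range(n - 1):
        d = np.linalg.norm(points - points[current], axis=1)
        d[visited] = np.inf            # never revisit a point
        nxt = int(np.argmin(d))
        edge_lengths.append(d[nxt])
        visited[nxt] = True
        current = nxt
    return np.array(edge_lengths)

pts = rng.uniform(0.0, 1.0, size=(500, 2))
edges = nn_path_edges(pts)
print("total length L_n:", edges.sum())   # O(sqrt(n)) in expectation, per the answer
print("last edge:", edges[-1], "longest edge:", edges.max())
```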
How to calculate the inverse of the sum of two eigen-decomposed matrices

There are two eigen-decomposed matrices $A = U_1 V_1 U_1^H$ and $B = U_2 V_2 U_2^H$, in which $V_1$ and $V_2$ are the diagonal matrices formed by the non-negative eigenvalues, all less than 1, and $U_1$ and $U_2$ are unitary matrices formed by the eigenvectors. Is there any efficient way (including any efficient iterative solution) to calculate the following vector?

$y = (A + B + I)^{-1} x$

in which $I$ is the identity matrix, and $x$ could be an arbitrary vector. Thanks for any discussions.

Comments:
- Is $x$ given? Are $U, V$ unitary matrices? – Betrand Dec 6 '12 at 19:42
- You are right. $x$ is a given vector. $U_1$ and $U_2$ are the unitary matrices, while $V_1$ and $V_2$ are diagonal matrices, formed by the eigenvectors of $A$ and $B$ respectively. – Soup Dec 7 '12 at 6:42
- I think there is no general trick, unless $A$ or $B$ have very low rank or share a large eigenspace. There is much research going on on recycling subspaces in Krylov methods, but unfortunately there are no easy formulas giving the answers you want. – Federico Poloni Dec 7 '12 at 14:42
- What if $A$ or $B$ have very low rank (but do not share a large eigenspace)? Any solution for this scenario? Thanks. – Soup Dec 7 '12 at 17:32

Accepted answer: In general we cannot do much to exploit the eigendecompositions. But assuming that either $A$ or $B$ has low rank, we can exploit the situation. Let me outline the details below.

Let $A = UDU^\ast$ and $B = VLV^\ast$ be the decompositions. The question asks for a solution of $(I + A + B)y = x$. Consider therefore
$$A + B + I = U(D + U^\ast VLV^\ast U + I)U^\ast = U(D' + WLW^\ast)U^\ast,$$
so that using $\bar{y} = U^\ast y$, $\bar{x} = U^\ast x$, we may write the linear system as
$$(D' + WLW^\ast)\bar{y} = \bar{x}.$$
We can now obtain the solution $\bar{y}$ by inverting $(D' + WLW^\ast)$ using the matrix inversion lemma (SMW); this lemma applies because $D'$ is invertible, and assuming $B$ is low rank, we have $WLW^\ast = \sum_i l_i w_i w_i^\ast$, which can be exploited in the SMW formula.

Comments:
- So we only need to invert a matrix with size equal to rank($B$), right? This is a good solution. Thank you very much! – Soup Dec 11 '12 at 7:08
- Yes, $WLW^\ast = PCR$, where $P$ is $n\times k$, $C=I_k$, and $R$ is $k \times n$, when applying the SMW formula. – Suvrit Dec 11 '12 at 17:01
Digitize linear and (semi-)log scale graphs with multiple point sets
June 5, 2012 By Bart Rogiers
Working on a paper, I ran into the problem of needing data from a graph that was not mine, and for which no underlying table was published. With today's software packages, it is however not very difficult to digitize a figure yourself. I remembered reading something about it before (in a blog post and in the R Journal), and it turns out both had useful information. The R package to go for is 'digitize', of which you can find the publication in the R Journal, as well as a blog post on how to use it. You can install it the usual way:

install.packages('digitize')

I would now like to use this example from Gelhar et al. (1992), since I was actually looking at dispersivity data. The figure can be found online (it is quite a famous paper, and a PDF of the paper seems to be available). The figure gives longitudinal dispersivity as a function of scale, as obtained by a large number of authors. Now suppose we do our own experiments to determine dispersivity at a certain scale in a certain sediment. It would be very useful to compare the results to this compilation of literature values. This paper shows the data in a table, though, but this is not always the case. Especially for older papers, it might be difficult to retrieve the actual data, and this is where the digitize package comes in. When the graph shows several point sets (and you want to digitize them separately), and has one or two log-scale axes, the simple wrapper function at the bottom of this page will make the task at hand a lot easier!
The function arguments are the following: • name: Name of or path to the figure (has to be *.jpg; convert with GIMP if necessary) • x1,x2,y1,y2: Minimum and maximum values of the x and y axes • sets: Number of point sets you want to digitize separately (default 1) • setlabels: Labels of the different point sets (numbers by default) • log: Argument similar to the standard R plot argument for logarithmic axes (can take 'x','y' or 'xy') • xlab, ylab: Optional specification of the axes in the plot that is generated by the function The command I used: digitize.graph('gelhar.jpg',10E-1,10E5,10E-3,10E5,sets=3,setlabels=c('high','intermediate','low'),log='xy', xlab='Scale (m)',ylab='Longitudinal Dispersivity (m)') First you have to mark the 4 points on the axes, and then you can click on all points of the first point set, click finish, continue with the next, etc. The function returns a dataset with x and y coordinates and the labels corresponding to the different point sets. Easy to program, but very convenient! 
digitize.graph <- function(name, x1, x2, y1, y2, sets = 1, setlabels = 1:sets,
                           log = '', xlab = 'x axis', ylab = 'y axis') {
  dataset <- data.frame(x = NULL, y = NULL, lab = NULL)
  cat('Mark axes min and max values \n')
  axes.points <- ReadAndCal(name)
  if (log == 'x')  { x1 <- log10(x1); x2 <- log10(x2) }
  if (log == 'y')  { y1 <- log10(y1); y2 <- log10(y2) }
  if (log == 'xy') { x1 <- log10(x1); x2 <- log10(x2); y1 <- log10(y1); y2 <- log10(y2) }
  for (i in 1:sets) {
    cat(paste('Mark point set "', setlabels[i], '"\n', sep = ''))
    data.points <- DigitData(col = 'red')
    dat <- Calibrate(data.points, axes.points, x1, x2, y1, y2)
    dat$lab <- rep(setlabels[i], nrow(dat))
    dataset <- rbind(dat, dataset)
  }
  if (log == 'x')  { dataset$x <- 10^(dataset$x) }
  if (log == 'y')  { dataset$y <- 10^(dataset$y) }
  if (log == 'xy') { dataset$x <- 10^(dataset$x); dataset$y <- 10^(dataset$y) }
  legend('bottomright', setlabels, pch = 1:sets, col = 1:sets, bty = 'n')
  dataset  # return the digitized data with point-set labels
}
Portability: non-portable
Stability: experimental
Maintainer: sjoerd@w3future.com
Safe Haskell: Safe-Infered

Class of data structures with 3 type arguments that can be unfolded.

class Triunfoldable t where

Data structures with 3 type arguments (kind * -> * -> * -> *) that can be unfolded. For example, given a data type

data Tree a b c = Empty | Leaf a | Node (Tree a b c) b (Tree a b c)

a suitable instance would be

instance Triunfoldable Tree where
  triunfold fa fb fc = choose
    [ pure Empty
    , Leaf <$> fa
    , Node <$> triunfold fa fb fc <*> fb <*> triunfold fa fb fc
    ]

i.e. it follows closely the instance for Biunfoldable, but for 3 type arguments instead of 2.

triunfold :: Unfolder f => f a -> f b -> f c -> f (t a b c)

Given a way to generate elements, return a way to generate structures containing those elements.

triunfoldBF :: (Triunfoldable t, Unfolder f) => f a -> f b -> f c -> f (t a b c)

Breadth-first unfold, which orders the result by the number of choose calls.

Specific unfolds

triunfoldr :: Triunfoldable t => (d -> Maybe (a, d)) -> (d -> Maybe (b, d)) -> (d -> Maybe (c, d)) -> d -> Maybe (t a b c)

triunfoldr builds a data structure from a seed value.

fromLists :: Triunfoldable t => [a] -> [b] -> [c] -> Maybe (t a b c)

Create a data structure using the lists as input. This can fail because there might not be a data structure with the same number of element positions as the number of elements in the lists.
How the Software Treats Loop Openings
To obtain an open-loop transfer function from a model, you specify a loop opening. Loop openings affect only how the software recombines linearized blocks, not how the software linearizes each block. In other words, the software ignores openings when determining the input signal levels into each block, which influences how nonlinear blocks are linearized. Consider the following model, where you obtain the transfer function from e2 to y2, with the outer-loop open at y1: Here, k[1], k[2], g[1], and g[2] are nonlinear. The software linearizes each block at the specified operating point. At this stage, the software does not break the signal flow at y1. Therefore, the block linearizations include the effects of the inner-loop and outer-loop feedback signals. K[1], K[2], G[1], and G[2] are the linearized blocks. Finally, to compute the transfer function, the software enforces the loop opening at y1, injects an input signal at e2, and measures the output at y2. The software returns (I+G[2]K[2])^-1G[2]K[2] as the transfer function.
See Also addOpening | getCompSensitivity | getIOTransfer | getLoopTransfer | getSensitivity | linearize
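The returned expression is just the closed-form solution of the inner-loop equation y2 = G2 K2 (e2 - y2) with the outer loop broken at y1. A scalar sketch of that fixed point, with made-up static gains (the values below are illustrative, not from the documentation):

```python
# Illustrative scalar gains; |g2*k2| < 1 so the loop iteration contracts
g2, k2 = 0.5, 1.0
e2 = 1.0

# Iterate the inner-loop equation y2 = g2*k2*(e2 - y2) to its fixed point
y2 = 0.0
for _ in range(200):
    y2 = g2 * k2 * (e2 - y2)

# Closed form: (1 + G2*K2)^-1 * G2*K2 * e2, the scalar version of
# (I + G[2]K[2])^-1 G[2]K[2]
target = (g2 * k2) / (1.0 + g2 * k2) * e2
assert abs(y2 - target) < 1e-12
```

The iteration converges because the loop gain is a contraction here; the algebraic solution of the same equation is exactly the (I+G2K2)^-1 G2K2 form the software reports.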
Two cards are drawn without replacement from an ordinary deck of 52 playing cards. What are the odds against drawing a club and a diamond? Number of results: 69,830 Two cards are drawn without replacement from an ordinary deck of 52 playing cards. What is the probability that both cards are kings if the first card drawn was a king? I'm thinking it's 13/51 but not sure. Tuesday, December 7, 2010 at 9:40am by Rena two cards are drawn without replacement from an ordinary deck of 52 playing cards. what is the probability that both cards are kings if the first card drawn was a king? I was thinking it was 13/51 but not sure. Tuesday, December 7, 2010 at 9:46am by Rena Two cards are drawn without replacement from an ordinary deck of 52 playing cards. What is the probability that both are spades if the first card drawn was a spade? Wednesday, September 30, 2009 at 10:52pm by Aleah Two cards are drawn without replacement from an ordinary deck of 52 playing cards. What are the odds against drawing two red cards? Sunday, May 30, 2010 at 9:53pm by lisa Two cards are drawn without replacement from an ordinary deck of 52 playing cards. What is the probability that both are spades if the first card drawn was a spade? Answer: 12/51 ≈ 24%. Is this right? Monday, August 3, 2009 at 12:05pm by B.B. Math/ Probability Two cards are drawn without replacement from an ordinary deck of 52 playing cards. What is the probability that both are spades if the first card drawn was a spade? Wednesday, May 12, 2010 at 11:34pm by lisa I cannot seem to figure out this question. Can someone please help? Two cards are drawn from an ordinary deck of 52 playing cards with replacement. What is the probability that A) both cards are the same color? B) both cards are from the same suit? C) How would your answers ... Monday, August 3, 2009 at 4:01pm by B.B. Two cards are drawn without replacement from an ordinary deck of 52 playing cards. What are the odds against drawing a club and a diamond?
Sunday, October 11, 2009 at 6:50pm by Matt Two cards are drawn without replacement from an ordinary deck of 52 playing cards. What are the odds against drawing a club and a diamond? Monday, October 12, 2009 at 2:43pm by Matt Two cards are drawn without replacement from an ordinary deck of 52 playing cards. What are the odds against drawing a club and a diamond? Thursday, November 5, 2009 at 11:42am by Carli math 157 three cards are drawn without replacement from an ordinary deck of 52 playing cards. What is the probability that the third card is a spade if the first two cards were not spades? How do I even start? Friday, January 8, 2010 at 9:03pm by Rmz Two cards are drawn without replacement from an ordinary deck of 52 playing cards. What is the probability that both are spades if the first card drawn was a spade? *Did I do this right? 52 cards. 13 are spades. 1 picked is already a spade. That leaves 12 spades in the deck... Tuesday, August 4, 2009 at 1:09pm by Anonymous Five cards are drawn at random without replacement from an ordinary deck of 52. What is the probability all 5 cards are from the same suit? Tuesday, April 6, 2010 at 10:02pm by Kassie Basic Math I'm really stuck on this question; can someone please help me out with it? Two cards are drawn without replacement from an ordinary deck of 52 playing cards. What are the odds against drawing a club and a diamond? Sunday, August 9, 2009 at 8:17pm by Maria Math of Educators Two cards are drawn without replacement from an ordinary deck of 52 playing cards. What are the odds against drawing two red cards? ______ A) 25 : 77 B) 77 : 25 C) 3 : 1 D) 102 : 25 Sunday, May 30, 2010 at 10:44pm by lisa MATH Prob. Two cards are drawn without replacement from an ordinary deck of 52 playing cards.
What is the probability that the second card is a face card,if the first card was a queen? Saturday, August 8, 2009 at 1:52pm by Twg Two cards are drawn without replacement from an ordinary deck of 52 cards. What is the probability that the second card is a face card, if the first card was a queen? Would this be 11/51? Sunday, November 28, 2010 at 11:29am by m Three cards are drawn without replacement from an ordinary deck of 52 playing cards. What is the probability that the second and third cards are kings if the first card was not a king? Monday, December 7, 2009 at 12:26am by Brian Two cards are drawn without replacement from an ordinary deck of 52 playing cards. What is the probability that the second card is a spade if the first card was not a spade? Thursday, September 24, 2009 at 4:16pm by chandice Two cards are drawn without replacement from an ordinary deck of 52 playing cards. What is the probability that the second card is a number card, if the first card was a queen? The face cards are King, Queen, Jack. The number cards are ace – 10. Sunday, September 27, 2009 at 1:29pm by B 3. From a deck of 52 ordinary playing cards, two cards are drawn with replacement. Find the probability that both are hearts. Monday, August 3, 2009 at 4:01pm by Anonymous Three cards are drawn without replacement from an ordinary deck of 52 playing cards. What is the probability that the second and third cards are spades if the first card was not a spade? Answer: 36/ 52 18/26 9/13= 69%. Is this right? Thanks for the help. Monday, August 3, 2009 at 8:03am by B.B. Math check answer Two cards are drawn without replacement from an ordinary deck of 52 cards. What is the probability that the second card is a spade if the first card was not a spade ? 13/51 is that correct, do I need to reduce this number? Sunday, January 17, 2010 at 7:20pm by Christy Math check answer Two cards are drawn without replacement from an ordinary deck of 52 cards. 
What is the probability that the second card is a spade if the first card was not a spade? 13/51 is that correct, do I need to reduce this number? Sunday, January 17, 2010 at 7:20pm by Christy Math check answer Two cards are drawn without replacement from an ordinary deck of 52 cards. What is the probability that the second card is a spade if the first card was not a spade? 13/51 is that correct, do I need to reduce this number? Sunday, January 17, 2010 at 9:20pm by Christy From a standard deck of 52 cards, all of the clubs and the jack of diamonds are removed. Two cards are drawn at random from the altered deck without replacement. What is the probability that both cards will be of the same suit? Justify your answer, which should be a lowest ... Tuesday, April 5, 2011 at 10:26pm by nathon Math (Statistics) Two Cards are drawn without replacement from a deck of 52 cards. What are the odds in favor of drawing two honour cards (A,K,Q,J,10)? Sunday, April 21, 2013 at 11:47am by Josh Two cards are drawn without replacement from a well-shuffled deck of 52 cards. What is the probability that the second card drawn is a heart, if the first card drawn was not a heart? Wednesday, September 21, 2011 at 7:58pm by Chantel Two cards are drawn with replacement from an ordinary deck of 52 playing cards. What is the probability that the first card is a heart and the second card is a diamond? Wednesday, October 7, 2009 at 9:27pm by Gerry Grade 12 Data Management Two cards are drawn without replacement from a deck of 52 cards. a. What is the probability of drawing 2 aces? b. What are the odds in favour of drawing 2 honour cards?
(A, K, Q, J, 10) Wednesday, December 2, 2009 at 10:40pm by Nate Two cards are drawn without replacement from a deck of 52 cards. What is the probability that both are tens, if the first card was a ten? Sunday, November 28, 2010 at 11:30am by m Tree diagrams/ counting principles/ scp question: Two cards are drawn in succession and without replacement from a deck of 52 cards. find the following a. the number of ways in which we can obtain the ace of spades and the king of hearts, in that order. b. The total number of ... Thursday, April 15, 2010 at 1:55pm by Jennifer Five cards are drawn from an ordinary deck without replacement. Find the probability of getting a. All red cards b. All diamonds c. All aces Friday, April 29, 2011 at 10:34pm by Michelle Three cards are drawn without replacement from a well-shuffled deck of 52 playing cards. What is the probability that the third card drawn is a diamond? Thursday, November 15, 2012 at 7:44pm by eric Two cards are drawn at random from a standard deck of 52 cards without replacement. What is the probability of drawing a 7 and a king in that order? Wednesday, June 9, 2010 at 7:17pm by Tosha two cards are drawn without replacement from a 52-card deck. what are the odds against drawing a club and a diamond? can you walk me through this? thank you Friday, August 7, 2009 at 4:46pm by Diana Consider the experiment of drawing two cards without replacement from an ordinary deck of 52 playing cards. What are the odds against drawing two kings? Monday, September 28, 2009 at 10:35pm by B b. You are dealt 2 cards from a shuffled deck of 52 cards, without replacement. There are four suits of 13 cards each in a deck of cards; two of them are black and two of them are red. What is the probability that both cards are black? Round your answer to 3 decimal places Wednesday, September 8, 2010 at 11:27am by Anonymous Two cards are selected at random without replacement from a well-shuffled deck of 52 playing cards.
Find the probability of the given event. A pair is not drawn. Thursday, November 8, 2012 at 12:46pm by Sam In a deck of 52 cards there are 13 hearts. If 3 are drawn from the deck, without replacement, what is the probability that all 3 cards will be hearts? Sunday, February 26, 2012 at 11:23am by Jane A standard deck of cards has four different suits: hearts, diamonds, spades, and clubs. Each suit has 13 cards, making a total of 52. Two cards are drawn without replacement. What is the probability of drawing first a heart, and then a spade? Wednesday, May 29, 2013 at 6:32pm by Christina 2 cards are drawn without replacement from an ordinary deck of 52 playing cards. What is the probability that both are spades if the first card drawn was a spade? I think you subtract out the first spade making the probability 12/51 since you take off the first card already ... Monday, November 22, 2010 at 12:24pm by m From a standard deck of cards, two cards are randomly drawn, one after the other without replacement. The color of each card is noted. Which is less likely to occur: two red cards or a red card and a black card? Explain. Wednesday, September 28, 2011 at 7:52pm by TRACEY Two cards are drawn from an ordinary deck of cards, and the first is not replaced before the second is drawn. What is the probability that one card is an ace and the other is a king? Tuesday, April 2, 2013 at 5:12pm by Michelle AP Statistics A standard deck of cards consisting of 52 cards, 13 in each of 4 different suits, is shuffled, and 4 cards are drawn without replacement. What is the probability that all four cards are of a different suit? Wednesday, December 4, 2013 at 8:24pm by Emily
Friday, June 10, 2011 at 1:23pm by candy A standard deck of cards has had all the face cards (Jacks, queens, and kings) removed so that only the ace through ten of each suit remains. A game is played in which two cards are drawn (without replacement) from this deck and a six-sided die is rolled. For the purpose of ... Thursday, February 24, 2011 at 9:28pm by Joey Intro to Statistics Two cards are drawn at random from a standard 52-card deck of playing cards without replacement. Find the probability that the first card is a heart and the second is red. Sunday, April 17, 2011 at 1:30pm by Terry In a standard deck of playing cards there are 13 hearts. If 3 cards are drawn without replacement what's the probibility that all 3 cards will be hearts? Wednesday, October 27, 2010 at 10:22am by Alex MATH!! probability Two cards are drawn without replacement from a deck of 52 cards. Determine P(A and B) where A : the first card is a spade B: the second card is a face card please help!! Sunday, January 9, 2011 at 8:44pm by chris -Three cards are drawn from a standard deck of cards without replacement. what is the probability that all three cards will be clubs f. 3/4 g. 11/850 h. 1/3 i. 36/153 please help....!!!!! Monday, April 4, 2011 at 4:56pm by Emmalie Two cards are selected, one at a time from a standard deck of 52 cards. Let x represent the number of Jacks drawn in a set of 2 cards. (A) If this experiment is completed without replacement, explain why x is not a binomial random variable. (B) If this experiment is completed ... Thursday, October 4, 2012 at 11:04am by mom Two cards are drawn in succession without replacement from a standard deck of 52 cards. What is the probability that the first card is a spade given that the second card is a club? (Round your answer to three decimal places) Tuesday, October 16, 2012 at 4:24pm by paul Two cards are drawn in succession without replacement from a standard deck of 52 cards. 
What is the probability that the first card is a spade given that the second card is a club? (Round your answer to three decimal places.) Wednesday, October 17, 2012 at 11:10am by Diana Two cards are drawn in succession without replacement from a standard deck of 52 cards. What is the probability that the first card is a heart given that the second card is a diamond? (Round your answer to three decimal places.) Wednesday, November 28, 2012 at 4:38pm by Chandler Three cards are drawn without replacement from a well-shuffled standard deck of 52 playing cards. Find the probability that none is a face card. Wednesday, November 23, 2011 at 4:39pm by lisa finite math three cards are randomly drawn without replacement from a standard deck of 52 cards. What is the probability of drawing an ace on the third draw? Tuesday, November 13, 2012 at 6:00pm by ryan finite math three cards are randomly drawn without replacement from a standard deck of 52 cards. What is the probability of drawing an ace on the third draw? Tuesday, November 13, 2012 at 6:05pm by ryan Suppose 5 cards are drawn, without replacement, from a standard bridge deck of 52 cards. Find the probability of drawing 4 clubs and 1 non-club. Sunday, December 1, 2013 at 4:26pm by DD consider the experiment of selecting a card from an ordinary deck of 52 playing cards and determine the probability of the stated event. A card that is not a king and not a spade is drawn. I know there are 52 cards in a deck and there are 4 kings to a deck. And 13 spades in a ... Monday, March 15, 2010 at 4:13pm by alex 7 cards drawn from a deck without replacement what is the probability one is a spade? Wednesday, February 17, 2010 at 8:48pm by C.O. 1) let k and w be two consecutive integers such that k<x<w. If log base 7 of 143 = x, find the value of k+w 2) if 7 and -1 are two of the solutions for x in the equation 2x^3 +kx^2 -44x+w=0, find the value of k+w 3) from an ordinary deck of 52 cards, two cards are ...
Sunday, October 14, 2012 at 10:16pm by ANR A standard deck of cards contains 52 cards. Of these cards there are 13 of each type of suit (hearts, spades, clubs, diamonds) and 4 of each type of rank (A – K). Two cards are drawn from the deck and replaced each time back into the deck. What is the probability of drawing ... Friday, September 27, 2013 at 3:02pm by Alex Two cards are drawn in succession without replacement from a standard deck of 52 cards. What is the probability that the first card is a spade given that the second card is a club? (Round your answer to three decimal places Wondering why I keep getting 1/4? Tuesday, October 16, 2012 at 5:40pm by Paul Two cards are selected at random without replacement from a well-shuffled deck of 52 playing cards. Find the probability of the given event. (Round your answer to three decimal places.) find the probability that A pair is not drawn. Tuesday, November 8, 2011 at 3:48pm by Rico if 2 cards are drawn from 52-cards deck without replacement, how many different ways is it possible to obtain a king on the first draw and a king on the second? Sunday, April 3, 2011 at 9:29pm by Anonymous Two cards are chosen from a standard deck of 52 playing cards without replacement. What is the probability both cards will be an Ace Thursday, March 24, 2011 at 10:48pm by maria Two cards are drawn in succession without replacement from a standard deck of 52 cards. What is the probability that the first card is a face card (jack, queen, or king) given that the second card is an ace? (Round your answer to three decimal places.) Wednesday, October 17, 2012 at 11:13am by Diana college math Select Two Cards In a board game uses the deck of 20 cards shown. first roll has five birds 2 yellow, 3 red, second roll 5 lions same, third roll five frogs the same, fourth roll has five monkeys the same. Two cards are selected at random from this deck. Determine the ... 
Tuesday, January 17, 2012 at 9:46pm by kim math word problems I don't even know what this means. Please help Five cards are drawn in succession, without replacement, from a standard deck of 52 cards. How many sets of five cards are possible? A. 500 B. 12,994,800 C. 2,598,960 D. 433,160 Sunday, March 27, 2011 at 5:22pm by kacee Math 157 What are the odds in favor of drawing a spade and a heart without replacement from an ordinary deck of 52 playing cards? Wednesday, March 31, 2010 at 10:14pm by Tyga MATH Prob. You are dealt two cards successively without replacement from a standard deck of 52 playing cards. Find the probability that both cards are black. Saturday, August 8, 2009 at 1:53pm by Twg Three cards are drawn from a deck without replacement. Find these probabilities. a)all are jacks b)all are clubs c)all are red cards Friday, July 30, 2010 at 10:41pm by kathy Anne, Joe and Lynn, in this order, will take turns to draw two cards from a standard deck of 52 cards. Note that each draw of 2 cards is w/o replacement but draws of different turns are with replacement (the preceding two cards will be put back and the deck will be reshuffled... Monday, November 7, 2011 at 9:38am by Edward math - help please Assume that 2 cards are drawn in succession and without replacement from a standard deck of 52 cards. Find the probability that the following occurs: the second card is a 9, given that the first card was a 9. Saturday, July 6, 2013 at 1:28am by sarah Three cards are selected, one at a time from a standard deck of 52 cards. Let x represent the number of tens drawn in a set of 3 cards. (A) If this experiment is completed without replacement, explain why x is not a binomial random variable. (B) If this experiment is completed... Friday, October 21, 2011 at 3:00pm by Jen Three cards are selected, one at a time from a standard deck of 52 cards. Let x represent the number of tens drawn in a set of 3 cards. 
(A) If this experiment is completed without replacement, explain why x is not a binomial random variable. (B) If this experiment is completed... Sunday, October 23, 2011 at 1:02pm by Jennifer NEED HELP PLZ!!! STATS Three cards are selected, one at a time from a standard deck of 52 cards. Let x represent the number of tens drawn in a set of 3 cards. (A) If this experiment is completed without replacement, explain why x is not a binomial random variable. (B) If this experiment is completed... Sunday, October 23, 2011 at 11:48pm by Jennifer grade 12 data management Q- 2 cards are drawn without replacement from a deck of 52 cards. What is the probability that the both card are aces given that they are the same color? I am having a hard time with this question. If anyone knows this I would appreciate it very much thanks. Friday, November 19, 2010 at 4:22pm by mike Consider the experiment of drawing two cards without replacement from an ordinary deck of 52 playing cards. #1. What are the odds in favor of drawing a spade and a heart? Coin question... What are the odds in favor of getting at least one head in three successive flips of a coin? Tuesday, November 24, 2009 at 1:02pm by PD Probability math From a deck of 52 cards, 7 are drawn randomly without replacement. Let X and Y be the number of hearts and spades respectively . a) What is P(X > or = Y)? Tuesday, October 25, 2011 at 12:10pm by Jenna Two cards are drawn without replacement from a standard deck of 52 cards. Find the probability a) both cards are red ,b) both cards are the same color, c) the second card is a king given that the first card is a queen, d) the second card is the queen of hearts given that the ... Monday, November 26, 2012 at 8:35pm by Andrew Two cards are drawn without replacement from a standard deck of 52 cards. 
Find the probability a) both cards are red ,b) both cards are the same color, c) the second card is a king given that the first card is a queen, d) the second card is the queen of hearts given that the ... Monday, November 26, 2012 at 9:25pm by Katarzyna Two cards are drawn without replacement from a standard deck of 52 cards. Find the probability a) both cards are red ,b) both cards are the same color, c) the second card is a king given that the first card is a queen, d) the second card is the queen of hearts given that the ... Monday, November 26, 2012 at 10:02pm by Andrew Two cards are drawn without replacement from a standard deck of 52 cards. Find the probability a) both cards are red ,b) both cards are the same color, c) the second card is a king given that the first card is a queen, d) the second card is the queen of hearts given that the ... Monday, November 26, 2012 at 11:06pm by Andrzej Did I do this problem right? Consider the experiment of drawing two cards without replacement from an ordinary deck of playing cards. What are the odds in favor of drawing a spade or a heart? Answer: P(Spade)*P(Spade)=(13/52)x(13/51)=169/2652=0.06372. Is this right? Thanks. Thursday, August 13, 2009 at 12:02pm by B.B. A standard deck of 52 playing cards is composed of 4 suits. Each mix contains 13 cards. Jacks, Queens, and Kings are considered face cards. Assume three cards are selected without replacement. Calculate the probability of drawing two hearts, followed by a spade. Tuesday, March 26, 2013 at 7:59pm by Bill 8 cards are drawn from a standard deck of cards with replacement let X=no of diamonds observed find P(X=3) Wednesday, October 3, 2012 at 9:10am by virginia A card is drawn from an ordinary deck of 52 cards, and the result is recorded on paper. The card is then returned to the deck and another card is drawn and recorded. Find the probability that the following occurs: a) the first card is a diamond b) the second card is a diamond ... 
Sunday, July 7, 2013 at 10:37am by Annonymous

Consider the experiment of choosing the top 7 cards from a shuffled deck in two ways: (a) Without replacement. 1. What is the sample space? 2. What is the probability that you get only red cards? (b) With replacement. 1. What is the sample space? What is the probability ... Sunday, January 27, 2013 at 6:38pm by Laura

MATH 12
Two cards are drawn without replacement from a shuffled deck of 52 cards. Determine the probability of each event: a) the first card is a heart and the second is the queen of hearts b) the first card is the queen of hearts and the second is a heart Sunday, January 9, 2011 at 2:42pm by tina

math problem
Two cards are drawn at random (without replacement) from a regular deck of 52 cards. a) What is the probability that the first card is a heart and the second is red? b) What is the probability that the first card is red and the second is a heart? Tuesday, July 27, 2010 at 11:35am by sha

The red face cards and the black cards numbered 2-9 are put into a bag. Four cards are drawn at random without replacement. Find the following probabilities: a) All 4 cards are red b) 2 cards are red and two cards are black c) At least one of the cards is red d) All four ... Sunday, December 12, 2010 at 11:12am by oscar

I was wondering if someone could help me check my work on this problem and see if I'm correct. Consider the experiment of drawing two cards without replacement from an ordinary deck of 52 playing cards. What are the odds in favor of drawing a spade and a heart? There are 52 ... Saturday, November 28, 2009 at 9:26pm by Punkie

A deck of cards consists of 8 blue cards and 5 white cards. A simple random sample (random draws without replacement) of 6 cards is selected. What is the chance that one of the colors appears twice as many times as the other? Sunday, April 28, 2013 at 4:19am by Anonymous

Finite Mathematics
Two cards are drawn from a well-shuffled deck of 52 playing cards.
Let X denote the number of aces drawn. Find P(X = 2). Sunday, February 13, 2011 at 8:28pm by Dustin

Data Management
Two cards are drawn from a standard 52-card deck. a) What is the probability that both cards drawn are black? b) What is the probability that one card is red and the other is black? c) What is the probability that two face cards are drawn? d) What is the probability that two ... Wednesday, March 3, 2010 at 6:56pm by Jud

Two cards are drawn from a deck of cards. Once a card is drawn, it is not replaced. Find the probability of drawing a queen followed by a king. Sunday, January 27, 2013 at 9:03pm by Jesse
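Several of the threads above ask for the same without-replacement probabilities. As a sanity check (not part of any original post), here is a short Python sketch that computes two of the standard answers with exact fractions and verifies one of them by brute-force enumeration of all ordered two-card draws:

```python
from fractions import Fraction
from itertools import permutations

# Build a 52-card deck: 13 ranks in each of 4 suits.
ranks = ["A"] + [str(n) for n in range(2, 11)] + ["J", "Q", "K"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = [(rank, suit) for rank in ranks for suit in suits]
assert len(deck) == 52

# Two cards drawn WITHOUT replacement:
# P(both red) = 26/52 * 25/51 -- 26 of the 52 cards are red, and after
# one red card is removed, 25 of the remaining 51 are red.
p_both_red = Fraction(26, 52) * Fraction(25, 51)

# P(queen first, then king) = 4/52 * 4/51 (all four kings remain
# in the deck after a queen is removed).
p_queen_then_king = Fraction(4, 52) * Fraction(4, 51)

# Brute-force check of P(both red) over all ordered two-card draws.
red_suits = {"hearts", "diamonds"}
draws = list(permutations(deck, 2))  # 52 * 51 ordered pairs
both_red = sum(1 for a, b in draws
               if a[1] in red_suits and b[1] in red_suits)
assert Fraction(both_red, len(draws)) == p_both_red

print(p_both_red)          # 25/102
print(p_queen_then_king)   # 4/663
```

Using `Fraction` instead of floats keeps the answers exact, so results like 25/102 can be compared directly against hand computations such as the ones posted in the threads.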
Sir Michael Francis Atiyah
Sir Michael Francis Atiyah, (born April 22, 1929, London, England), British mathematician who was awarded the Fields Medal in 1966 primarily for his work in topology. Atiyah received a knighthood in 1983 and the Order of Merit in 1992. He also served as president of the Royal Society (1990–95).
Atiyah's father was Lebanese and his mother Scottish. He attended Victoria College in Egypt and Trinity College, Cambridge (Ph.D., 1955). He held appointments at the Institute for Advanced Study, Princeton, New Jersey, U.S. (1955), and at the University of Cambridge (1956–61). In 1961 Atiyah moved to the University of Oxford, where from 1963 to 1969 he held the Savilian Chair of Geometry. He returned to the Institute in 1969 before becoming the Royal Society Research Professor at Oxford in 1972. In 1990 Atiyah became master of Trinity College and director of the Isaac Newton Institute for Mathematical Sciences, both at Cambridge; he retired from the latter position in 1996.
Atiyah was awarded the Fields Medal at the International Congress of Mathematicians in Moscow in 1966 for his work on topology and analysis. He was one of the pioneers, along with the Frenchman Alexandre Grothendieck and the German Friedrich Hirzebruch, in the development of K-theory, culminating in 1963, in collaboration with the American Isadore Singer, in the famous Atiyah-Singer index theorem, which characterizes the number of solutions for an elliptic differential equation. (Atiyah and Singer were jointly recognized for this work with the 2004 Abel Prize.) His early work in topology and algebra was followed by work in a number of different fields, a phenomenon regularly observed in Fields medalists.
He contributed, along with others, to the development of the theory of complex manifolds—i.e., generalizations of Riemann surfaces to several variables. He also worked on algebraic topology, algebraic varieties, complex analysis, the Yang-Mills equations and gauge theory, and superstring theory in mathematical physics. Atiyah’s publications include K-theory (1967); with I.G. Macdonald, Introduction to Commutative Algebra (1969); Elliptic Operators and Compact Groups (1974); Geometry of Yang-Mills Fields (1979); with Nigel Hitchin, The Geometry and Dynamics of Magnetic Monopoles (1988); and The Geometry and Physics of Knots (1990). His Collected Works, in five volumes, appeared in 1988.
The Data of Macroeconomics

1. The Data of Macroeconomics
Notes: This PowerPoint chapter contains in-class exercises requiring students to have calculators. To help motivate the chapter, it may be helpful to remind the students that much of macroeconomics (and this book) is devoted to understanding the behavior of aggregate output, prices, and unemployment. Much of Chapter 2 will be familiar to students who have taken an introductory economics course. Therefore, you might consider going over Chapter 2 fairly quickly. This would allow more class time for the subsequent chapters, which are more challenging. Instructors who wish to shorten the presentation might consider omitting:
- a couple of slides on GNP vs. GDP
- a slide on chain-weighted real GDP vs. constant-dollar real GDP
- some of the in-class exercises (though I suggest you ask your students to try them within 8 hours of the lecture, to reinforce the concepts while the material is still fresh in their memory)
- the slides on stocks vs. flows (subsequent chapters do not refer to these concepts very much)
There are hidden slides you may want to "unhide." They show that the GDP deflator and CPI are, indeed, weighted averages of prices. If your students are comfortable with algebra, then this material might be helpful. However, it's a bit technical, and doesn't appear in the textbook, so I've hidden these slides; they won't appear in the presentation unless you intentionally "unhide" them.
2. CHAPTER 2: The Data of Macroeconomics
In this chapter, you will learn the meaning and measurement of the most important macroeconomic statistics:
- Gross Domestic Product (GDP)
- the Consumer Price Index (CPI)
- the unemployment rate
Notes: These are three of the most important economic statistics. Policymakers and businesspersons use them to monitor the economy and formulate appropriate policies. Economists use them to develop and test theories about how the economy works. Because we'll be learning many of these theories, it's worth spending some time now to really understand what these statistics mean and how they are measured.

3. Gross Domestic Product: Expenditure and Income
Two definitions:
- Total expenditure on domestically produced final goods and services.
- Total income earned by domestically located factors of production.
Notes: Most students, having taken principles of economics, will have seen this definition and be familiar with it. It's not worth spending a lot of time on. It might be worthwhile, however, to briefly review the factors of production.

4. The Circular Flow

5. Value added
Definition: A firm's value added is the value of its output minus the value of the intermediate goods the firm used to produce that output.
Notes: It might be useful here to remind students what "intermediate goods" are.

6. Exercise (Problem 2, p. 40)
A farmer grows a bushel of wheat and sells it to a miller for $1.00. The miller turns the wheat into flour and sells it to a baker for $3.00. The baker uses the flour to make a loaf of bread and sells it to an engineer for $6.00. The engineer eats the bread.
Compute and compare value added at each stage of production and GDP.
Notes: When students compute GDP, they should assume that these are the only transactions in the economy. Lessons of this problem: GDP = value of final goods = sum of value added at all stages of production. We don't include the value of intermediate goods in GDP because their value is already embodied in the value of the final goods. Answer: each person's value added (VA) equals the value of what he or she produced minus the value of the intermediate inputs he or she started with. Farmer's VA = $1; miller's VA = $3 - $1 = $2; baker's VA = $6 - $3 = $3; GDP = $6. Note that GDP = value of the final good = sum of value added at all stages of production. Even though this problem is highly simplified, its main lesson holds in the real world: the value of all final goods produced equals the sum of value added at all stages of production of all goods.

7. Final goods, value added, and GDP
GDP = value of final goods produced = sum of value added at all stages of production.
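The wheat-flour-bread exercise can be checked in a few lines. This is a minimal sketch of the same accounting (the tuple layout is just one convenient representation, not from the slides):

```python
# Each production stage: (producer, sale price, cost of intermediate inputs).
stages = [
    ("farmer", 1.00, 0.00),  # grows wheat, sells it for $1.00
    ("miller", 3.00, 1.00),  # buys wheat for $1.00, sells flour for $3.00
    ("baker",  6.00, 3.00),  # buys flour for $3.00, sells bread for $6.00
]

# Value added at each stage = sale price minus intermediate-input cost.
value_added = {name: price - inputs for name, price, inputs in stages}
print(value_added)  # {'farmer': 1.0, 'miller': 2.0, 'baker': 3.0}

# GDP counts only the final good: the $6.00 loaf of bread. That equals
# the sum of value added at all stages -- no double-counting.
gdp_final_goods = 6.00
gdp_value_added = sum(value_added.values())
assert gdp_final_goods == gdp_value_added == 6.00
```

The final assertion is the point of the slide: measuring GDP by final goods and measuring it by summing value added give the same total.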
The value of the final goods already includes the value of the intermediate goods, so including both intermediate and final goods in GDP would be double-counting.

8. The expenditure components of GDP
- consumption
- investment
- government spending
- net exports

9. Consumption (C)
- durable goods: last a long time (ex: cars, home appliances)
- nondurable goods: last a short time (ex: food, clothing)
- services: work done for consumers (ex: dry cleaning, air travel)
Notes: A consumer's spending on a new house counts under investment, not consumption. More on this in a few moments, when we get to Investment. A tenant's spending on rent counts under services; rent is considered spending on "housing services." So what happens if a renter buys the house she had been renting? Conceptually, consumption should remain unchanged: just because she is no longer paying rent, she is still consuming the same housing services as before. In national income accounting, (the services category of) consumption includes the imputed rental value of owner-occupied housing. To help students keep all this straight, you might suggest that they think of a house as a piece of capital which is used to produce a consumer service, which we might call "housing services." Thus, spending on the house counts in "investment," and the value of the housing services that the house provides counts under "consumption" (regardless of whether the housing services are being consumed by the owner of the house or a tenant).

10. U.S. consumption, 2005
Notes: source: Bureau of Economic Analysis, U.S. Department of Commerce, http://www.bea.gov

11. Investment (I)
Definition 1: Spending on [the factor of production] capital.
Definition 2: Spending on goods bought for future use.
Includes:
- business fixed investment: spending on plant and equipment that firms will use to produce other goods and services
- residential fixed investment: spending on housing units by consumers and landlords
- inventory investment: the change in the value of all firms' inventories
Notes: In definition #1, note that aggregate investment equals total spending on newly produced capital goods. (If I pay $1000 for a used computer for my business, then I'm doing $1000 of investment, but the person who sold it to me is doing $1000 of disinvestment, so there is no net impact on aggregate investment.) The housing issue: as in the notes to slide 9, think of a house as capital that produces "housing services"; spending on the house counts under investment, while the value of the housing services it provides counts under consumption. Inventories: if total inventories are $10 billion at the beginning of the year and $12 billion at the end, then inventory investment equals $2 billion for the year. Note that inventory investment can be negative (which means inventories fell over the year).

12. U.S. investment, 2005
Notes: source: Bureau of Economic Analysis, U.S. Department of Commerce, http://www.bea.gov

13. Investment vs. Capital
Note: Investment is spending on new capital.
Example (assumes no depreciation):
- 1/1/2006: economy has $500b worth of capital
- during 2006: investment = $60b
- 1/1/2007: economy will have $560b worth of capital
Notes: If you teach the stocks vs. flows concepts, this is a good example of the difference.

14. Stocks vs. Flows
A flow is a quantity measured per unit of time. E.g., "U.S. investment was $2.5 trillion during 2006."
Notes: The bathtub example is the classic means of explaining stocks and flows, and appears in Chapter 2.

15. Stocks vs. Flows: examples
Notes: Point out that a specific quantity of a flow variable only makes sense if you know the size of the time unit. If someone tells you her salary is $5000 but does not say whether it is "per month"
or "per year" or otherwise, then you'd have no idea what her salary really is. A pitfall with flow variables is that many of them have a very standard time unit (e.g., per year). Therefore, people often omit the time unit: "John's salary is $50,000." And omitting the time unit makes it easy to forget that John's salary is a flow variable, not a stock. Another point: it is often the case that a flow variable measures the rate of change in a corresponding stock variable, as the examples on this slide (and the investment/capital example) make clear.

16. Now you try: Stock or flow?
- the balance on your credit card statement
- how much you study economics outside of class
- the size of your compact disc collection
- the inflation rate
- the unemployment rate
Notes: You can use this slide to get some class participation. I suggest you display the entire slide, give students a few moments to formulate their answers, and then ask for volunteers. Doing so results in wider participation than if you ask for someone to volunteer the answer immediately after displaying each item on the list. Here are the answers, with explanations. The balance on your credit card statement is a stock. (A corresponding flow would be the amount of new purchases on your credit card statement.) How much you study is a flow: the statement "I study 10 hours" is only meaningful if we know the time period, whether 10 hours per day, per week, per month, etc. The size of your compact disc collection is a stock. (A corresponding flow would be how many CDs you buy per month.) The inflation rate is a flow: we say "prices are increasing by 3.2% per year" or "by 0.4% per month." The unemployment rate is a stock: it's the number of unemployed people divided by the number of people in the workforce. In contrast, the number of newly unemployed people per month would be a flow. Note: students have not yet seen official definitions of the inflation and unemployment rates. However, it is likely they are familiar with these terms, either from their introductory economics course or from reading the newspaper. Note: the stocks vs. flows concept is not mentioned very much in the subsequent chapters. If you do not want your students to forget it, then a good idea would be the following: as subsequent chapters introduce new variables, ask students whether each new variable is a stock or a flow.

17. Government spending (G)
G includes all government spending on goods and services. G excludes transfer payments (e.g., unemployment insurance payments), because they do not represent spending on goods and services.
Notes: Transfer payments are included in "government outlays," but not in government spending. People who receive transfer payments use these funds to pay for their consumption. Thus, we avoid double-counting by excluding transfer payments from G.

18. U.S. government spending, 2005
Notes: source: Bureau of Economic Analysis, U.S. Department of Commerce, http://www.bea.gov

19. Net exports: NX = EX - IM
Definition: the value of total exports (EX) minus the value of total imports (IM).
Notes: source: FRED Database, The Federal Reserve Bank of St.
Louis, http://research.stlouisfed.org/ fred2/ Before showing the data graph, the following explanation might be helpful: Remember, GDP is the value of spending on our country?s output of goods & services. Exports represent foreign spending on our country?s output, so we include exports. Imports represent the portion of domestic spending (C, I, and G) that goes to foreign goods and services, so we subtract off imports. NX, therefore, equals net spending by the foreign sector on domestically produced goods & services. source: FRED Database, The Federal Reserve Bank of St. Louis, http://research.stlouisfed.org/fred2/ Before showing the data graph, the following explanation might be helpful: Remember, GDP is the value of spending on our country?s output of goods & services. Exports represent foreign spending on our country?s output, so we include exports. Imports represent the portion of domestic spending (C, I, and G) that goes to foreign goods and services, so we subtract off imports. NX, therefore, equals net spending by the foreign sector on domestically produced goods & services. 20. CHAPTER 2 The Data of Macroeconomics An important identity Y = C + I + G + NX A few slides ago, we defined GDP as the total expenditure on the economy?s output of goods and services (as well as total income). We can also define GDP as (the value of) aggregate output, not just spending on output. An identity is an equation that always holds because of the way the variables are defined. A few slides ago, we defined GDP as the total expenditure on the economy?s output of goods and services (as well as total income). We can also define GDP as (the value of) aggregate output, not just spending on output. An identity is an equation that always holds because of the way the variables are defined. 21. CHAPTER 2 The Data of Macroeconomics A question for you: Suppose a firm produces $10 million worth of final goods but only sells $9 million worth. 
Does this violate the expenditure = output identity? If you do not wish to pose this as a question, you can 'hide' this slide and skip right to the next one, which simply gives students the information. Suggestion (applies generally, not just here): When you pose a question like this to your class, don't ask students to volunteer their answers right away. Instead, tell them to think about it for a minute and write their answer down on paper. Then, ask for volunteers (or call on students at random). Giving students this extra minute will increase the quality of participation as well as the number of students who participate. Correct answer to the question: Unsold output adds to inventory, and thus counts as inventory investment, whether intentional or unplanned. Thus, it's as if the firm 'purchased' its own inventory accumulation. Here's where the 'goods purchased for future use' definition of investment is handy: When firms add newly produced goods to their inventory, the 'future use' of those goods, of course, is future sales. Note, also, that inventory investment counts intentional as well as unplanned inventory changes. Thus, when firms sell fewer units than planned, the unsold units go into inventory and are counted as inventory investment. This explains why 'output = expenditure': the value of unsold output is counted under inventory investment, just as if the firm 'purchased' its own output. Remember, the definition of investment is goods bought for future use. With inventory investment, that future use is to give the firm the ability in the future to sell more than its output. 22. CHAPTER 2 The Data of Macroeconomics Why output = expenditure Unsold output goes into inventory, and is counted as 'inventory investment', whether or not the inventory buildup was intentional. In effect, we are assuming that firms purchase their unsold output. 23. CHAPTER 2 The Data of Macroeconomics GDP: An important and versatile concept We have now seen that GDP measures total income, total output, total expenditure, and the sum of value-added at all stages in the production of final goods. This is why economists often use the terms income, output, expenditure, and GDP interchangeably. 24. CHAPTER 2 The Data of Macroeconomics GNP vs.
GDP Gross National Product (GNP): Total income earned by the nation's factors of production, regardless of where located. Gross Domestic Product (GDP): Total income earned by domestically located factors of production, regardless of nationality. (GNP − GDP) = (factor payments from abroad) − (factor payments to abroad) Emphasize that the difference between GDP and GNP boils down to two things: location of the economic activity, and ownership (domestic vs. foreign) of the factors of production. From the perspective of the U.S., factor payments from abroad include things like wages earned by U.S. citizens working abroad, profits earned by U.S.-owned businesses located abroad, and income (interest, dividends, rent, etc.) generated from the foreign assets owned by U.S. citizens. Factor payments to abroad include things like wages earned by foreign workers in the U.S., profits earned by foreign-owned businesses located in the U.S., and income (interest, dividends, rent, etc.) that foreigners earn on U.S. assets. Chapter 3 introduces factor markets and factor prices. Unless you've already covered that material, it might be worth mentioning to your students that factor payments are simply payments to the factors of production, for example, the wages earned by labor.
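The expenditure identity Y = C + I + G + NX and the GNP-GDP relationship above can be sketched numerically. This is a minimal illustration, not national accounts data: every figure below is hypothetical, chosen only so the arithmetic is easy to follow.

```python
# Sketch of the expenditure identity Y = C + I + G + NX and the
# GNP-GDP relationship. All figures are hypothetical (in $ billions).

def gdp_expenditure(c, i, g, exports, imports):
    """GDP as total expenditure: consumption + investment +
    government purchases + net exports (exports - imports)."""
    nx = exports - imports
    return c + i + g + nx

def gnp_from_gdp(gdp, factor_payments_from_abroad, factor_payments_to_abroad):
    """GNP = GDP + (factor payments from abroad) - (factor payments to abroad)."""
    return gdp + factor_payments_from_abroad - factor_payments_to_abroad

# Hypothetical component values, not actual data:
gdp = gdp_expenditure(c=700, i=150, g=200, exports=120, imports=170)
print(gdp)  # 1000: 700 + 150 + 200 + (120 - 170)

gnp = gnp_from_gdp(gdp, factor_payments_from_abroad=40, factor_payments_to_abroad=25)
print(gnp)  # 1015: GNP > GDP when net factor payments from abroad are positive
```

Note that a positive NX raises GDP while positive net factor payments from abroad raise GNP relative to GDP, which is exactly the Kuwait-style case discussed on the following slides.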
25. CHAPTER 2 The Data of Macroeconomics Discussion question: In your country, which would you want to be bigger, GDP or GNP? Why? This issue is subjective, and the question is intended to get students to think a little deeper about the difference between GNP and GDP. Of course, there is no single correct answer. Some students offer this response: It's better to have GNP > GDP, because it means our nation's income is greater than the value of what we are producing domestically. If, instead, GDP > GNP, then a portion of the income generated in our country is going to people in other countries, so there's less income left over for us to enjoy. 26. CHAPTER 2 The Data of Macroeconomics (GNP − GDP) as a percentage of GDP, selected countries, 2002 How to interpret the numbers in this table: In Canada, GNP is 1.9% smaller than GDP. This sounds like a tiny number, but it means that about 2% of all the income generated in Canada is taken away and paid to foreigners. In Angola, about 14% of the value of domestic production is paid to foreigners. Kuwait's GNP is 9.5% bigger than its GDP.
This means that the income earned by the citizens of Kuwait is 9.5% larger than the value of production occurring within Kuwait's borders. Teaching suggestion: Point out a few countries with positive numbers. Ask your students to take a moment to think of possible reasons why GNP might exceed GDP in a country, and write them down. Point out a few countries with negative numbers. Ask your students to take a moment to think of possible reasons why a country's GDP might be bigger than its GNP, and write them down. After students have had a chance to think of some reasons, ask for volunteers. (Better yet, have them pair up and compare answers with a classmate before volunteering their answers to the class.) Reasons why GNP may exceed GDP: The country has done a lot of lending or investment overseas and is earning lots of income from these foreign investments (income on nationally owned capital located abroad). Take Kuwait. This tiny country earns (from oil revenue) more than it spends; the difference is invested (in the layperson's sense of the term investment) in foreign assets, such as stocks and real estate. Thus, Kuwait owns a lot of capital located abroad that generates income. This income comes back to Kuwait, making its GNP bigger than its GDP. Also, a significant number of citizens may have left the country to work overseas (their income is counted in GNP, not GDP). Reasons why GDP may exceed GNP: The country has done a lot of borrowing from abroad, or foreigners have done a lot of investment in the country (income earned by foreign-owned, domestically located capital); this is most likely why Mexico's GDP > GNP. Or the country has a large immigrant labor force. 27. CHAPTER 2 The Data of Macroeconomics Real vs. nominal GDP GDP is the value of all final goods and services produced. Nominal GDP measures these values using current prices. Real GDP measures these values using the prices of a base year. 28.
CHAPTER 2 The Data of Macroeconomics Practice problem, part 1 Compute nominal GDP in each year. Compute real GDP in each year using 2006 as the base year. This slide (and a few of the following ones) contains exercises that you can have your students do in class for immediate reinforcement of the material. This problem requires calculators. If most of your students do not have calculators, you might 'hide' this slide and instead pass out a printout of it for a homework exercise. Or: just have students write down the expressions that they would enter into a calculator if they had calculators, i.e., nominal GDP in 2006 = 30*900 + 100*192. 29. CHAPTER 2 The Data of Macroeconomics Answers to practice problem, part 1 Nominal GDP (multiply Ps & Qs from the same year): 2006: $46,200 = $30 × 900 + $100 × 192; 2007: $51,400; 2008: $58,300. Real GDP (multiply each year's Qs by 2006 Ps): 2006: $46,200; 2007: $50,000; 2008: $52,000 = $30 × 1050 + $100 × 205. 30. CHAPTER 2 The Data of Macroeconomics Real GDP controls for inflation Changes in nominal GDP can be due to: changes in prices, or changes in quantities of output produced. Changes in real GDP can only be due to changes in quantities, because real GDP is constructed using constant base-year prices. Suppose from 2006 to 2007, nominal GDP rises by 10%.
Some of this growth could be due to price increases, because an increase in the price of output causes an increase in the value of output, even if the real quantity remains the same. Hence, to control for inflation, we use real GDP. Remember, real GDP is the value of output using constant base-year prices. If real GDP grows by 6% from 2006 to 2007, we can be sure that all of this growth is due to an increase in the economy's actual production of goods and services, because the same prices are used to construct real GDP in 2006 and 2007. 31. CHAPTER 2 The Data of Macroeconomics U.S. Nominal and Real GDP, 1950–2006 Source: http://research.stlouisfed.org/fred2/ Notice that the brown line (nominal GDP) is steeper than the blue line. That's because prices generally rise over time, so nominal GDP grows at a faster rate than real GDP. You might ask students what the significance is of the two lines crossing in 2000. Answer: 2000 is the base year for this real GDP data, so RGDP = NGDP in 2000 only. Before 2000, RGDP > NGDP, while after 2000, RGDP < NGDP. This is intuitive if you think about it for a minute: Take 1970. When the economy's output of 1970 is measured in the (then) current prices, GDP is about $1 trillion. Between 1970 and 2000, most prices have risen.
Hence, if you value the country's 1970 output using the higher year-2000 prices (to get real GDP), you get a bigger value than if you measure 1970's output using 1970 prices (nominal GDP). This explains why real GDP is larger than nominal GDP in 1970 (as in most or all years before the base year). 32. CHAPTER 2 The Data of Macroeconomics GDP Deflator The inflation rate is the percentage increase in the overall level of prices. One measure of the price level is the GDP deflator, defined as 100 × (nominal GDP)/(real GDP). After revealing the first bullet point, mention that there are several different measures of the overall price level. Your students are probably familiar with one of them, the Consumer Price Index, which will be covered shortly. For now, though, we learn about a different one <reveal next bullet point>, the GDP deflator. The GDP deflator is so named because it is used to 'deflate' (remove the effects of inflation from) GDP and other economic variables.
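The nominal GDP, real GDP, and deflator calculations from the practice problem can be sketched in code. The prices and quantities below are the ones given on the answer slides; the good labels "A" and "B" are placeholders, since the slides do not name the two goods.

```python
# Nominal GDP multiplies prices and quantities from the same year; real GDP
# multiplies each year's quantities by base-year (2006) prices. Figures are
# from the slides' practice problem; good labels "A"/"B" are placeholders.

prices_2006 = {"A": 30, "B": 100}
quantities = {
    2006: {"A": 900, "B": 192},
    2008: {"A": 1050, "B": 205},
}

def value(prices, qty):
    """Total value of output: sum of price * quantity over all goods."""
    return sum(prices[g] * qty[g] for g in qty)

nominal_2006 = value(prices_2006, quantities[2006])
real_2008 = value(prices_2006, quantities[2008])
print(nominal_2006)  # 46200, matching the answer slide
print(real_2008)     # 52000 = 30*1050 + 100*205

# GDP deflator = 100 * (nominal GDP)/(real GDP). Using the 2007 nominal and
# real figures given on the answer slide ($51,400 and $50,000):
deflator_2007 = 100 * 51400 / 50000
print(round(deflator_2007, 1))  # 102.8
```

In the base year the deflator is 100 by construction (nominal and real GDP coincide), which is the same point made about the crossing lines in the FRED graph.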
33. CHAPTER 2 The Data of Macroeconomics Practice problem, part 2 Use your previous answers to compute the GDP deflator in each year. Use the GDP deflator to compute the inflation rate from 2006 to 2007, and from 2007 to 2008. 34. CHAPTER 2 The Data of Macroeconomics Answers to practice problem, part 2 35. CHAPTER 2 The Data of Macroeconomics Understanding the GDP deflator This slide and the next one use simple algebra to show that the GDP deflator is a weighted average of prices; the weight on each price reflects that good's relative importance in real GDP. This material is not in the textbook, so I have 'hidden' this slide; it will not automatically display when viewing this PowerPoint presentation in Slide Show mode. If you wish to include this material, please 'unhide' this slide and the next one, by unselecting 'Hide Slide' on the Slide Show drop-down menu. 36. CHAPTER 2 The Data of Macroeconomics Understanding the GDP deflator (I have omitted the '100 ×'
from the formula for the GDP deflator so that this slide remains legible.) The formula for the GDP deflator is 100*NGDP/RGDP. It's not obvious to most students that this is a measure of the average level of prices. But, using some simple algebra, this slide shows that the GDP deflator really is a weighted average of prices. Note: Because the weights don't all sum to 1, the GDP deflator is a weighted sum, not a weighted average. 37. CHAPTER 2 The Data of Macroeconomics Two arithmetic tricks for working with percentage changes EX: If your hourly wage rises 5% and you work 7% more hours, then your wage income rises approximately 12%. These handy arithmetic tricks will be useful in many different contexts later in this book. For example, in the Quantity Theory of Money in Chapter 4, they help us understand how the Quantity Equation, MV = PY, gives us a relation between the rates of inflation, money growth, and GDP growth. The example on this slide uses wage income = (hourly wage) × (number of hours worked). Another example would be revenue = price × quantity. Students will see many more examples later in the textbook.
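The two tricks, for a product the percentage changes approximately add, and (as the next slide uses) for a ratio they approximately subtract, can be checked against the exact growth factors. A minimal sketch using the slides' own examples:

```python
# Approximation tricks: %change of x*y ~ %change(x) + %change(y);
# %change of x/y ~ %change(x) - %change(y). Exact growth factors show
# how close the approximation is for small changes.

def pct_change_product(gx, gy):
    """Approximate % change of a product x*y, given % changes gx and gy."""
    return gx + gy

def pct_change_ratio(gx, gy):
    """Approximate % change of a ratio x/y, given % changes gx and gy."""
    return gx - gy

# Wage example from the slide: hourly wage +5%, hours +7%.
approx = pct_change_product(5, 7)        # 12
exact = (1.05 * 1.07 - 1) * 100          # 12.35 exactly
print(approx, round(exact, 2))

# Deflator example: NGDP +9%, RGDP +4%, so inflation is roughly 5%.
approx_inf = pct_change_ratio(9, 4)      # 5
exact_inf = (1.09 / 1.04 - 1) * 100      # about 4.81
print(approx_inf, round(exact_inf, 2))
```

The gap between the approximation and the exact answer is the cross term (here 0.05 × 0.07), which is why the trick works well only for smallish percentage changes.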
38. CHAPTER 2 The Data of Macroeconomics Two arithmetic tricks for working with percentage changes EX: GDP deflator = 100 × NGDP/RGDP. If NGDP rises 9% and RGDP rises 4%, then the inflation rate is approximately 5%. Again, we will see uses for this in many different contexts later in the textbook. For example, if your wage rises 10% while prices rise 6%, then your real wage (the purchasing power of your wage) rises by about 4%, because real wage = (nominal wage)/(price level). 39. CHAPTER 2 The Data of Macroeconomics Chain-Weighted Real GDP Over time, relative prices change, so the base year should be updated periodically. In essence, chain-weighted real GDP updates the base year every year, so it is more accurate than constant-price GDP. Your textbook usually uses constant-price real GDP, because: the two measures are highly correlated, and constant-price real GDP is easier to compute. Since constant-price GDP is easier to understand and compute, and because the two measures of real GDP are so highly correlated, this textbook emphasizes the constant-price version of real GDP. However, if this topic is important to you and your students, you should have them carefully read page 24, and give them one or two exercises requiring students to compute or compare constant-price and chain-weighted real GDP.
40. CHAPTER 2 The Data of Macroeconomics Consumer Price Index (CPI) A measure of the overall level of prices Published by the Bureau of Labor Statistics (BLS) Uses: tracks changes in the typical household's cost of living; adjusts many contracts for inflation ('COLAs'); allows comparisons of dollar amounts over time. Regarding the comparison of dollar figures from different years: If we want to know whether the average college graduate today is better off than the average college graduate of 1975, we can't simply compare the nominal salaries, because the cost of living is so much higher now than in 1975. We can use the CPI to express the 1975 salary in 'current dollars', i.e., see what it would be worth at today's prices. Also: when the price of oil (and hence gasoline) shot up in 2000, some in the news reported that oil prices were even higher than in the 1970s. This was true, but only in nominal terms.
If you use the CPI to adjust for inflation, the highest oil price in 2000 is still substantially less than the highest oil prices of the 1970s. 41. CHAPTER 2 The Data of Macroeconomics How the BLS constructs the CPI 1. Survey consumers to determine the composition of the typical consumer's 'basket' of goods. 2. Every month, collect data on prices of all items in the basket; compute the cost of the basket. 3. The CPI in any month equals 100 × (cost of basket in that month)/(cost of basket in the base period). 42. CHAPTER 2 The Data of Macroeconomics Exercise: Compute the CPI Basket contains 20 pizzas and 10 compact discs. From 2002 to 2003, it's not obvious that the inflation rate will be positive (that the basket's cost will increase): the price of pizza rises by $1, the price of CDs falls by $1. However, since the basket contains twice as many pizzas as CDs, a given change in the price of pizza will have a bigger impact on the basket's cost (and CPI) than the same sized price change in CDs. 43. CHAPTER 2 The Data of Macroeconomics Answers:
Year  Cost of basket  CPI    Inflation rate
2002  $350            100.0  n.a.
2003  $370            105.7  5.7%
2004  $400            114.3  8.1%
2005  $410            117.1  2.5%
44. CHAPTER 2 The Data of Macroeconomics The composition of the CPI's 'basket' Each number is the percent of the 'typical' household's total expenditure. source: Bureau of Labor Statistics, http://www.bls.gov/cpi/ Ask students for examples of how the breakdown of their own expenditure differs from that of the typical household shown here. Then, ask students how the typical elderly person's expenditure might differ from that shown here.
(This is relevant because the CPI is used to give Social Security COLAs to the elderly; however, the elderly spend a much larger fraction of their income on medical care, a category in which prices grow much faster than the CPI.) The website listed above also gives a very fine disaggregation of each category, which enables students to compare their own spending on compact discs, beer, or cell phones to that of the 'typical' household. 45. CHAPTER 2 The Data of Macroeconomics Understanding the CPI The next slide uses simple algebra to show that the CPI is a weighted average of prices; the weight on each price reflects that good's relative importance in the CPI basket. The algebra is very similar to that of an earlier slide that showed that the GDP deflator is a weighted average of prices. I chose 'E' to represent the cost of the basket because 'E' stands for 'Expenditure', and using 'B' or 'C' for the cost of the basket didn't feel right to me. (You can, of course, edit the slides to substitute whatever other letter or symbol you think would make more sense here.)
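The CPI exercise can be sketched directly from the basket costs on the answers slide, with 2002 as the base year; the yearly inflation rates then fall out as percentage changes in the index.

```python
# CPI = 100 * (cost of basket in year t) / (cost of basket in base year);
# inflation is the percentage change in the CPI from the previous year.
# Basket costs are the figures from the slides' exercise (base year 2002).

basket_cost = {2002: 350, 2003: 370, 2004: 400, 2005: 410}
base_year = 2002

cpi = {year: 100 * cost / basket_cost[base_year]
       for year, cost in basket_cost.items()}
for year in sorted(cpi):
    print(year, round(cpi[year], 1))
# 2002 100.0, 2003 105.7, 2004 114.3, 2005 117.1 -- matching the slide

inflation = {year: 100 * (cpi[year] / cpi[year - 1] - 1)
             for year in sorted(cpi) if year - 1 in cpi}
for year in sorted(inflation):
    print(year, round(inflation[year], 1))
# 2003 5.7, 2004 8.1, 2005 2.5 -- matching the slide
```

Because the basket quantities are fixed, the CPI moves only when prices move, which is exactly the substitution-bias issue raised on the next slides.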
46. CHAPTER 2 The Data of Macroeconomics Understanding the CPI Note: Because the weights don't all sum to 1, the CPI is a weighted sum, not a weighted average. 47. CHAPTER 2 The Data of Macroeconomics Reasons why the CPI may overstate inflation Substitution bias: The CPI uses fixed weights, so it cannot reflect consumers' ability to substitute toward goods whose relative prices have fallen. Introduction of new goods: The introduction of new goods makes consumers better off and, in effect, increases the real value of the dollar. But it does not reduce the CPI, because the CPI uses fixed weights. Unmeasured changes in quality: Quality improvements increase the value of the dollar, but are often not fully measured. 48. CHAPTER 2 The Data of Macroeconomics The size of the CPI's bias In 1995, a Senate-appointed panel of experts estimated that the CPI overstates inflation by about 1.1% per year. So the BLS made adjustments to reduce the bias. Now, the CPI's bias is probably under 1% per year. 49. CHAPTER 2 The Data of Macroeconomics Discussion questions: If your grandmother receives Social Security, how is she affected by the CPI's bias? Where does the government get the money to pay COLAs to Social Security recipients? If you pay income and Social Security taxes, how does the CPI's bias affect you?
Is the government giving your grandmother too much of a COLA? How does your grandmother's 'basket' differ from the CPI's? If you can afford a few minutes of class time, you can use these questions to illustrate one reason why the CPI's bias is important, and also to get students to think about the implications of applying a measure of the 'typical household's cost of living' to groups (like the elderly) that are not typical. 50. CHAPTER 2 The Data of Macroeconomics CPI vs. GDP Deflator Prices of capital goods: included in the GDP deflator (if produced domestically), excluded from the CPI. Prices of imported consumer goods: included in the CPI, excluded from the GDP deflator. The basket of goods: the CPI's is fixed, while the GDP deflator's changes every year. 51. CHAPTER 2 The Data of Macroeconomics Two measures of inflation in the U.S. source: http://research.stlouisfed.org/fred2/ In 1980, the CPI increased much faster than the GDP deflator. Ask students if they can offer a possible explanation. In 1955, the CPI showed slightly negative inflation, while the GDP deflator showed positive inflation. Ask students for possible explanations. (For possible answers, just refer to the previous slide.) 52.
CHAPTER 2 The Data of Macroeconomics Categories of the population: employed (working at a paid job); unemployed (not employed but looking for a job); labor force (the amount of labor available for producing goods and services: all employed plus unemployed persons); not in the labor force (not employed, not looking for work). 53. CHAPTER 2 The Data of Macroeconomics Two important labor force concepts: the unemployment rate, the percentage of the labor force that is unemployed; and the labor force participation rate, the fraction of the adult population that 'participates' in the labor force. 54. CHAPTER 2 The Data of Macroeconomics Exercise: Compute labor force statistics U.S. adult population by group, June 2006: Number employed = 144.4 million; Number unemployed = 7.0 million; Adult population = 228.8 million. source: Bureau of Labor Statistics, U.S. Department of Labor. http://www.bls.gov 55. CHAPTER 2 The Data of Macroeconomics Answers: data: E = 144.4, U = 7.0, POP = 228.8. Labor force: L = E + U = 144.4 + 7.0 = 151.4. Not in labor force: NILF = POP − L = 228.8 − 151.4 = 77.4. Unemployment rate: U/L × 100% = (7.0/151.4) × 100% = 4.6%. Labor force participation rate: L/POP × 100% = (151.4/228.8) × 100% = 66.2%. 56. CHAPTER 2 The Data of Macroeconomics Exercise: Compute percentage changes in labor force statistics Suppose the population increases by 1%, the labor force increases by 3%, and the number of unemployed persons increases by 2%. Compute the percentage changes in the labor force participation rate and the unemployment rate. Allow two minutes of class time for your students to work this exercise. This will give them immediate reinforcement of the definitions of the labor force participation rate, the unemployment rate, and the 'arithmetic tricks for working with percentage changes' introduced earlier.
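The June 2006 labor-force exercise can be sketched in a few lines, along with the percentage-change follow-up, which just reuses the ratio trick from earlier in the chapter.

```python
# Labor force statistics from the June 2006 exercise (figures in millions),
# plus the percentage-change approximation from the follow-up exercise.

E, U, POP = 144.4, 7.0, 228.8   # employed, unemployed, adult population

L = E + U            # labor force
NILF = POP - L       # not in the labor force
u_rate = 100 * U / L       # unemployment rate (% of labor force)
lfpr = 100 * L / POP       # labor force participation rate (% of adults)

print(round(L, 1), round(NILF, 1))       # 151.4 77.4
print(round(u_rate, 1), round(lfpr, 1))  # 4.6 66.2

# Follow-up: population +1%, labor force +3%, unemployed +2%.
# LFPR = L/POP, so its %change is roughly 3 - 1 = +2%;
# u-rate = U/L, so its %change is roughly 2 - 3 = -1%.
print(3 - 1, 2 - 3)
```

The rounded results reproduce the answers slide (4.6% unemployment, 66.2% participation), and the follow-up shows the ratio approximation doing the work: the participation rate rises about 2% while the unemployment rate falls about 1%.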
57. CHAPTER 2 The Data of Macroeconomics
The establishment survey: The BLS obtains a second measure of employment by surveying businesses, asking how many workers are on their payrolls. Neither measure is perfect, and they occasionally diverge due to: treatment of self-employed persons; new firms not counted in the establishment survey; technical issues involving population inferences from sample data. This slide and the next correspond to new material in the 6th edition on the Establishment Survey. See pp. 38-39. The material on Okun's Law, which formerly appeared at this point in Chapter 2, has been moved to Chapter 9, section 9-1.
58. CHAPTER 2 The Data of Macroeconomics
Two measures of employment growth. Source: http://research.stlouisfed.org/fred2/ This graph shows the percentage change in total U.S. non-farm employment from 12 months earlier (based on monthly, seasonally adjusted data from the Bureau of Labor Statistics), from two surveys: the household survey, which is used to generate the widely known unemployment rate data, and the establishment survey. Pp. 38-39 discuss the establishment survey in detail and contrast it with the household survey to help explain the divergences.
59. Chapter Summary
1. Gross Domestic Product (GDP) measures both total income and total expenditure on the economy's output of goods & services.
2. Nominal GDP values output at current prices; real GDP values output at constant prices. Changes in output affect both measures, but changes in prices only affect nominal GDP.
3. GDP is the sum of consumption, investment, government purchases, and net exports.
60. Chapter Summary
4. The overall level of prices can be measured by either the Consumer Price Index (CPI), the price of a fixed basket of goods purchased by the typical consumer, or the GDP deflator, the ratio of nominal to real GDP.
5. The unemployment rate is the fraction of the labor force that is not employed.
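The arithmetic on slides 55-56 can be checked with a few lines of Python (the 1%, 3%, and 2% changes are the exercise's assumptions):

```python
# Labor force statistics from the June 2006 figures on slide 54
# (E, U, POP in millions).
E, U, POP = 144.4, 7.0, 228.8

L = E + U                 # labor force
NILF = POP - L            # not in the labor force
u_rate = 100 * U / L      # unemployment rate, percent
lfpr = 100 * L / POP      # labor force participation rate, percent

print(f"L = {L:.1f}, NILF = {NILF:.1f}")
print(f"unemployment rate = {u_rate:.1f}%")
print(f"participation rate = {lfpr:.1f}%")

# Slide 56: with POP up 1%, L up 3%, U up 2%, the "arithmetic tricks"
# (percentage change of a ratio is roughly the difference of the
# percentage changes) give d(L/POP) ~ 3% - 1% = 2% and d(U/L) ~ 2% - 3% = -1%.
# Exact check:
lfpr_new = 100 * (1.03 * L) / (1.01 * POP)
u_new = 100 * (1.02 * U) / (1.03 * L)
print(f"LFPR change: {100 * (lfpr_new / lfpr - 1):.2f}%")   # ~ +1.98%
print(f"u-rate change: {100 * (u_new / u_rate - 1):.2f}%")  # ~ -0.97%
```

The approximation is good because the percentage changes are small; the exact answers differ from 2% and -1% only in the second decimal place.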
{"url":"http://www.slideserve.com/Patman/the-data-of-macroeconomics","timestamp":"2014-04-16T13:32:04Z","content_type":null,"content_length":"104381","record_id":"<urn:uuid:a0c4c368-51d2-4929-9451-bf6a5eb37d0e>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
Marina Dl Rey, CA ACT Tutor
Find a Marina Dl Rey, CA ACT Tutor
...I also have an extensive background in working with elementary and middle school students in all subjects. I have a Master's degree from UCLA in education, and I've taught methods classes to beginning teachers. Over the years I've used many strategies and techniques and I have a good sense of how to reach a variety of students by tapping into individual learning styles.
72 Subjects: including ACT Math, English, reading, physics
...In a test taking environment in which time is pressuring, being precise and fast at the same time requires training. SAT Math is one of my favorites. My International Marketing career took me to attend several conferences around the world.
20 Subjects: including ACT Math, Spanish, physics, calculus
...I began programming in high school, so the first advanced math that I did was discrete math (using Knuth's book called Discrete Mathematics). I have also participated in high school math competitions (ie AIME) and a college math competition (the Putnam) for several years, and in both cases the ma...
28 Subjects: including ACT Math, Spanish, chemistry, calculus
...Additionally, physics is often referred to as the science of linear algebra and consequently I've picked up a lot of the more complex methods from studying operators in graduate quantum mechanics courses. I have done three years of physics research, all of which was in MATLAB. I've simulated sy...
26 Subjects: including ACT Math, physics, calculus, geometry
Hello, my name is David Angeles and I am currently attending California State University, Northridge to pursue a Major in Applied Mathematics. I want to be a math professor one day and help out many students the way my teachers have helped me throughout the years. I have been tutoring for this website for almost one year and had the pleasure of meeting all types of people.
10 Subjects: including ACT Math, calculus, geometry, algebra 1
{"url":"http://www.purplemath.com/Marina_Dl_Rey_CA_ACT_tutors.php","timestamp":"2014-04-17T16:00:42Z","content_type":null,"content_length":"24234","record_id":"<urn:uuid:5c866071-66bf-4010-b690-02718cc228da>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
Addition and Subtraction of Rational Expressions ( Read ) | Algebra
What if you had two rational expressions like $\frac{x}{x + 5}$ and $\frac{3}{x - 4}$? How could you add or subtract rational expressions like these?
Watch This
CK-12 Foundation: 1210S Adding and Subtracting Rational Expressions
Watch this video for more examples of how to add and subtract rational expressions.
PatrickJMT: Adding and Subtracting Rational Expressions
Like fractions, rational expressions represent a portion of a quantity. Remember that when we add or subtract fractions we must first make sure that they have the same denominator. Once the fractions have the same denominator, we combine the different portions by adding or subtracting the numerators and writing that answer over the common denominator.
Add and Subtract Rational Expressions with the Same Denominator
Fractions with common denominators combine in the following manner:
$\frac{a}{c}+\frac{b}{c} = \frac{a+b}{c} \qquad \text{and} \qquad \frac{a}{c} - \frac{b}{c}=\frac{a-b}{c}$
Example A
a) $\frac{8}{7} - \frac{2}{7} + \frac{4}{7}$
b) $\frac{4x^2-3}{x+5} + \frac{2x^2-1}{x+5}$
c) $\frac{x^2-2x+1}{2x+3} - \frac{3x^2-3x+5}{2x+3}$
a) Since the denominators are the same we combine the numerators: $\frac{8}{7} - \frac{2}{7} + \frac{4}{7} = \frac{8-2+4}{7} = \frac{10}{7}$
b) Since the denominators are the same we combine the numerators: $\frac{4x^2-3+2x^2-1}{x+5}$. Simplify by collecting like terms: $\frac{6x^2-4}{x+5}$
c) Since the denominators are the same we combine the numerators. Make sure the subtraction sign is distributed to all terms in the second expression: $\frac{x^2-2x+1-(3x^2-3x+5)}{2x+3} = \frac{x^2-2x+1-3x^2+3x-5}{2x+3}= \frac{-2x^2+x-4}{2x+3}$
Find the Least Common Denominator of Rational Expressions
To add and subtract fractions with different denominators, we must first rewrite all fractions so that they have the same denominator.
In general, we want to find the least common denominator. To find the least common denominator, we find the least common multiple (LCM) of the expressions in the denominators of the different fractions. Remember that the least common multiple of two or more integers is the least positive integer that has all of those integers as factors. The procedure for finding the lowest common multiple of polynomials is similar. We rewrite each polynomial in factored form and we form the LCM by taking each factor to the highest power it appears in any of the separate expressions.
Example B
Find the LCM of $48x^2y$ and $60xy^3z$.
First rewrite the integers in their prime factorizations:
$48 = 2^4 \cdot 3$ and $60 = 2^2 \cdot 3 \cdot 5$
The two expressions can be written as:
$48x^2y=2^4 \cdot 3 \cdot x^2 \cdot y$ and $60xy^3z=2^2 \cdot 3 \cdot 5 \cdot x \cdot y^3 \cdot z$
To find the LCM, take the highest power of each factor that appears in either expression:
$\text{LCM} = 2^4 \cdot 3 \cdot 5 \cdot x^2 \cdot y^3 \cdot z = 240x^2y^3z$
Example C
Find the LCM of $2x^2+8x+8$ and $x^3-4x^2-12x$.
Factor the polynomials completely:
$2x^2+8x+8 = 2(x^2+4x+4) = 2(x+2)^2$
$x^3-4x^2-12x = x(x^2-4x-12) = x(x+2)(x-6)$
To find the LCM, take the highest power of each factor that appears in either expression:
$\text{LCM} = 2x(x+2)^2 (x-6)$
It's customary to leave the LCM in factored form, because this form is useful in simplifying rational expressions and finding any excluded values.
Add and Subtract Rational Expressions with Different Denominators
Now we're ready to add and subtract rational expressions. We use the following procedure.
1. Find the least common denominator (LCD) of the fractions.
2. Express each fraction as an equivalent fraction with the LCD as the denominator.
3. Add or subtract and simplify the result.
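As a quick check of Examples B and C, the same LCMs can be computed symbolically; a sketch assuming sympy is installed (variable and result names here are mine):

```python
from sympy import symbols, lcm, factor, expand

x, y, z = symbols('x y z')

# Example B: LCM of 48*x**2*y and 60*x*y**3*z
lcm_b = lcm(48 * x**2 * y, 60 * x * y**3 * z)
print(factor(lcm_b))   # 240*x**2*y**3*z

# Example C: LCM of 2x^2 + 8x + 8 and x^3 - 4x^2 - 12x
f = 2 * x**2 + 8 * x + 8
g = x**3 - 4 * x**2 - 12 * x
lcm_c = lcm(f, g)
print(factor(lcm_c))   # 2*x*(x + 2)**2*(x - 6), up to factor ordering
```

This matches the hand computation: take each prime or polynomial factor to the highest power in which it appears.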
Example D
Perform the following operation and simplify: $\frac{2}{x+2} - \frac{3}{2x-5}$
The denominators can't be factored any further, so the LCD is just the product of the separate denominators: $(x+2)(2x-5)$. Multiply the first fraction by $\frac{2x-5}{2x-5}$ and the second fraction by $\frac{x+2}{x+2}$ so that both have the LCD as their denominator:
$\frac{2}{x+2} \cdot \frac{(2x-5)}{(2x-5)} - \frac{3}{2x-5} \cdot \frac{(x+2)}{(x+2)}$
Combine the numerators and simplify: $\frac{2(2x-5)-3(x+2)}{(x+2)(2x-5)} = \frac{4x-10-3x-6}{(x+2)(2x-5)}$
Combine like terms in the numerator: $\frac{x-16}{(x+2)(2x-5)} \quad \mathbf{Answer}$
Example E
Perform the following operation and simplify: $\frac{4x}{x-5}-\frac{3x}{5-x}$
Notice that the denominators are almost the same; they just differ by a factor of -1.
Factor out -1 from the second denominator: $\frac{4x}{x-5} - \frac{3x}{-(x-5)}$
The two negative signs in the second fraction cancel: $\frac{4x}{x-5}+\frac{3x}{(x-5)}$
Since the denominators are the same we combine the numerators: $\frac{7x}{x-5} \quad \mathbf{Answer}$
Watch this video for help with the Examples above.
CK-12 Foundation: Adding and Subtracting Rational Expressions
• Add and Subtract Rational Expressions with the Same Denominator: Fractions with common denominators combine in the following manner:
$\frac{a}{c}+\frac{b}{c} = \frac{a+b}{c} \qquad \text{and} \qquad \frac{a}{c} - \frac{b}{c}=\frac{a-b}{c}$
Guided Practice
a.) Find the LCM of $x^2-25$ and $x^2+3x+2$.
b.) Perform the following operation and simplify: $\frac{2x-1}{x^2-9}-\frac{3x+4}{x^2-9}$
a.) First factor each polynomial to see if they have any common factors: $x^2-25 = (x-5)(x+5)$ and $x^2+3x+2 = (x+1)(x+2)$. Since the two polynomials do not have any common factors, the LCM of the two polynomials is their product: $(x-5)(x+5)(x+1)(x+2)$
b.) To subtract the second fraction from the first, subtract the numerator of the second from the numerator of the first.
Make sure to put parentheses around the numerator of the second fraction, so you remember to subtract each term: $\frac{2x-1}{x^2-9}-\frac{3x+4}{x^2-9} = \frac{(2x-1)-(3x+4)}{x^2-9} = \frac{-x-5}{x^2-9} = -\frac{x+5}{(x-3)(x+3)}$
Perform the indicated operation and simplify. Leave the denominator in factored form.
1. $\frac{5}{24}-\frac{7}{24}$
2. $\frac{2x}{13}-\frac{x}{3}$
3. $\frac{5}{2x+3}+\frac{3}{2x+3}$
4. $\frac{1}{5x-7}+\frac{10}{5x-7}$
5. $\frac{3x-1}{x+9}-\frac{4x+3}{x+9}$
6. $\frac{1-7x}{3x+10}-\frac{x+20}{3x+10}$
7. $\frac{4x+7}{2x^2}-\frac{3x-4}{2x^2}$
8. $\frac{10x-5}{9x^2}-\frac{5}{9x^2}$
9. $\frac{x^2}{x+5}-\frac{25}{x+5}$
10. $\frac{.25x^2}{x+100}-\frac{0.1}{x+100}$
11. $\frac{1}{x}+\frac{2}{3x}$
12. $\frac{4}{5x^2}-\frac{2}{7x^3}$
13. $\frac{10}{3x-1}-\frac{7}{1-3x}$
14. $\frac{10}{x+5}+\frac{2}{x+2}$
15. $\frac{2x}{x-3}-\frac{3x}{x+4}$
16. $\frac{4x-3}{2x+1}+\frac{x+2}{x-9}$
17. $\frac{x^2}{x+4}-\frac{3x^2}{4x-1}$
18. $\frac{2}{5x+2}-\frac{x+1}{x^2}$
19. $\frac{x+4}{2x}+\frac{2}{9x}$
20. $\frac{5x+3}{x^2+x}+\frac{2x+1}{x}$
21. $\frac{4}{(x+1)(x-1)}-\frac{5}{(x+1)(x+2)}$
22. $\frac{2x}{(x+2)(3x-4)}+\frac{7x}{(3x-4)^2}$
23. $\frac{3x+5}{x(x-1)}-\frac{9x-1}{(x-1)^2}$
24. $\frac{1}{(x-2)(x-3)}+\frac{4}{(2x+5)(x-6)}$
25. $\frac{3x-2}{x-2}+\frac{1}{x^2-4x+4}$
26. $\frac{-x^3}{x^2-7x+6}+x-4$
27. $\frac{2x}{x^2+10x+25}-\frac{3x}{2x^2+7x-15}$
28. $\frac{1}{x^2-9}+\frac{2}{x^2+5x+6}$
29. $\frac{-x+4}{2x^2-x-15}+\frac{x}{4x^2+8x-5}$
30. $\frac{4}{9x^2-49}-\frac{1}{3x^2+5x-28}$
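Answers to exercises like these can be verified symbolically; a sketch with sympy (assumed installed) checking Example D and Guided Practice (b):

```python
from sympy import symbols, cancel, factor, simplify

x = symbols('x')

# Example D: 2/(x+2) - 3/(2x-5), combined over the LCD (x+2)(2x-5).
combined = cancel(2 / (x + 2) - 3 / (2 * x - 5))
print(combined)    # (x - 16)/(2*x**2 - x - 10)

# Guided Practice (b): same denominator, so just combine the numerators.
gp = cancel((2 * x - 1) / (x**2 - 9) - (3 * x + 4) / (x**2 - 9))
print(factor(gp))  # numerator -(x + 5) over (x - 3)*(x + 3)
```

Note that `cancel` returns the denominator expanded; applying `factor` afterwards recovers the "denominator in factored form" the practice set asks for.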
{"url":"http://www.ck12.org/algebra/Addition-and-Subtraction-of-Rational-Expressions/lesson/Addition-and-Subtraction-of-Rational-Expressions---Intermediate/","timestamp":"2014-04-20T13:52:30Z","content_type":null,"content_length":"132927","record_id":"<urn:uuid:48e84456-0de8-4d0b-b7a2-6631e33303b0>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
Need analysis on the effect of math anxiety among secondary school children in Nigeria by Amakacharles1
Background of Study
The issue of poor performance of students in mathematics has become a perennial problem, both in Nigeria and elsewhere. In Nigeria, the performance of students in external examinations in mathematics has continued on a downward trend. Learners continue to manifest weak understanding of mathematics concepts, skills, generalizations, etc. Tobias (1993) defines mathematics anxiety as feelings of tension and anxiety that interfere with the manipulation of numbers and the solving of mathematical problems in a wide variety of ordinary life and academic situations, and that can cause one to forget and lose one's self-confidence. Math anxiety usually arises from a lack of confidence when working in mathematical situations (Stuart, 2000). It has been attributed to one of five sources: myths associated with mathematics, out-of-class experiences, expectations, reading and language facility, and classroom stress (Handler, 1990). Math anxiety is often compounded over time and can affect students of math in a variety of ways. Math anxiety can begin at any age of schooling, but most students commonly have negative experiences (Clawson, 1991, p. 2). Unless addressed directly, this anxiety often continues or even worsens through the secondary level and into higher institutions. This anxiety is not only difficult for the student to deal with, but it compounds into a lack of understanding of major concepts. This can close doors for students who might otherwise have chosen careers that deal with math, directly or indirectly. The lack of understanding of basic mathematical principles can result in an inability to solve chemistry, engineering,...
{"url":"http://www.studymode.com/essays/Need-Analysis-On-The-Effect-Of-454832.html","timestamp":"2014-04-16T17:05:11Z","content_type":null,"content_length":"33592","record_id":"<urn:uuid:8dd267b3-7a68-423f-94ad-caa6cdab53bc>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
1006 Submissions [1] viXra:1006.0046 [pdf] replaced on 2012-01-11 19:16:15 U-Statistics Based on Spacings Authors: David D. Tung, S. Rao Jammalamadaka Comments: 23 Pages. In this paper, we investigate the asymptotic theory for U-statistics based on sample spacings, i.e. the gaps between successive observations. The usual asymptotic theory for U-statistics does not apply here because spacings are dependent variables. However, under the null hypothesis, the uniform spacings can be expressed as conditionally independent Exponential random variables. We exploit this idea to derive the relevant asymptotic theory both under the null hypothesis and under a sequence of close alternatives. The generalized Gini mean difference of the sample spacings is a prime example of a U-statistic of this type. We show that such a Gini spacings test is analogous to Rao's spacings test. We find the asymptotically locally most powerful test in this class, and it has the same efficacy as the Greenwood statistic. Category: Statistics
{"url":"http://vixra.org/stat/1006","timestamp":"2014-04-19T00:07:16Z","content_type":null,"content_length":"4223","record_id":"<urn:uuid:0ff4358b-40a0-4d39-817a-0c9fbaf3aa2b>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
Trigonometric Series Approximations
The trigonometric functions are useful for modeling periodic behavior. For example, we may describe the motion of an object oscillating at the end of a spring (ignoring any damping forces, such as friction, and assuming the object is at x = 0 at time t = 0) with x(t) = a sin(ωt), where a is the amplitude of the motion and ω/2π is the frequency of the motion.
Sound waves
For a more complicated example, consider the motion of a molecule of air as a sound wave passes. The action of the sound wave causes a particular molecule of air to oscillate back and forth about some equilibrium position. If we let x(t) represent the position of the air molecule at time t, with x = 0 corresponding to the equilibrium position and x considered to be positive in one direction from the equilibrium position and negative in the other, then for many sounds x will be a periodic function of t. In general, this will be true for musical sounds, but not true for sounds we would normally classify as noise. Moreover, even if x is a periodic function, it need not be simply a sine or cosine function. The graph of x for a musical sound, although periodic, may be very complicated. However, many simple sounds, such as the sound of a tuning fork, are represented by sine curves. For example, if x is the displacement of an air molecule for a tuning fork which vibrates at 440 cycles per second with a maximum displacement from equilibrium of 0.002 centimeters, then x(t) = 0.002 sin(880πt). In the early part of the 19th century, Joseph Fourier (1768-1830) showed that the story does not end here. Fourier demonstrated that any "nice" periodic curve (for example, one which is continuous) can be approximated as closely as desired by a sum of sine and cosine functions. In particular, this means that for any musical sound the function x may be approximated well by a sum of sine and cosine functions.
For example, in his book The Science of Musical Sounds (Macmillan, New York, 1926), Dayton Miller shows that, with an appropriate choice of units, the sequence of functions x[1](t) = 22.4sin(t) + 94.1cos(t) x[2](t) = x[1](t) + 49.8sin(2t) - 43.6cos(2t) x[3](t) = x[2](t) + 33.7sin(3t) - 14.2cos(3t) x[4](t) = x[3](t) + 19.0sin(4t) - 1.9cos(4t) x[5](t) = x[4](t) + 8.90sin(5t) - 5.22cos(5t) x[6](t) = x[5](t) - 8.18sin(6t) - 1.77cos(6t) x[7](t) = x[6](t) + 6.40sin(7t) - 0.54cos(7t) x[8](t) = x[7](t) + 3.11sin(8t) - 8.34cos(8t) x[9](t) = x[8](t) - 1.28sin(9t) - 4.10cos(9t) x[10](t) = x[9](t) - 0.71sin(10t) - 2.17cos(10t) give successively better approximations to the displacement curve of a sound wave generated by the tone C[3] of an organ pipe. Notice that the terms in this expression for x(t) are written in pairs with frequencies which are always integer multiples of the frequency of the first pair. This is a general fact which is part of Fourier's theory; if we added more terms to obtain more accuracy, the next terms would be of the form asin(11t) + bcos(11t) for some constants a and b. Notice also that the amplitudes of the sine and cosine curves tend to decrease as the frequencies are increasing. As a consequence, the higher frequencies have less impact on the total curve. Put another way, Fourier's theorem says that every musical sound is the sum of simple tones which could be generated by tuning forks. Hence in theory, although certainly not in practice, the instruments of any orchestra could all be replaced by tuning forks. On a more practical level, Fourier's analysis of periodic functions has been fundamental for the development of such modern conveniences as radios, televisions, stereos, and compact disc players. In the applet below, clicking on the buttons will load an audio file to play the given tone for the functions x[1] through x[10] described above. 
The units are scaled so that the fundamental is played at a frequency of 261 cycles per second (middle C). Each audio file is 41k, and so may take a moment to download the first time you play it. As overtones are added, you can see the complexity of the motion increase, while remaining periodic.
Square wave
The function x(t) which is 1 for 0 ≤ t < 0.5 and -1 for 0.5 ≤ t < 1, and then repeats these values over every interval of length 1, is an example of a square wave. The following applet plots this square wave along with an approximating trigonometric series. If n terms are requested, the approximating sum is p[n](t) = (4/π)(sin(2πt) + sin(6πt)/3 + . . . + sin(2π(2n-1)t)/(2n-1)). Note that although p[n] is continuous, it approximates the discontinuous square wave well for even small values of n. At the same time, note that the error in approximation at the points of discontinuity of x does not appear to be decreasing in the same way as it does at points of continuity.
1. According to Dayton Miller in The Science of Musical Sounds, the function x(t) = 151sin(t) - 67cos(t) + 24sin(2t) + 55cos(2t) + 27sin(3t) + 5cos(3t) gives a good approximation to the shape of the displacement curve for the tone B[4] played on the E string of a violin.
a. Graph each of the individual terms of x on the interval [-15, 15]. Use a common scale for the vertical axis.
b. Graph x on [-15, 15].
c. Graph x and its individual terms (a total of 7 graphs) together on the interval [-15, 15].
2. Suppose we define a function f by saying that it is periodic with period 1 and that f(x) = 1 - 2x for 0 ≤ x < 1.
a. Sketch the graph of f over the interval [-3, 3].
b. Let g[n](x) = 2(sin(2πx)/π + sin(4πx)/2π + sin(6πx)/3π + . . . + sin(2nπx)/nπ). What is the period of g[n]? Graph g[1], g[2], g[3], g[4], g[5], and g[10] over the interval [-3, 3].
c. What do you think happens to g[n] as n gets large?
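The behavior asked about in exercise 2c, and the stubborn error at the square wave's jumps, can be seen numerically; a sketch in plain Python using the standard Fourier coefficients for these two waves:

```python
import math

def square_wave(t):
    """Square wave of period 1: +1 on [0, 0.5), -1 on [0.5, 1)."""
    return 1.0 if (t % 1.0) < 0.5 else -1.0

def p(t, n):
    """n-term partial sum for the square wave: odd harmonics only,
    with Fourier coefficients 4/(pi*(2k-1))."""
    return (4 / math.pi) * sum(
        math.sin(2 * math.pi * (2 * k - 1) * t) / (2 * k - 1)
        for k in range(1, n + 1))

def g(x, n):
    """n-term partial sum for the sawtooth f(x) = 1 - 2x on [0, 1)
    from exercise 2b."""
    return 2 * sum(
        math.sin(2 * math.pi * k * x) / (k * math.pi)
        for k in range(1, n + 1))

# At a point of continuity the partial sums settle down to the wave's value:
for n in (1, 5, 25, 100):
    print(n, round(p(0.25, n), 4))   # tends to x(0.25) = 1

# Near the jump at t = 0.5 the overshoot does not die out (Gibbs phenomenon):
peak = max(p(0.5 - d / 4000, 100) for d in range(1, 200))
print(round(peak, 3))   # about 1.18, no matter how large n is
```

This is exactly what the plots show: pointwise convergence away from the jumps, but a persistent overshoot of roughly 9% of the jump height right next to each discontinuity.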
For an interesting account of sound waves, Fourier's theorem, and related ideas in electromagnetism, read Chapters 19 ("The Sine of G Major") and 20 ("Mastery of the Ether Waves") in Morris Kline's Mathematics in Western Culture (Oxford University Press, 1953). Copyright © 2002 by Dan Sloughter.
{"url":"http://math.furman.edu/~dcs/soundwave/soundwave.html","timestamp":"2014-04-19T07:03:33Z","content_type":null,"content_length":"9107","record_id":"<urn:uuid:16acb585-9f5d-4ff1-b91d-c18f6c775464>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
st: RE: graph the percent missing fairly efficiently
From "Nick Cox" <n.j.cox@durham.ac.uk>
To <statalist@hsphsun2.harvard.edu>
Subject st: RE: graph the percent missing fairly efficiently
Date Fri, 15 Nov 2002 17:17:58 -0000
> I want to produce a report for all variables of the percent
> of observations
> that are missing. I know about codebook and inspect, and
> 'nmissing' (from the
> web). However, if possible, what is wanted is a percent of missing
> observations for each variable. I could use 'tabulate'
> but ideally we want a
> pie chart showing the number missing. For this I could
> use 'graph' and
> 'pie' but to make 'pie' work, I think I need to turn all
> the missings [recode
> or mvencode] to a recognizable coded missing like '-99' or
> something similar.
> This is one option, but perhaps there is an another option
> (that won't require
> variable recoding or generation, or if it does, certain
> solutions are fairly
> efficient). Perhaps some variant of egen or egenodd is
> applicable, or
> perhaps the matter is simplified with a different type of
> graph chart.
Richard Goldstein has suggested various tabular outputs. You are aware of -nmissing- (STB-49, STB-60). One possibility is to scoop up the output of that and show it in a graph: producing percents from the counts is naturally easy. I am not clear what kind of pie graph you want, but in any case I suspect there are better displays. (Eyeballing 100 pies simultaneously is fairly ineffective.) Also how many variables have you got? 10s 100s 1000s 10000s? Presumably not many, as it is difficult to think of a display which will not be unreadable unless the number of variables is small. However, one possibility is to omit all variables for which no values are missing.
On the assumption that there are no fewer observations than variables, two quick graphs can be knitted in this way, assuming no variables called -missing- -present- or -varname-:
======================== mygraph.do
qui d, s
local nvars = r(k)
unab vars : *
gen missing = .
gen str1 varname = ""
local i = 1
qui foreach v of var `vars' {
    replace varname = "`v'" in `i'
    count if missing(`v')
    replace missing = r(N) in `i'
    local i = `i' + 1
}
replace missing = 100 * missing / _N
gen present = 100 - missing
hbar present missing in 1/`nvars', l(varname) xla(0(10)100) border
hbar present missing if missing in 1/`nvars', l(varname) xla(0(10)100)
========================
Watch for mailers wrapping the two lines with -hbar-. -hbar- is not in official Stata. If you don't have it you need to install it from SSC.
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
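An aside for non-Stata readers: the same per-variable percent-missing table is compact in Python/pandas; a sketch with made-up data (the column names are mine):

```python
import numpy as np
import pandas as pd

# Toy data frame with some missing values, standing in for the Stata dataset.
df = pd.DataFrame({
    "age": [34, np.nan, 51, 28],
    "income": [np.nan, np.nan, 42000, 39000],
    "region": ["N", "S", "S", "W"],
})

# Percent missing per variable, analogous to -nmissing- scaled by 100/_N.
pct_missing = df.isna().mean() * 100
print(pct_missing)

# Keep only variables with at least one missing value, like the second -hbar-.
print(pct_missing[pct_missing > 0])
```

`pct_missing.plot.barh()` would then give a horizontal bar chart comparable to the -hbar- output.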
{"url":"http://www.stata.com/statalist/archive/2002-11/msg00262.html","timestamp":"2014-04-18T21:36:48Z","content_type":null,"content_length":"7429","record_id":"<urn:uuid:50f60935-ddb5-4903-af35-6372863d3d7c>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
Click on a 'View Solution' below for other questions:
Find the slope of the line shown in the given graph. View Solution
Which of the following is true for the slope of the line shown in the graph? View Solution
Find the slope of the line that passes through the points (3, 5) and (-1, -8). View Solution
Find the slope of the line passing through the points (23, -3) and (-35, -2). View Solution
Find the slope of the equation 2x + y = 5. View Solution
What is the slope of the diagonal PQ of the rectangle shown in the graph? View Solution
Find the slope of the line shown in the graph. View Solution
Find the slope of the line AB as shown in the graph. View Solution
Kate started reading a story book. Initially, she read 20 pages. By the end of 4 days, she had completed 84 pages. Find her average rate of reading the book and use it to determine the number of pages that can be completed after 10 days. View Solution
The profit of a company in the year 1998 is $40,000. The profit rose to $48,000 in the year 2000. At this rate, what would be the profit of the company in 2004? View Solution
Find the x-intercept of the line y = 5x + 30. View Solution
Find the x-intercept of the equation of a line y = 3x - 13. View Solution
Find the y-intercept of the line 3x - 2y = 17. View Solution
Find the y-intercept of the line y = 7x - 19. View Solution
Use intercepts to identify the graph of the equation 3x + 2y = 18. View Solution
Use intercepts to choose the graph of the equation 3x - 4y - 12 = 0. View Solution
Write the intercepts of the line shown in the graph. View Solution
Find the x- and y-intercepts of the equation -3x + (12)y = -4. View Solution
Find the x- and y-intercepts of the equation (-56)x + 30y = 15. View Solution
Use intercepts to choose the graph of the equation 5 = -3x - 4y. View Solution
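Most of the slope and intercept questions above come down to two formulas: slope = (y2 - y1)/(x2 - x1), and intercepts found by setting y = 0 or x = 0. A small Python sketch (function names are mine) checking a few of them:

```python
def slope(p1, p2):
    """Slope of the line through two points (undefined for vertical lines)."""
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1)

def intercepts(a, b, c):
    """x- and y-intercepts of the line a*x + b*y = c (None if none exists)."""
    x_int = c / a if a != 0 else None
    y_int = c / b if b != 0 else None
    return x_int, y_int

# "Find the slope through (3, 5) and (-1, -8)":
print(slope((3, 5), (-1, -8)))   # 3.25, i.e. 13/4

# "Find the x-intercept of y = 5x + 30", rewritten as 5x - y = -30:
print(intercepts(5, -1, -30))    # x-intercept -6.0 (and y-intercept 30.0)

# "Find the y-intercept of 3x - 2y = 17":
print(intercepts(3, -2, 17))     # y-intercept -8.5
```

For slopes given as equations, first solve for y: 2x + y = 5 becomes y = -2x + 5, so the slope is the coefficient -2.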
{"url":"http://www.icoachmath.com/solvedexample/sampleworksheet.aspx?process=/__cstlqvxbefxaxbgedjxkjgdh&.html","timestamp":"2014-04-20T23:28:12Z","content_type":null,"content_length":"62016","record_id":"<urn:uuid:876a58d6-9334-4f44-94d9-42ff3ae94bd7>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00425-ip-10-147-4-33.ec2.internal.warc.gz"}
Pleasant Prairie Precalculus Tutor
Find a Pleasant Prairie Precalculus Tutor
...I am a full-time engineer at a Telecom company. My passion is mathematics; I am Wyzant certified to tutor Pre-Algebra, Algebra 1, Algebra 2, Geometry, Pre-Calculus, Trigonometry, Calculus, Statistics, Excel, and Chemistry. I am married and a father of a boy (14) and a girl (11). Both of my chil...
18 Subjects: including precalculus, geometry, algebra 1, ASVAB
...They offer life-long skills by offering problem-solving techniques and numerical tools. I have background in peer-tutoring when I was in school, helping in both Physics and Math. This was my major in college. This is a field I am passionate about and have much background in on a personal, out-of-classroom basis.
16 Subjects: including precalculus, chemistry, algebra 2, calculus
I have over 10 years of experience teaching math and English in the public, private, and international school setting. I've taught all levels of math in the middle school and high school levels. As an experienced classroom teacher (now a stay-at-home mom to two little ones), I know the struggles and triumphs that students face in the classroom.
8 Subjects: including precalculus, geometry, algebra 1, trigonometry
Hello! My name is Mathieu E. and I am an alumnus of Cornell College in Mount Vernon, IA. I have a B.A. in Mathematics and Art History and graduated in 2012.
27 Subjects: including precalculus, reading, chemistry, English
...I am also helping students who are planning to take the AP Calculus, ACT and SAT exams. Many students who hated math started liking it after my tutoring. That is my specialty.
12 Subjects: including precalculus, calculus, trigonometry, statistics
{"url":"http://www.purplemath.com/pleasant_prairie_wi_precalculus_tutors.php","timestamp":"2014-04-25T01:24:10Z","content_type":null,"content_length":"24321","record_id":"<urn:uuid:d4903991-3c6f-46f4-9868-67644cb7fb18>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00040-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: METHOD AND APPARATUS FOR CORRECTION OF ARTIFACTS IN MAGNETIC RESONANCE IMAGES
In a method and apparatus for the correction of artifacts in magnetic resonance (MR) images acquired with an MR pulse sequence in which gradients are switched simultaneously during the radiation of at least one non-selective excitation pulse, measurement data acquired with the pulse sequence in k-space are loaded into a processor, in which a perturbation matrix is determined on the basis of spatial and k-space point data of the acquired measurement data and the gradients used during the excitation. A corrected image is calculated from the acquired measurement data in k-space and the perturbation matrix, with the calculation of the corrected image including a matrix inversion of the perturbation matrix. The corrected image is then stored or displayed.
1. A method for correction of artifacts in magnetic resonance (MR) images, comprising: entering MR measurement data, acquired with a pulse sequence in which gradients are activated simultaneously during radiation of at least one non-selective excitation pulse, into a memory representing k-space; from a processor, accessing said MR measurement data in said k-space memory and, in said processor, determining a perturbation matrix based on spatial data of the acquired MR measurement data, and said measurement data in k-space, and the gradients used in said sequence; in said processor, inverting said perturbation matrix to obtain an inverted perturbation matrix, and calculating a corrected image from the MR measurement data in k-space and the inverted perturbation matrix; and making said corrected image available as a data file at an output of said processor.
3. A method as claimed in claim 1 comprising calculating said corrected image by a matrix multiplication of said inverted perturbation matrix with said MR measurement data in k-space.

4. A method as claimed in claim 1 comprising calculating said corrected image by calculating an undistorted k-space, by correcting said MR measurement data in k-space using said inverted perturbation matrix.

5. A method as claimed in claim 1 comprising separating said MR measurement data in k-space into a plurality of groups before calculating said corrected image, dependent on a manner of acquisition of the respective groups in said sequence.

6. A method as claimed in claim 5 comprising calculating said corrected image from a collection of all MR measurement data acquired in a Cartesian manner in a matrix of distorted k-space scanned in said Cartesian manner.

7. A method as claimed in claim 5 comprising calculating said corrected image as a separate calculation of a corrected image or corrected k-space for MR measurement data acquired as a one-dimensional projection.

8. A method as claimed in claim 1 comprising separating said MR measurement data into a plurality of groups before calculating said corrected image, dependent on a manner by which said MR measurement data were acquired in said sequence and, for each group, calculating undistorted k-space using said inverted perturbation matrix, thereby obtaining a plurality of undistorted k-spaces for the respective groups, and combining said plurality of undistorted k-spaces into a common undistorted k-space.

9. A method as claimed in claim 1 comprising separating the acquired MR measurement data into groups before calculating said corrected image, dependent on a manner by which the respective groups were acquired in said sequence, and, for each group, calculating a corrected image, thereby obtaining a plurality of corrected images for the respective groups, and calculating a common corrected image from said plurality of corrected images by complex multiplication.
10. A method as claimed in claim 1 wherein said MR measurement data include data dependent on an excitation profile of said non-selective excitation pulse, and, in said processor, calculating an additional corrected image by dividing respective pixels of a distorted image by said excitation profile.

11. A method as claimed in claim 10 comprising, in said processor, calculating a difference image from said corrected image and said additional corrected image, and making said difference image available at an output of said processor as a further data file.

12. A magnetic resonance (MR) apparatus, comprising: an MR data acquisition unit; a control unit configured to operate said MR data acquisition unit to acquire MR measurement data with a pulse sequence in which gradients are activated simultaneously during radiation of at least one non-selective excitation pulse, and to enter said MR measurement data into a memory representing k-space; a processor configured to access said MR measurement data in said k-space memory and determine a perturbation matrix based on spatial data of the acquired MR measurement data, said measurement data in k-space, and the gradients used in said sequence; said processor being configured to invert said perturbation matrix to obtain an inverted perturbation matrix, and to calculate a corrected image from the MR measurement data in k-space and the inverted perturbation matrix; and said processor being configured to make said corrected image available as a data file at an output of said processor.
13. A non-transitory, computer-readable data storage medium encoded with programming instructions that, when said storage medium is loaded into a computerized processor of a magnetic resonance (MR) apparatus, cause said computerized processor to: operate an MR data acquisition unit to acquire MR measurement data with a pulse sequence in which gradients are activated simultaneously during radiation of at least one non-selective excitation pulse, and to enter said MR measurement data into a memory representing k-space; access said MR measurement data in said k-space memory and determine a perturbation matrix based on spatial data of the acquired MR measurement data, said measurement data in k-space, and the gradients used in said sequence; invert said perturbation matrix to obtain an inverted perturbation matrix, and calculate a corrected image from the MR measurement data in k-space and the inverted perturbation matrix; and make said corrected image available as a data file.

BACKGROUND OF THE INVENTION

[0001] 1. Field of the Invention

The invention concerns a method to correct artifacts in magnetic resonance (MR) images acquired by means of an MR pulse sequence in which gradients are switched (activated) simultaneously during the radiation of at least one excitation pulse, as well as a magnetic resonance apparatus and an electronically readable data medium for implementing such a method.

2. Description of the Prior Art

The magnetic resonance modality (also known as magnetic resonance tomography) is a known technique with which images of the inside of an examination subject can be generated. Expressed simply, for this purpose the examination subject is positioned within a strong, static, homogeneous basic magnetic field (also called a B0 field) having a field strength of 0.2 Tesla to 7 Tesla and more, such that the nuclear spins of the examination subject are oriented along the basic magnetic field.
To trigger nuclear magnetic resonance signals, radio-frequency excitation pulses (RF pulses) are radiated into the examination subject, the triggered magnetic resonance signals are measured (detected) in a form known as k-space data, and MR images are reconstructed, or spectroscopy data are determined, based on these nuclear magnetic resonance signals. For spatial coding of the measurement data, rapidly switched magnetic gradient fields (also shortened to "gradients") are superimposed on the basic magnetic field. The acquired measurement data are digitized and stored as complex numerical values in a k-space matrix. An associated MR image can be reconstructed from the k-space matrix populated with such values, for example by means of a multidimensional Fourier transformation. Sequences with very short echo times TE, for instance TE less than 0.5 milliseconds, offer new fields of application for magnetic resonance tomography. They enable the depiction of substances that cannot be shown with conventional sequences such as (T)SE ((Turbo)Spin Echo) or GRE (Gradient Echo), since the respective decay time of the transverse magnetization T2 in such ultrashort sequences is markedly shorter than the possible echo times of the conventional sequences, which means that in the conventional sequences the detectable signal has already decayed at the acquisition point in time. In contrast, with echo times in the same time range of these decay times, it is possible to show the signals of these substances, for example in an MR image. The decay times T2 of teeth, bones or ice lie between 30 and 80 microseconds, for example. The application of sequences with ultra-short echo times (UEZ sequences) thus enables bone and/or teeth imaging and/or the depiction of cryo-ablations by means of MR, for example, and can be used for MR-PET (combination of MR and positron emission tomography, PET) or PET attenuation correction. 
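The basic reconstruction step described above, in which an MR image is obtained from the populated k-space matrix by a multidimensional Fourier transformation, can be sketched numerically. The array size and object below are arbitrary illustrative choices, not parameters from this application:

```python
import numpy as np

# Stand-in for the subject f(x): a random 2D "image".
rng = np.random.default_rng(0)
image_true = rng.random((64, 64))

# Simulated acquisition: the k-space matrix is the 2D Fourier
# transform of the image (fftshift centers the k-space origin).
kspace = np.fft.fftshift(np.fft.fft2(image_true))

# Reconstruction: multidimensional inverse Fourier transform
# of the k-space matrix populated with complex values.
image_reco = np.fft.ifft2(np.fft.ifftshift(kspace))

# Up to numerical precision, the magnitude image matches the original.
assert np.allclose(np.abs(image_reco), image_true)
```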
Examples of UEZ sequences are UTE ("Ultrashort Echo Time"), for example as described in the article by Sonia Nielles-Vallespin, "3D radial projection technique with ultrashort echo times for sodium MRI: Clinical applications in human brain and skeletal muscle", Magn. Reson. Med. 2007;57:74-81; PETRA ("Pointwise Encoding Time reduction with Radial Acquisition"), as described by Grodzki et al. in "Ultra short Echo Time Imaging using Pointwise Encoding Time reduction with Radial Acquisition (PETRA)", Proc. Intl. Soc. Mag. Reson. Med. 19 (2011), p. 2815; or z-TE, as described by Weiger et al. in "MRI with zero echo time: hard versus sweep pulse excitation", Magn. Reson. Med. 66 (2011), pp. 379-389. Generally, in these sequences, a hard delta pulse is applied as a radio-frequency excitation pulse, and the data acquisition is subsequently started. In PETRA or z-TE, the gradients are already activated during the excitation. The spectral profile of the excitation pulse corresponds approximately to a sinc function. In the case of insufficient pulse bandwidth or gradients that are too strong, it may be that the outer image regions are no longer sufficiently excited. In the reconstructed MR image, this incorrect excitation has the effect of blurring artifacts at the image edge, which are more pronounced the stronger the gradients switched during the excitation are. An insufficient excitation thus leads to artifact-plagued MR images. This problem has previously for the most part been ignored. At best, attempts are made to reduce the strength of the gradients as much as possible. However, imaging-relevant variables such as the readout bandwidth, the repetition time TR and the contrast of the image then change. For example, a reduction of the gradient strength increases the minimum necessary repetition time TR, and therefore also the total measurement time.
Furthermore, such artifacts could be avoided by selecting particularly short excitation pulses in order to increase the excitation width. However, the maximum possible flip angle, and the precision of the RF excitation pulse that is actually emitted, are proportional to the duration of the RF excitation pulse, and therefore decrease as well. For example, given a duration of the excitation pulse of 14 microseconds the maximum flip angle amounts to approximately 9°, and given a reduced duration of the excitation pulse of 7 microseconds the maximum flip angle would amount to only approximately 4.5°. This procedure therefore also cannot be used without limitation and is accompanied by a degradation of the image quality.

SUMMARY OF THE INVENTION

[0011] An object of the present invention is to provide a magnetic resonance system, a method, and an electronically readable data storage medium that allow a reduction of artifacts in MR measurements with gradients switched during the excitation, without limiting the MR measurement. The method according to the invention for the correction of artifacts in magnetic resonance images, which images were acquired by means of an MR pulse sequence in which gradients are switched simultaneously during the radiation of at least one non-selective excitation pulse, includes the steps of loading measurement data acquired with the pulse sequence in k-space, determining a perturbation matrix on the basis of spatial and k-space point data of the acquired measurement data and the gradients used during the excitation, calculating a corrected image from the acquired measurement data in k-space and the perturbation matrix, wherein the calculation of the corrected image includes a matrix inversion of the perturbation matrix, and displaying and/or storing the corrected image.
By the calculation of a perturbation matrix on the basis of the location to be measured, the read-out k-space points and the gradients applied during the excitation, and the inversion of this perturbation matrix, the interfering influence of a non-uniform, incorrect excitation can be remedied in a simple manner. The image quality can thus be markedly improved, primarily in the outer regions of the reconstructed image. In particular, a high homogeneity in the image and a sharp contrast can be achieved without artifacts. The strength of the applied gradients is not subjected to any limitations by the method according to the invention. This means that stronger gradients can also be switched without having to accept losses in the image quality. Longer lasting excitation pulses--and therefore higher flip angles--can likewise also be used with the method according to the invention, without negatively affecting the image quality. The invention is based on the following considerations. In measurements with gradients switched during the excitation, the excited region changes with each repetition because different gradient configurations are switched in each repetition. This leads to perturbations since, for example, with a repetition with a gradient configuration of Gx=0 and Gy=G, an image resulting from this measurement point is respectively overlaid with a sinc function corresponding to the incorrect excitation (the sinc function being symmetrical in the y-direction). In contrast, in the case of a repetition with a gradient configuration of Gx=G and Gy=0, for example, an image resulting from this measurement point is overlaid with a sinc function corresponding to the incorrect excitation (which sinc function is symmetrical in the x-direction). 
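The direction dependence described above, a sinc-shaped profile along the switched gradient direction and a uniform profile perpendicular to it, can be illustrated with a toy model. The pulse duration, gradient strength and field of view are assumed values for illustration only:

```python
import numpy as np

# Toy model (assumed, not from this application): for a hard pulse of
# duration tau radiated while a gradient (Gx, Gy) is switched, the
# excitation profile at location (x, y) is approximately sinc-shaped
# along the gradient direction and constant perpendicular to it.
# Note: np.sinc(v) = sin(pi*v) / (pi*v).
def profile(Gx, Gy, X, Y, gamma=42.577e6, tau=14e-6):
    return np.sinc(gamma * (Gx * X + Gy * Y) * tau)

x = np.linspace(-0.2, 0.2, 101)          # locations in metres
X, Y = np.meshgrid(x, x, indexing="ij")

P_y = profile(0.0, 5e-3, X, Y)           # Gx=0, Gy=G: sinc along y
P_x = profile(5e-3, 0.0, X, Y)           # Gx=G, Gy=0: sinc along x

assert np.allclose(P_y[0, :], P_y[-1, :])    # uniform along x
assert np.allclose(P_x[:, 0], P_x[:, -1])    # uniform along y
```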
The dependency of the excitation profile in the x-direction (specified in millimeters, "mm")--and therefore of the produced effect P(k,x) (specified in arbitrary units, "a.u.")--on a currently applied gradient strength G1, G2, G3, G4, G5 is presented as an example in FIG. 1. In the shown example, G5>G4>G3>G2>G1. As is apparent, the excitation profile becomes wider as the applied gradient strength becomes smaller. The widest excitation profile (drawn with a solid line)--i.e. an optimally homogeneous excitation (P(k,x)) over the largest possible region (x)--is therefore achieved at G1. The narrowest excitation profile (drawn with a double dash-dot line)--which already entails a drastic change in the excitation (P(k,x)) given a small spatial change (x)--is achieved at G5. The problem can be described mathematically as follows. In MR measurements, what is known as k-space F(k), which corresponds to the examination region of the measured subject that is to be imaged, is scanned, wherein:

F(k) = ∫ f(x) e^(-ikx) dx, (1)

wherein f(x) describes the signal of the subject to be measured, and k-space F(k) is filled with the acquired measurement data. The image I(x) is calculated by Fourier back-transformation from k-space filled with the measurement data:

I(x) = ∫ F(k) e^(ikx) dk. (2)

In the case of insufficient excitation, instead of the desired k-space F(k), a distorted k-space F'(k) is measured, i.e. filled with the measurement data. In distorted k-space F'(k), the signal of the subject f(x) to be measured is overlaid with a perturbation function P(k,x) which corresponds to the spectral shape of the actual excitation pulse, thus the excitation profile:

F'(k) = ∫ f(x) P(k,x) e^(-ikx) dx. (3)

The excitation profile P(k,x) depends both on the location x and on the measured k-space point k and on the gradient strength. The excitation profile of an excitation pulse essentially corresponds to the Fourier transform of the pulse shape of the excitation pulse in the time domain, p(t); in the example shown using FIG.
1, the excitation profiles correspond to a respective sinc function, for example as results for "hard", rectangular excitation pulses p(t) that have a constant value (B1, for example) not equal to zero during the duration τ of the excitation pulse. A rectangular excitation pulse

p(t) = B1 for |t| < τ/2, and p(t) = 0 otherwise,

corresponds in frequency space to a sinc-shaped spectral excitation profile P(ω) with

P(ω) = sin(ωτ/2) / (ωτ/2) = sinc(ωτ/2)

and a phase factor. In the presence of switched gradients, the resonance frequency ω is a function of the location (represented here by x) in the image domain: ω = 2πγxG, with γ the gyromagnetic ratio and G the strength of the applied gradient. Given gradients varying in the course of the MR pulse sequence (for example in different repetitions), ω is also a function of the read-out k-space point k, which is why the excitation profile can be written as P(ω) = P(k,x). A distorted MR image I'(x) plagued with artifacts can be reconstructed from distorted k-space F'(k):

I'(x) = ∫ F'(k) e^(ikx) dk. (4)

According to the invention, the distorting influence of the incorrect excitation pulse is eliminated from the measured measurement data in that the excitation error is expressed in a perturbation matrix D, and the error of the excitation is subsequently remedied via inversion of the perturbation matrix D. If Equation (3) is written as a sum (discrete values are actually measured) and the perturbation matrix D is defined with N×N elements (wherein N is a natural number)

D(k,x) = P(k,x) e^(-ikx), (5)

Equation (3) can be written in matrix form:

F'(k) = D f(x). (6)

The perturbation matrix D thus reproduces an excitation profile of the excitation pulse used to acquire the measurement data. The elements of Equation (5) are known and can be calculated from the shape of the excitation pulse, the location x to be excited and the read-out k-space point k, as well as the applied gradients G. The distorted k-space F'(k) is measured.
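A minimal numerical sketch may make the discrete perturbation model of Equations (5) and (6), and the inversion used for the correction, concrete. All scan parameters below (matrix size, pulse duration, field of view, per-readout gradient strengths) are assumed illustrative values, not values from this application:

```python
import numpy as np

# Discrete perturbation model F'(k) = D f(x) and its inversion.
N = 64
gamma = 42.577e6                              # 1H gyromagnetic ratio in Hz/T
tau = 14e-6                                   # hard-pulse duration in s (assumed)
fov = 0.2                                     # field of view in m (assumed)
x = (np.arange(N) - N // 2) * (fov / N)       # locations to be excited
k = 2 * np.pi * (np.arange(N) - N // 2) / fov # read-out k-space points
G = np.linspace(1e-3, 8e-3, N)                # gradient strength per readout (T/m)

# Excitation profile of a hard pulse, varying with readout (via G) and
# location: P(k, x) = sinc(gamma * G(k) * x * tau).
P = np.sinc(gamma * G[:, None] * x[None, :] * tau)

# Perturbation matrix D with elements P(k, x) * exp(-i k x), Equation (5).
D = P * np.exp(-1j * k[:, None] * x[None, :])

f = np.zeros(N)
f[20:44] = 1.0                 # toy object f(x)
F_dist = D @ f                 # distorted k-space F'(k), Equation (6)

# Correction: matrix inversion of D, then multiplication with F'(k).
f_corr = np.linalg.inv(D) @ F_dist

assert np.allclose(f_corr.real, f, atol=1e-6)
```

Here the incorrectly excited object is recovered exactly because the toy profile has no zeros; in practice the conditioning of D depends on the pulse and gradient parameters.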
The undistorted image I(x) can therefore be calculated via matrix inversion of D and matrix multiplication with the distorted k-space:

I(x) = D^(-1) F'(k). (7)

The calculation of a corrected image I(x) thus comprises a matrix multiplication of the inverted perturbation matrix D^(-1) with the measurement data acquired in k-space, F'(k). A magnetic resonance system according to the invention comprises a basic field magnet; a gradient field system; a radio-frequency antenna; a control device to control the gradient field system and the radio-frequency antenna; and an image computer to receive measurement signals acquired by the radio-frequency antenna, to evaluate the measurement signals, and to create magnetic resonance images, and is designed to implement the method described herein. The present invention also encompasses a non-transitory, computer-readable data storage medium encoded with programming instructions that, when the storage medium is loaded into a processor, cause the processor to implement one or more of the embodiments of the method according to the invention described above. The advantages and embodiments described with regard to the method apply analogously to the magnetic resonance system and the electronically readable data medium.

BRIEF DESCRIPTION OF THE DRAWINGS

[0029] FIG. 1 shows the influence of the applied gradient strength on the excitation profile of an excitation pulse. FIG. 2 schematically illustrates a magnetic resonance system according to the invention. FIG. 3 is a flowchart of an embodiment of the method according to the invention. FIG. 4 is a flowchart of a further embodiment of the method according to the invention.

DESCRIPTION OF THE PREFERRED EMBODIMENTS

[0033] FIG. 2 schematically illustrates a magnetic resonance system 5 (a magnetic resonance imaging or magnetic resonance tomography apparatus).
A basic field magnet 1 generates a temporally constant, strong magnetic field for polarization or alignment of the nuclear spins in an examination region of an examination subject U, for example of a part of a human body that is to be examined, which part lies on a table 23 and is moved into the magnetic resonance system 5. The high homogeneity of the basic magnetic field that is required for the magnetic resonance measurement is defined in a typically spherical measurement volume M into which the parts of the human body that are to be examined are introduced. To support the homogeneity requirements, and in particular to eliminate temporally invariable influences, shim plates made of ferromagnetic material are mounted at a suitable point. Temporally variable influences are eliminated via shim coils 2 and a suitable controller 27 for the shim coils 2. A cylindrical gradient coil system 3 that has three sub-windings is situated in the basic field magnet 1. Each sub-winding is supplied by a corresponding amplifier 24-26 with current to generate a linear gradient field in the respective direction of a Cartesian coordinate system. The first sub-winding of the gradient field system 3 thereby generates a gradient Gx in the x-direction; the second sub-winding generates a gradient Gy in the y-direction; and the third sub-winding generates a gradient Gz in the z-direction. The amplifiers 24-26 each include a digital/analog converter (DAC), which is controlled by a sequence controller 18 for time-accurate generation of gradient pulses. Located within the gradient field system 3 is a radio-frequency antenna 4 which converts the radio-frequency pulses emitted by a radio-frequency power amplifier into an alternating magnetic field to excite the nuclei and align the nuclear spins of the subject to be examined or, respectively, of the region of the subject that is to be examined.
The radio-frequency antenna 4 has one or more RF transmission coils and multiple RF reception coils in the form of an arrangement (annular, linear or matrix-like, for example) of coils. The alternating field emanating from the precessing nuclear spins--normally the nuclear spin echo signals caused by a pulse sequence made up of one or more radio-frequency pulses and one or more gradient pulses--is also transduced by the RF reception coils of the radio-frequency antenna 4 into a voltage (measurement signal) which is supplied via an amplifier 7 to a radio-frequency reception channel 8, 8' of a radio-frequency system 22. The radio-frequency system 22 furthermore has a transmission channel 9 in which the radio-frequency pulses are generated for the excitation of the magnetic resonance signals. The respective radio-frequency pulses are represented digitally in the sequence controller 18 as a series of complex numbers based on a pulse sequence predetermined by the system computer 20. This number series is supplied as real part and imaginary part via respective inputs 12 to a digital/analog converter (DAC) in the radio-frequency system 22, and from this to the transmission channel 9. In the transmission channel 9 the pulse sequences are modulated on a radio-frequency carrier signal whose base frequency corresponds to the resonance frequency of the nuclear spins in the measurement volume. The modulated pulse sequences are supplied to the RF transmission coil of the radio-frequency antenna 4 via an amplifier 28. The switch-over from transmission operation to reception operation takes place via a transmission/reception diplexer 6. The RF transmission coil of the radio-frequency antenna 4 radiates the radio-frequency pulses into the measurement volume M to excite the nuclear spins and samples resulting echo signals via the RF reception coils. 
The correspondingly acquired nuclear magnetic resonance signals are phase-sensitively demodulated at an intermediate frequency in a first demodulator 8' of the reception channel of the radio-frequency system 22 and are digitized in the analog/digital converter (ADC). This signal is further demodulated to a frequency of zero. The demodulation to a frequency of zero and the separation into real part and imaginary part occur after the digitization in the digital domain in a second demodulator 8 which outputs the demodulated data via outputs 11 to an image computer 17. An MR image is reconstructed by the image computer 17 from the measurement data acquired in such a manner, in particular using the method according to the invention, which comprises a calculation of at least one perturbation matrix and its inversion (by means of the image computer 17, for example). The administration of the measurement data, the image data and the control programs takes place via the system computer 20. Based on a specification with control programs, the sequence controller 18 controls the generation of the respective desired pulse sequences and the corresponding scanning of k-space. In particular, the sequence controller 18 controls the accurately-timed switching of the gradients, the emission of the radio-frequency pulses with defined phase amplitude, and the reception of the nuclear magnetic resonance signals. The time base for the radio-frequency system 22 and the sequence controller 18 is provided by a synthesizer. The selection of corresponding control programs to generate a series of MR images (which are stored on a DVD 21, for example) as well as other inputs on the part of the user and the presentation of the generated MR images take place via a terminal 13 that has input means (for example a keyboard 15 and/or a mouse 16) to enable an input and display means (a monitor 14, for example) to enable a display. 
A workflow diagram of an example of a method according to the invention is schematically presented in FIG. 3. In the course of an MR measurement, in Step 101 a non-selective excitation pulse is radiated into the subject to be measured while a gradient is switched at the same time. As described above, the excitation is hereby insufficient due to the switched gradients. In an additional Step 102, magnetic resonance signals triggered by the insufficient excitation 101 are measured and acquired as measurement data F'(k) in k-space (see Equation (3) above). A perturbation matrix D is calculated (as has likewise already been described above) in a further Step 103 and inverted in Step 104. The inverted perturbation matrix D^(-1) is obtained via the matrix inversion of the perturbation matrix D in Step 104. If the MR measurement is a one-dimensional (1D) measurement--thus for example a 1D projection of the subject to be measured (Query 105, downward arrow)--in Step 106 the perturbation-free, corrected image I(x) can be calculated with the aid of the inverted perturbation matrix D^(-1) and the measured F'(k) using Equation (7), I(x) = D^(-1) F'(k). The calculated corrected image can furthermore be displayed and/or be stored for further use, for example on an image computer of the magnetic resonance system (Step 116). If the MR measurement is a two-dimensional (2D) or three-dimensional (3D) measurement (Query 105, leftward arrow), the workflow can proceed differently depending on the type of acquisition of the measurement data. This is described in the following, without limitation of the generality, using the example of an MR measurement by means of a PETRA sequence, which acquires part of the measurement data by means of a radial scanning of k-space and part of the measurement data by means of a Cartesian scan of k-space.
In order to keep the matrix sizes and the calculation times as small as possible, it can be reasonable to utilize a present radial symmetry in k-space and, for example, to correct individual radial projections (1D) given radially acquired measurement data, as is described in Steps 101 through 106. Measurement data acquired in a Cartesian manner can be collected and, in larger matrices, can also be corrected in two-dimensional or three-dimensional space depending on the measurement type, as is described further using Steps 101 through 110. The corrected images acquired from the individual measurement parts can ultimately be assembled into a common corrected image via a complex multiplication (see FIG. 4). It can be reasonable to do this not in image space but rather in k-space. If a Fourier transformation is applied to Equation (7), the following relationship is found between distorted k-space F'(k), or F' in matrix notation, and undistorted, corrected k-space F(k), or F in matrix notation:

F = E F'. (8)

For the matrix E with elements E(k,x) it applies that:

E = M D^(-1), (9)

wherein M is the Fourier matrix with elements e^(-ikx). Using Equation (8), the calculation of a corrected image thus comprises a calculation of an undistorted k-space F, in which the acquired measurement data are corrected from distorted k-space F' (in which the measurement data were acquired) using the perturbation matrix D inverted by the matrix inversion. The workflow can proceed as follows during an MR measurement, for example. If the measurement data are acquired by means of a Cartesian scanning of k-space (Query 107, rightward arrow "cart."), all measurement points acquired in a Cartesian manner are initially collected bit by bit in a matrix of distorted k-space F'_cart(k) that is scanned in a Cartesian manner (Step 108), until all k-space points to be acquired in a Cartesian manner have been excited and acquired.
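The k-space-domain variant of the correction in Equations (8) and (9) can be sketched with the same kind of toy model; again, all parameter values are illustrative assumptions, not values from this application:

```python
import numpy as np

# Undistorted k-space F = E F' with E = M D^(-1), where M is the plain
# Fourier matrix with elements exp(-i k x).
N = 64
gamma, tau, fov = 42.577e6, 14e-6, 0.2        # assumed toy parameters
x = (np.arange(N) - N // 2) * (fov / N)
k = 2 * np.pi * (np.arange(N) - N // 2) / fov
G = np.linspace(1e-3, 8e-3, N)                # per-readout gradients (T/m)

M = np.exp(-1j * k[:, None] * x[None, :])             # undistorted encoding
P = np.sinc(gamma * G[:, None] * x[None, :] * tau)    # hard-pulse profile
D = P * M                                             # perturbed encoding

f = np.zeros(N)
f[20:44] = 1.0
F_dist = D @ f                  # measured, distorted k-space F'(k)

E = M @ np.linalg.inv(D)        # Equation (9)
F_corr = E @ F_dist             # Equation (8): undistorted k-space

# The corrected k-space now reconstructs to the true object.
f_reco = np.linalg.solve(M, F_corr)
assert np.allclose(f_reco.real, f, atol=1e-6)
```

This is the same result as a direct application of Equation (7) followed by a Fourier transformation; working in k-space merely allows the corrected data of different groups to be merged before a final reconstruction.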
In Query 109 a query is made as to whether all k-space points to be acquired have been acquired in the matrix of k-space F'_cart(k) that is scanned in a Cartesian manner (Query 109, downward arrow), or whether additional k-space points have yet to be excited (Step 101) and acquired (Step 102) (Query 109, rightward arrow). If the entirety of k-space F'_cart(k) to be scanned in a Cartesian manner has been acquired, undistorted Cartesian k-space F_cart(k) can be calculated by means of Equation (8) (Step 110). Alternatively, a corrected image I_cart(x) can be calculated directly by means of Equation (7) from the entirety of k-space F'_cart(k) to be scanned in a Cartesian manner (described in detail further below with regard to FIG. 4). In order to obtain a corrected MR image reflecting all k-space points acquired within the entire MR measurement, such a corrected image I_cart(x) would then, for example as mentioned above, be complexly multiplied with corrected images I(x) obtained according to Steps 101 through 106. If the measurement data are acquired by means of a radial scan of k-space (Query 107, leftward arrow "rad."), for each radial projection i undistorted radial k-space F*_rad,i(k) = E F'_rad,i(k) can respectively be calculated according to Equation (8) (Step 111), instead of a calculation of an undistorted image according to Equation (7) or as in Steps 101 through 106. Since the radially acquired k-space points are for the most part not situated on a Cartesian grid in k-space, in a further Step 112 undistorted radial k-space F*_rad,i(k) can be transferred via what is known as "gridding" or "regridding" to undistorted k-space F_rad,i(k) comprising Cartesian k-space points. The Queries 105 and 107 (and 107*--see FIG. 4) separate the acquired measurement data into groups before the calculation of a corrected image, according to the manner in which they were acquired.
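The "gridding" of Step 112 can be illustrated in heavily simplified form: each radial sample is pushed onto its nearest Cartesian cell and cell values are averaged. Production implementations instead use convolution-kernel gridding (for example with a Kaiser-Bessel kernel) and density compensation; this sketch, with an assumed toy trajectory, only shows the basic resampling idea:

```python
import numpy as np

# Nearest-neighbour "gridding" sketch: radial k-space samples (kx, ky)
# with complex values are accumulated on an n x n Cartesian grid.
def grid_radial(kx, ky, values, n, kmax):
    grid = np.zeros((n, n), dtype=complex)
    hits = np.zeros((n, n))
    # Map each sample coordinate in [-kmax, kmax] to a grid index.
    ix = np.clip(np.round((kx / kmax + 1) / 2 * (n - 1)).astype(int), 0, n - 1)
    iy = np.clip(np.round((ky / kmax + 1) / 2 * (n - 1)).astype(int), 0, n - 1)
    np.add.at(grid, (ix, iy), values)   # accumulate sample values per cell
    np.add.at(hits, (ix, iy), 1)        # count samples per cell
    nonzero = hits > 0
    grid[nonzero] /= hits[nonzero]      # average where cells were hit
    return grid, hits

# Toy radial trajectory: 16 spokes through the k-space center,
# 33 samples each (stand-ins for corrected samples F*(k)).
angles = np.linspace(0, np.pi, 16, endpoint=False)
r = np.linspace(-1, 1, 33)
kx = np.outer(np.cos(angles), r).ravel()
ky = np.outer(np.sin(angles), r).ravel()
vals = np.ones_like(kx, dtype=complex)

grid, hits = grid_radial(kx, ky, vals, n=16, kmax=1.0)
assert hits.sum() == kx.size            # every sample landed in a cell
```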
Measurement data converted to undistorted k-spaces F_rad,i(k) and F_cart(k) in the course of a measurement can be combined in common undistorted k-space F(k). Common undistorted k-space F(k) corresponds to target k-space, which is composed of all excitations and measurements that have taken place. In Query 114 a query is made as to whether all radial measurements i for the desired 2D or 3D measurements have been implemented, and whether the acquired measurement data have been transferred into common undistorted k-space F(k). If this is not the case (Query 114, leftward arrow), the workflow continues with an additional excitation (Step 101) and acquisition (Step 102) of measurement data to be acquired radially, until all desired measurement data have been acquired (Query 114, downward arrow). In the latter case, a corrected image can now be calculated from completely filled common undistorted k-space F(k) (Step 115), which corrected image can be displayed in Step 116 and/or be stored for further use. If the acquisition of the measurement data after an excitation 101 takes place via full radial projections, and the dependency on k in the excitation profile P(k,x) in Equation (3) is therefore omitted, the excitation profile is only a function of the location x, i.e. P(k,x) = P(x); Equation (3) then corresponds to a convolution of k-space with the Fourier transform of P(x). Such a perturbation can be remedied simply in that distorted k-space F'(k) is brought into a distorted image space I'(x) (image domain) via Fourier back-transformation (analogous to Equation (2)). The relationship

I(x) = I'(x) / P(x) (10)

then exists between the undistorted image I(x) and the distorted image I'(x). The undistorted image I(x) can therefore be calculated by division of the distorted image I'(x) by the excitation profile P(x) that is not dependent on the k-space point.
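Equation (10), the pixelwise division of the distorted image by the k-independent excitation profile, can be sketched as follows; the object and the profile parameters are toy assumptions:

```python
import numpy as np

# When the perturbation does not depend on k, the distorted image is
# simply the true image weighted by the excitation profile,
# I'(x) = f(x) * P(x), and the correction is a pixelwise division.
x = np.linspace(-0.1, 0.1, 128)                 # locations in m
f = np.where(np.abs(x) < 0.06, 1.0, 0.0)        # toy object f(x)
P = np.sinc(42.577e6 * 5e-3 * x * 14e-6)        # assumed hard-pulse profile
I_dist = f * P                                   # distorted image I'(x)

I_corr = I_dist / P                              # Equation (10)
assert np.allclose(I_corr, f)
```

Note that the division is only well-posed where P(x) does not vanish; in the outermost image regions, where the profile approaches zero, the correction amplifies noise accordingly.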
For example, given a PETRA sequence such an acquisition of measurement data can take place after an excitation pulse, for example given acquisition of measurement data at a second echo time after acquisition of measurement data at a first, ultrashort echo time. The measurement data acquired at Step 102' can thus be converted--by means of a perturbation calculated in Step 103' in the form of the excitation profile P(x)--directly into undistorted images I(x) by means of Equation (10) specified above, and can likewise be displayed in Step 116 and/or be stored for further use. In a further schematic workflow diagram, FIG. 4 illustrates an additional exemplary embodiment of a method according to the invention in which (as was already mentioned above) corrected images obtained from the individual measurement parts are combined into a common corrected image via complex multiplication. The workflow initially corresponds to the workflow of FIG. 3, so the same steps are designated with the same reference characters. Given a one-dimensional (1D) scanning of k-space, for example a radial projection, as described in FIG. 3 an undistorted image I(x) = D^(-1) F'(k) is calculated in Step 106 using Equation (7), wherein here the Query 107* is made as to whether the measurement data have been scanned one-dimensionally, for example in a radial projection (Query 107*, downward arrow, "1D rad.") or in a Cartesian manner (Query 107*, leftward arrow, "cart."). If multiple one-dimensional scans j take place in the course of the measurement, an undistorted image I_rad,j(x) is calculated in Step 106 for each of these scans. The Query 114* is made as to whether all such one-dimensional scans have taken place and associated corrected images have been calculated (Query 114*, downward arrow) or not (Query 114*, rightward arrow), after which the workflow begins again with an excitation 101.
Given a Cartesian scanning of k-space, it is not the undistorted k-space F.sub.cart(k) that is calculated according to Equation (8), as in Step 110 of FIG. 3; rather, in Step 110* a corrected image I.sub.cart(x) is calculated according to Equation (7). As described with reference to FIG. 3, an undistorted image I(x) is also possibly calculated in Step 106' from acquired measurement data for which the excitation profile depends only on the location x (Steps 101, 102', 103' and 106'). The respective calculated undistorted images I.sub.rad,j(x), I.sub.cart(x) and possibly the images from Step 106' are processed in Step 115* via complex multiplication into a complete undistorted image I(x), which can be displayed and/or stored for additional processing in Step 116*. Depending on the application, the undistorted images from Step 106' can also be additionally offset against the complete undistorted image I(x), for example to calculate difference images.

In both the exemplary embodiment described using FIG. 3 and the exemplary embodiment described using FIG. 4, calculated corrected images and/or additional calculated uncorrected images (calculated according to Equation (4)) can also be arbitrarily combined into intermediate images, for example respectively within the group of measurement data acquired in a Cartesian manner and within the group of radially acquired measurement data. For example, according to one of the exemplary embodiments shown in FIG. 3 or FIG. 4, only specific radial projections can be corrected (as corrected images I.sub.rad,i(x) and/or as corrected k-spaces F.sub.rad,i(k)) in order to save calculation time, and these corrected projections can be combined with uncorrected ones (in which, for example, only a slight perturbation is to be expected due to only weakly switched gradients) into a common corrected image I(x).
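Read literally, the combination step described above is a pointwise complex multiplication of the component images. A minimal sketch of that operation (the arrays are synthetic stand-ins, not MR data, and the names are illustrative):

```python
import numpy as np

# Hypothetical corrected component images: radial part, Cartesian part,
# and the profile-only correction (Step 106'), all on the same grid.
shape = (8, 8)
rng = np.random.default_rng(0)
i_rad = np.exp(1j * rng.uniform(0, 0.1, shape))    # unit-magnitude phase factors
i_cart = rng.uniform(0.5, 1.0, shape).astype(complex)
i_prof = np.ones(shape, dtype=complex)

# Pointwise complex multiplication into one combined image.
i_total = i_rad * i_cart * i_prof

# Under complex multiplication, magnitudes combine multiplicatively and
# phases combine additively.
assert np.allclose(np.abs(i_total), np.abs(i_rad) * np.abs(i_cart))
assert np.allclose(np.angle(i_total), np.angle(i_rad) + np.angle(i_cart), atol=1e-12)
```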
Analogously, only specific measurement points acquired in a Cartesian manner can be entered into corrected k-space F.sub.cart(k), into which additional uncorrected measurement values (for which again only a slight perturbation is expected) are also entered. The embodiments described using FIG. 3 and FIG. 4 can also be combined in order to calculate a common corrected image I(x). For example, a selection of measurement data acquired as radial projections and/or a selection of measurement data acquired in a Cartesian manner can be entered, by the method according to FIG. 3, into target k-space in Step 113; from this a corrected image is calculated, which is in turn offset (via complex multiplication) against corrected images calculated according to the method described using FIG. 4 to form a common corrected image.

Although modifications and changes may be suggested by those skilled in the art, it is the intention of the inventors to embody within the patent warranted hereon all changes and modifications as reasonably and properly come within the scope of their contribution to the art.

Patent applications by Bjoern Heismann, Erlangen DE
Patent applications by David Grodzki, Erlangen DE
Mathwords: Converse Switching the hypothesis and conclusion of a conditional statement. For example, the converse of "If it is raining then the grass is wet" is "If the grass is wet then it is raining." Note: As in the example, a proposition may be true but have a false converse. See also Contrapositive, inverse of a conditional, biconditional
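The note above can be checked with a tiny truth-table argument in code (a sketch; the world where the grass is wet for some other reason, say a sprinkler, is the standard counterexample):

```python
def implies(p, q):
    """Material implication: 'if p then q' is false only when p is true and q is false."""
    return (not p) or q

# A world where it is not raining but the grass is wet (a sprinkler ran):
raining, grass_wet = False, True

# The original conditional holds in this world...
assert implies(raining, grass_wet)
# ...but its converse ("if the grass is wet then it is raining") fails.
assert not implies(grass_wet, raining)
```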
axis of symmetry A line drawn through a geometric figure such that the figure is symmetrical (see symmetry) about it. The line may be considered as an axis of rotation: if the figure is rotated about it, there will be two or more positions which are indistinguishable from each other. If the letter Z, for example, is rotated about an axis drawn perpendicularly into the paper through the center of the diagonal stroke, there will be two correspondent positions: the letter has 2-fold rotational symmetry about that axis. In general, if a figure has n correspondent positions on rotation about an axis, it is said to have n-fold symmetry about that axis.
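The n-fold test described above can be sketched numerically: rotate a finite point set by 360/n degrees about the axis and check that the set maps onto itself. The Z-shaped point set below is an illustrative stand-in for the letter's strokes, not a faithful glyph:

```python
import math

def has_n_fold_symmetry(points, n, tol=1e-9):
    """True if rotating the point set by 360/n degrees about the origin
    maps every point onto some point of the original set."""
    angle = 2 * math.pi / n
    c, s = math.cos(angle), math.sin(angle)
    rotated = [(c * x - s * y, s * x + c * y) for x, y in points]
    return all(
        any(math.hypot(rx - px, ry - py) < tol for px, py in points)
        for rx, ry in rotated
    )

# Crude sample of a 'Z': top bar, diagonal through the origin, bottom bar.
z_shape = [(-1, 1), (0, 1), (1, 1), (0, 0), (-1, -1), (0, -1), (1, -1)]

assert has_n_fold_symmetry(z_shape, 2)       # 2-fold, as described above
assert not has_n_fold_symmetry(z_shape, 4)   # but not 4-fold
```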
filter and joint tracking and classification in the TBM framework - J. Advances in Information Fusion

"We are interested in understanding the relationship between Bayesian inference and evidence theory. The concept of a set of probability distributions is central both in robust Bayesian analysis and in some versions of Dempster-Shafer's evidence theory. We interpret imprecise probabilities as imprecise posteriors obtainable from imprecise likelihoods and priors, both of which are convex sets that can be considered as evidence and represented with, e.g., DS-structures. Likelihoods and prior are in Bayesian analysis combined with Laplace's parallel composition. The natural and simple robust combination operator makes all pairwise combinations of elements from the two sets representing prior and likelihood. Our proposed combination operator is unique, and it has interesting normative and factual properties. We compare its behavior with other proposed fusion rules, and earlier efforts to reconcile Bayesian analysis and evidence theory. The behavior of the robust rule is consistent with the behavior of Fixsen/Mahler's modified Dempster's (MDS) rule, but not with Dempster's rule. The Bayesian framework is liberal in allowing all significant uncertainty concepts to be modeled and taken care of and is therefore a viable, but probably not the only, unifying structure that can be economically taught and in which alternative solutions can be modeled, compared and explained." Cited by 4. Manuscript received April 20, 2006; released for publication April

"The Transferable Belief approach to the Theory of Evidence is based on the pignistic transform which, mapping belief functions to probability distributions, allows one to make "precise" decisions on a set of disjoint hypotheses via classical utility theory. In certain scenarios, however, such as medical diagnosis, the need for an "imprecise" approach to decision making arises, in which sets of possible outcomes are compared. We propose here a framework for imprecise decision derived from the TBM, in which belief functions are mapped to k-additive belief functions (i.e., belief functions whose focal elements have maximal cardinality equal to k) rather than Bayesian ones. We do so by introducing two alternative generalizations of the pignistic transform to the case of k-additive belief functions. The latter has several interesting properties: depending on which properties are deemed the most important, the two distinct generalizations arise. The proposed generalized transforms are empirically validated by applying them to imprecise decision in concrete pattern recognition problems."
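For context, the classical (Bayesian-valued) pignistic transform mentioned in the second abstract distributes each mass m(A) uniformly over the elements of A, i.e. BetP(x) = sum over A containing x of m(A)/|A|. A minimal sketch:

```python
from itertools import chain

def pignistic(mass):
    """Pignistic transform of a mass function given as {frozenset: mass}.
    Each focal element's mass is split equally among its elements."""
    frame = set(chain.from_iterable(mass))
    betp = {x: 0.0 for x in frame}
    for focal, m in mass.items():
        for x in focal:
            betp[x] += m / len(focal)
    return betp

# Example: half the belief committed to {a}, half left ambiguous on {a, b}.
m = {frozenset({'a'}): 0.5, frozenset({'a', 'b'}): 0.5}
betp = pignistic(m)

assert abs(betp['a'] - 0.75) < 1e-12
assert abs(betp['b'] - 0.25) < 1e-12
```

The k-additive generalizations proposed in the abstract replace the Bayesian target of this map; the code above shows only the classical case.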
Circumcircle of triangle

March 6th 2010, 09:39 AM #1
Hi everyone there. I've got a question I cannot solve. Can anyone of you tell me its answer? In a figure, D is a point on the circumcircle of triangle ABC in which AB = AC. If CD is produced to E such that BD = CE, then prove that AD = AE.

March 6th 2010, 10:40 AM #2
Major hint: prove that the triangles ABD, ACE are congruent. Also, if you want to upload a file from your computer, you need to click on "Manage attachments" under "Additional Options". Then, when you have browsed to select the file, you must click on the "upload" button.

March 6th 2010, 06:06 PM #3
Thank you very much.

Last edited by mr fantastic; March 7th 2010 at 01:15 AM. Reason: Changed post title
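The hint expands into a short synthetic proof (a sketch, assuming, as the figure suggests, that D lies on the arc BC not containing A, so that ABDC is a cyclic quadrilateral):

```latex
\begin{proof}[Proof sketch]
Since $A$, $B$, $D$, $C$ lie on one circle, $ABDC$ is a cyclic quadrilateral, so
$\angle ABD + \angle ACD = 180^\circ$. Since $E$ lies on line $CD$ produced,
$\angle ACE + \angle ACD = 180^\circ$ as well, hence
\[
  \angle ABD = \angle ACE .
\]
Together with $AB = AC$ (given) and $BD = CE$ (given), the SAS criterion gives
\[
  \triangle ABD \cong \triangle ACE ,
\]
and comparing corresponding sides yields $AD = AE$.
\end{proof}
```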
Here's the question you clicked on:

In Bernoulli's equation, at point 1 the pressure is positive (that is, applied by the water), but at point 2 under the same conditions the pressure is negative (applied by the atmosphere). Why is it so? At point 2 the pressure should also be positive.

Reply: Without a drawing, it is impossible to answer your question. Attach either a scan or a sketch of the situation.

Reply: The positive and negative signs indicate whether the pressure is assisting the movement of the fluid (i.e., helping to change its P.E. or K.E.). For example, while using a pump, the pump applies positive pressure and increases the fluid's kinetic energy, while at the other end the atmospheric pressure opposes the movement of water and is termed negative pressure. In deriving the equation, at one end we force the water to move by applying pressure (the means are unimportant) while the other end is left open to the atmosphere or some other situation. Obviously that pressure will oppose the motion and the work done by it will be negative, as you must have seen while deriving the equation.

Reply: Wow! Thank you very much, Diwakar.
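The sign bookkeeping becomes concrete in a worked example. Below, Bernoulli's equation is applied between the free surface of an open tank (point 1) and a small outlet 5 m lower (point 2), both at atmospheric pressure; the pressure terms cancel and the result reduces to Torricelli's v2 = sqrt(2 g h). The numbers are illustrative:

```python
import math

rho = 1000.0   # water density, kg/m^3
g = 9.81       # gravitational acceleration, m/s^2

# Point 1: tank surface (open to atmosphere, essentially still water), 5 m up.
p1, v1, h1 = 101325.0, 0.0, 5.0
# Point 2: outlet (also at atmospheric pressure), taken as the height datum.
p2, h2 = 101325.0, 0.0

# Bernoulli: p1 + 0.5*rho*v1**2 + rho*g*h1 = p2 + 0.5*rho*v2**2 + rho*g*h2,
# solved for v2.
v2 = math.sqrt(2 * ((p1 - p2) / rho + 0.5 * v1**2 + g * (h1 - h2)))

assert abs(v2 - math.sqrt(2 * g * 5.0)) < 1e-12   # Torricelli's theorem, about 9.9 m/s
```

If point 2 were instead pressurized (p2 > p1), the (p1 - p2)/rho term would turn negative and reduce v2, which is exactly the "pressure opposing the motion" described in the reply above.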
Curvature dependence of the Laplacian operator acting on an (n-1)-dimensional compact submanifold in the n-dimensional Euclidean space

Possibly a simple question in differential geometry (maybe not accurate but understandable in mathematical terms): consider a compact hypersurface $\mathbf{R}$ in $n$-dimensional Euclidean space, parameterized by the $n-1$ variables $(x_1, x_2, \ldots, x_{n-1})$ as

$\mathbf{R} = \{X_1, X_2, X_3, \ldots, X_n\}$, where $X_i = X_i(x_1, x_2, \ldots, x_{n-1})$ is the $i$-th Cartesian coordinate.

What is the result of the Laplacian operator $\nabla^2 = \frac{1}{\sqrt{g}}\,\partial_{\mu}\, g^{\mu\nu} \sqrt{g}\, \partial_{\nu}$ acting on $\mathbf{R}$, i.e. $\nabla^2 \mathbf{R}$? I think that the result should depend purely on the extrinsic curvatures, and also be a geometric invariant. Please offer me the result together with a reference which is accessible to a physicist. Thanks.

2 Answers

It's a pretty elementary computation (it's done in the Appendix of Klaus Ecker's book "Lectures on Regularity for Mean Curvature Flow", for instance) to see that if $f$ is a smooth function defined in a neighborhood of $R$, then
$$ \Delta_R f = \Delta_{\mathbb{R}^n} f - \nabla^2_{\mathbb{R}^n} f (\mathbf{n}, \mathbf{n}) + \mathbf{H}_R \cdot \nabla_{\mathbb{R}^n} f, $$
where $\Delta$ is the negative definite Laplace-Beltrami operator, $\nabla^2$ is the Hessian, $\mathbf{n}$ is a choice of normal to $R$, and $\mathbf{H}_R$ is the mean curvature vector of $R$.

Comment: Thank you for your answer. But in my question, the Laplacian (Laplace-Beltrami) operator takes a definite form, corresponding to your $\Delta$. What then is the difference between $\Delta$ and $\Delta_{\mathbb{R}^n}$? There should be a compact form of the result for the definite $\mathbf{R} = \{X_1, X_2, \ldots, X_n\}$. – QHLIU Apr 15 '13 at 15:39

Comment: I'm not sure I completely understand your question. In any case, $\Delta_{\mathbb{R}^n} = \sum_{i=1}^n \partial_i^2$ is the usual Euclidean Laplacian. $\Delta_R$ is the Laplacian of the metric $g$ induced on $R$ from the Euclidean metric and so is the operator you are interested in. – Rbega Apr 15 '13 at 19:34

Comment: My clarification appears in the form of the answer below. – QHLIU Apr 16 '13 at 3:07

Dear Rbega, thank you! For a two-dimensional surface, I can prove a much simpler relation by direct computation:
$$ \frac{1}{\sqrt{g}}\,\partial_{\mu}\, g^{\mu\nu} \sqrt{g}\, \partial_{\nu}\, \mathbf{R} = 2\,\mathbf{H}_R. $$
Now let me calculate explicitly in general according to your formula, with use of the Einstein summation convention. Since we deal with an $(n-1)$-dimensional surface
$$ \mathbf{R} = \{X_1, X_2, \ldots, X_n\} = X_j\,\mathbf{i}_{X_j}, $$
with $\mathbf{i}_{X_j}$ denoting the unit vector along the $j$-th Cartesian coordinate, we have
$$ \Delta_{\mathbb{R}^n}\mathbf{R} = \partial_{X_i}\partial_{X_i}\mathbf{R} = 0, $$
and
$$ \nabla_{\mathbb{R}^n}\mathbf{R} \equiv (\mathbf{i}_{X_i}\,\partial_{X_i})\,(X_j\,\mathbf{i}_{X_j}) = \mathbf{i}_{X_i}\,\delta_{ij}\,\mathbf{i}_{X_j}, $$
so that
$$ \mathbf{H}_R \cdot \nabla_{\mathbb{R}^n}\mathbf{R} = \mathbf{H}_R. $$
If I am correct, please tell me what the Hessian term $\nabla^2_{\mathbb{R}^n} f(\mathbf{n}, \mathbf{n})$ is in our problem for $\mathbf{R} = X_j\,\mathbf{i}_{X_j}$, and what $\nabla^2_{\mathbb{R}^n}$ is. In physics, we usually use two forms: one is
$$ \frac{1}{\sqrt{g}}\,\partial_{\mu}\, g^{\mu\nu} \sqrt{g}\, \partial_{\nu} $$
on the surface, and the other is $\partial_{X_i}\partial_{X_i}$ in the $n$-dimensional Euclidean space; which of these is $\nabla^2_{\mathbb{R}^n}$?
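The identity in the first answer can be sanity-checked numerically on the unit sphere $S^2 \subset \mathbb{R}^3$. For the coordinate function $f = x = \sin\theta\cos\varphi$ the Euclidean Laplacian and the Hessian term both vanish, so the formula predicts $\Delta_{S^2}\, x = \mathbf{H}\cdot\nabla x = -2x$ (the mean curvature vector of the unit sphere being $-2\hat{\mathbf{n}}$ in the answer's convention). A finite-difference sketch:

```python
import math

def f(theta, phi):
    """Coordinate function x = sin(theta) * cos(phi) on the unit sphere."""
    return math.sin(theta) * math.cos(phi)

def laplace_beltrami_s2(func, t, p, h=1e-4):
    """Laplace-Beltrami operator on S^2 in spherical coordinates,
    (1/sin t) d/dt (sin t * df/dt) + (1/sin^2 t) d^2 f/dp^2,
    evaluated by central finite differences."""
    def g(tt):  # sin(theta) * df/dtheta at (tt, p)
        return math.sin(tt) * (func(tt + h, p) - func(tt - h, p)) / (2 * h)
    term_theta = (g(t + h) - g(t - h)) / (2 * h) / math.sin(t)
    term_phi = (func(t, p + h) - 2 * func(t, p) + func(t, p - h)) / h**2 / math.sin(t)**2
    return term_theta + term_phi

t0, p0 = 1.0, 0.7
lap = laplace_beltrami_s2(f, t0, p0)

# Delta_R f = Delta_{R^3} f - Hess f(n, n) + H . grad f: for f = x the two
# Euclidean terms vanish, leaving H . grad x = -2 x on the unit sphere.
assert abs(lap - (-2.0) * f(t0, p0)) < 1e-5
```

This is consistent with $x$ being a degree-1 spherical harmonic, eigenvalue $-l(l+1) = -2$ of the (negative definite) Laplace-Beltrami operator.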
Mplus Discussion >> Alternatives for LPM or logit

Urszula Topolska posted on Tuesday, March 26, 2013 - 10:08 am
I am looking for an alternative to the LPM and logit regressions used in Stata with a binary dependent variable. So far I have tried to read about nonparametric and semi-parametric methods, but I can't find a matching approach. The sample is quite complex, and the error term seems to be heteroskedastic. Thank you for your suggestions!

Bengt O. Muthen posted on Tuesday, March 26, 2013 - 3:55 pm
Mplus does very well with complex survey data and finite mixture data. Both can be combined with a binary dependent variable.

J Owens posted on Thursday, March 28, 2013 - 8:14 am
First, a clarification: it is my understanding that if I am estimating a path model with a combination of categorical and continuous dependent variables but do not declare the categorical (here, binary) variable as being categorical, then Mplus uses the MLR estimator, treats the categorical dependent variable as continuous, and I can think of this as a linear probability model (LPM). Is this correct?
Second, if I declare the categorical dependent variable as categorical, how would I get Mplus to use the polychoric correlation matrix with the ML/MLR estimator *instead of switching to the WLSMV estimator with theta parameterization*? (Note: I have covariates in my model but can leave them out if necessary to see how the use of polychorics changes the coefficient estimates.) Thank you very much.

Linda K. Muthen posted on Thursday, March 28, 2013 - 10:46 am
If you treat a dependent variable as categorical and use maximum likelihood estimation, you will obtain a logistic regression as the default. Using LINK=PROBIT will get you a probit regression. Using a polychoric correlation matrix as data would not be correct.

J Owens posted on Thursday, March 28, 2013 - 10:49 am
Thank you very much, Linda.
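The LPM's basic drawback, one reason the thread contrasts it with logit/probit, shows up in a few lines: an OLS line fitted to 0/1 outcomes happily predicts "probabilities" outside [0, 1]. A self-contained sketch with synthetic data (not from the thread):

```python
# Ordinary least squares on a binary outcome, computed by hand.
xs = list(range(10))                    # a single covariate, 0..9
ys = [1 if x >= 5 else 0 for x in xs]   # binary outcome

n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

def lpm_predict(x):
    """Linear probability model: the fitted line evaluated at x."""
    return intercept + slope * x

# The fitted 'probabilities' escape the unit interval at the extremes,
# which a logit or probit link rules out by construction.
assert lpm_predict(0) < 0
assert lpm_predict(9) > 1
```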
Rosslyn, VA Precalculus Tutor Find a Rosslyn, VA Precalculus Tutor ...Over the past ten years I have worked with students in varying capacities from volunteering, tutoring, and teaching summer courses, to full time teaching. My experience working with students has been in various private, public as well as charter schools. Growing up in south Texas as a young Ind... 24 Subjects: including precalculus, reading, calculus, geometry ...I have been tutoring since the 11th grade, and have tutored throughout college. I have worked with elementary school children on Math and English. I have also worked with high school students on Math and Science. 40 Subjects: including precalculus, English, reading, chemistry ...I was also a teaching assistant for American University's organic chemistry lab for a year, and a TA for general chemistry lab for a year. I generally like to teach by going through the class material currently being covered, or catching up a student if necessary so that I can reinforce what the... 11 Subjects: including precalculus, chemistry, geometry, algebra 2 ...I was very successful as their tutor. I enjoy math and I am very patient. Physics was always one of my favorite subjects. 15 Subjects: including precalculus, chemistry, calculus, ASVAB ...Geometry is the branch of mathematics that studies figures, objects, and their relationships to each other. Calculations for volume and area, as well as solutions for related problems are included. It also includes the theorem and proof aspects around triangles, their respective shapes and angles. 
17 Subjects: including precalculus, chemistry, calculus, geometry
International Kangaroo Mathematics Contest 2011

42,000 Pakistanis in math contest

Lahore, March 18: Around 42,000 students from more than 500 schools and colleges all over Pakistan participated on Thursday in the International Kangaroo Mathematics Contest (IKMC), the world's largest mathematics competition. The students were from grade III to grade XII. The "International Association Mathematiques sans Frontiers" organised the contest, which is held the world over every year on March 17. This year about 5.5 million students from 56 countries are reported to have appeared in the contest.

Pakistan Kangaroo Commission President Dr A.D. Raza Choudary said in a statement that the main aim of the event was to encourage young students to give their best performance in mathematics and compete with other students at the international level. It also helps identify creative talent among students. The association, which has its headquarters in Paris, has won several international awards including the prestigious Paul Erdős Prize.

Dr Choudary said that ever since Pakistan started participating in the contest six years ago with a modest number of 8,000 students, the Kangaroo commission had observed a very positive response from Pakistani students, as was evident from the five-fold increase in their number to about 42,000 this year. He said there was great enthusiasm among the students to compete at the international level and the fear of mathematics had been replaced by keen interest in the subject. "Now they look forward to challenges in Pakistan. This is an extremely healthy sign for the progress of education in the country, as this sort of enthusiasm of the students in mathematics was not seen five years ago," he added.

Dawn
Existence of Equilibria in Discontinuous Games Nessah, Rabia and Tian, Guoqiang (2008): Existence of Equilibria in Discontinuous Games. Download (300Kb) | Preview This paper investigates the existence of pure strategy, dominant strategy, and mixed strategy Nash equilibria in discontinuous games. We introduce a new notion of weak continuity, called weak transfer quasi-continuity, which is weaker than most known weak notions of continuity, including diagonal transfer continuity in Baye et al (1993) and better-reply security in Reny (1999), and holds in a large class of discontinuous games. We show that it, together with strong diagonal transfer quasiconcavity introduced in the paper, is enough to guarantee the existence of Nash equilibria in compact and convex normal form games. We provide sufficient conditions for weak transfer quasi-continuity by introducing notions of weak transfer continuity, weak transfer upper continuity, and weak transfer lower continuity. Moreover, an analogous analysis is applied to show the existence of dominant strategy and mixed strategy Nash equilibria in discontinuous games. Item Type: MPRA Paper Original Existence of Equilibria in Discontinuous Games Language: English Keywords: Discontinuous games, weak transfer quasi-continuity, pure strategy, mixed strategy, dominant strategy, Nash equilibrium, existence Subjects: C - Mathematical and Quantitative Methods > C7 - Game Theory and Bargaining Theory > C70 - General Item ID: 41206 Depositing Guoqiang Tian Date 12. Sep 2012 12:49 Last 12. Feb 2013 12:09 Aliprantis, C.B., Border, K.C. (1994): Infinite Dimensional Analysis. Springer-Verlag, New York. Athey, S. (2001): Single Crossing Properties and the Existence of Pure Strategy Equilibria in Games of Incomplete Information. Econometrica, 69, 861–889. Bagh, A., Jofre, A. (2006): Reciprocal Upper Semicontinuity and Better Reply Secure Games: A Comment. Econometrica, 74, 1715–1721. Barelli, P., Soza, I. 
(2009): On the Existence of Nash Equilibria in Discontinuous and Qualitative Games, mimeo. Baye, M.R., Tian, G., Zhou, J. (1993): Characterizations of the Existence of Equilibria in Games with Discontinuous and Non-Quasiconcave Payoffs. The Review of Economic Studies, 60, 935– 948. Carmona, G. (2005): On the Existence of Equilibria in Discontinuous Games: Three Counterexamples. International Journal of Game Theory, 33, 181–187. Carmona, G. (2008): An Existence Result of Equilibrium in Discontinuous Economic Games, mimeo. Dasgupta, P., Maskin, E. (1986): The Existence of Equilibrium in Discontinuous Economic Games, I: Theory. The Review of Economic Studies, 53, 1–26. Debreu, G. (1952): A Social Equilibrium Existence Theorem. Proceedings of the National Academy of Sciences of the U. S. A., 38. Deguire, P., Lassonde, M. (1995): Familles S´electantes. Topological Methods in Nonlinear Analysis, 5, 261–269. Gatti, R.J. (2005): A Note on the Existence of Nash Equilibrium in Games with Discontinuous Payoffs. Cambridge Economics Working Paper No. CWPE 0510. Available at SSRN: http://ssrn.com Glicksberg, I.L. (1952): A Further Generalization of the Kakutani Fixed Point Theorem. Proceedings of the American Mathematical Society, 3, 170–174. Jackson, M. O. (2005): Non-existence of Equilibrium in Vickrey, Second-price, and English Auctions, working paper, Stanford University. Karlin, S. (1959): Mathematical Methods and Theory in Games, Programming and Economics, Vol. II (London: Pregamon Press). McLennan, A., Monteiro, P. K., Tourky, R. (2009): Games with Discontinuous Payoffs: a Strengthening of Reny’ s Existence Theorem, mimeo. Milgrom, P., and Roberts, H. (1990): Rationalizability, Learning, and Equilibrium in Games with Strategic Complementarities, Econometrica 58, 1255-1277. Milgrom, P., Weber, R. (1985): Distributional Strategies for Games with Incomplete Information, Mathematics of Operations Research 10, 619-632. Monteiro, P.K., Page, F.H.Jr. 
(2007): Uniform Payoff Security and Nash Equilibrium in Compact Games. Journal of Economic Theory, 134, 566–575.
Morgan, J., Scalzo, V. (2007): Pseudocontinuous Functions and Existence of Nash Equilibria. Journal of Mathematical Economics, 43, 174–183.
Nash, J. (1950): Equilibrium Points in n-Person Games. Proceedings of the National Academy of Sciences, 36, 48–49.
Nash, J.F. (1951): Non-cooperative Games. Annals of Mathematics, 54, 286–295.
Nishimura, K., Friedman, J. (1981): Existence of Nash Equilibrium in n-Person Games without Quasi-Concavity. International Economic Review, 22, 637–648.
Reny, P.J. (1999): On the Existence of Pure and Mixed Strategy Nash Equilibria in Discontinuous Games. Econometrica, 67, 1029–1056.
Reny, P.J. (2009): Further Results on the Existence of Nash Equilibria in Discontinuous Games, mimeo.
Roberts, J., Sonnenschein, H. (1977): On the Existence of Cournot Equilibrium Without Concave Profit Functions. Econometrica, 45, 101–113.
Robson, A.J. (1994): An Informationally Robust Equilibrium in Two-Person Nonzero-Sum Games. Games and Economic Behavior, 2, 233–245.
Rosen, J.B. (1965): Existence and Uniqueness of Equilibrium Point for Concave n-Person Games. Econometrica, 33, 520–534.
Rothstein, P. (2007): Discontinuous Payoffs, Shared Resources, and Games of Fiscal Competition: Existence of Pure Strategy Nash Equilibrium. Journal of Public Economic Theory, 9.
Simon, L. (1987): Games with Discontinuous Payoffs. Review of Economic Studies, 54, 569–597.
Simon, L., Zame, W. (1990): Discontinuous Games and Endogenous Sharing Rules. Econometrica, 58, 861–872.
Tian, G. (1992a): Generalizations of the KKM Theorem and Ky Fan Minimax Inequality, with Applications to Maximal Elements, Price Equilibrium, and Complementarity. Journal of Mathematical Analysis and Applications, 170, 457–471.
Tian, G.
(1992b): Existence of Equilibrium in Abstract Economies with Discontinuous Payoffs and Non-Compact Choice Spaces. Journal of Mathematical Economics, 21, 379–388.
Tian, G. (1992c): On the Existence of Equilibria in Generalized Games. International Journal of Game Theory, 20, 247–254.
Tian, G. (1993): Necessary and Sufficient Conditions for Maximization of a Class of Preference Relations. Review of Economic Studies, 60, 949–958.
Tian, G. (2009): Existence of Equilibria in Games with Arbitrary Strategy Spaces and Payoffs: A Full Characterization, mimeo.
Tian, G., Zhou, Z. (1995): Transfer Continuities, Generalizations of the Weierstrass Theorem and Maximum Theorem: A Full Characterization. Journal of Mathematical Economics, 24, 281–.
Topkis, D.M. (1979): Equilibrium Points in Nonzero-Sum n-Person Submodular Games. SIAM Journal on Control and Optimization, 17(6), 773–787.
Vives, X. (1990): Nash Equilibrium with Strategic Complementarities. Journal of Mathematical Economics, 19, 305–321.
Yao, J.C. (1992): Nash Equilibria in n-Person Games without Convexity. Applied Mathematics Letters, 5, 67–69.

URI: http://mpra.ub.uni-muenchen.de/id/eprint/41206
Poster submission: Coset Enumeration and Geometric Ordering
D. Wright, Oklahoma State University

Limit sets of kleinian groups which contain a fuchsian subgroup of the first kind are closures of unions of circles. One strategy for drawing the limit set of such a group $G$ is to draw all the circles which are equivalent under the group to the limit circle $C$ of the fuchsian subgroup $H$. This is equivalent to enumerating all the left cosets $aH$ of $H$ in $G$. In a wide class of examples, McShane, Parker and Redfern created and used Finite State Automata to enumerate the right cosets $Ha$, with transition rules of the form $Ha \mapsto Hab$ where $b$ ranges over a set of generators of the group. Then the left cosets are enumerated at the same time with representatives $a^{-1}$, and we may plot the circles $a^{-1}(C)$, thus filling out the limit set. One problem with this approach is that if $a_1$ and $a_2$ are coset representatives which are ``close'' in the lexicographical ordering then $a_1^{-1}$ and $a_2^{-1}$ need not be. However, visually there is often a clear geometric ordering to the plotting of the circles. In some cases, we show how to preserve this geometric ordering in the plot by using a specially constructed Finite State Automaton for the whole group $G$, with generators listed in geometric order, together with a plotting automaton or ``mask'' that simply indicates whether or not the currently enumerated element of $G$ should be accepted as a coset representative. The first automaton indicates the tree of words that should be followed, while the second automaton identifies the accepted left cosets. The accepted states do not form a connected subset of the tree of words in $G$. Finally, we shall show how this method allows us to strikingly color the disks for certain cusp groups, revealing certain interesting structures in the limit set.
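The two-automaton strategy above can be sketched in code. The following is a toy illustration only, not the authors' actual automata: it walks the tree of freely reduced words in a rank-2 free group in breadth-first (geometric, tree) order, while a separate predicate plays the role of the "mask" that decides which enumerated group elements are accepted as coset representatives. The mask shown is purely hypothetical.

```python
from collections import deque

GENS = ["a", "A", "b", "B"]                      # A = a^-1, B = b^-1
INV = {"a": "A", "A": "a", "b": "B", "B": "b"}

def reduced_words(max_len):
    """Enumerate freely reduced words in the rank-2 free group,
    in breadth-first order (shorter words first)."""
    queue = deque([""])
    while queue:
        w = queue.popleft()
        yield w
        if len(w) < max_len:
            for g in GENS:
                if not w or INV[g] != w[-1]:     # forbid cancellation
                    queue.append(w + g)

def enumerate_cosets(max_len, accept):
    """Walk the whole group tree in geometric order, but only emit
    the words the acceptance mask approves.  The accepted words need
    not form a connected subtree."""
    return [w for w in reduced_words(max_len) if accept(w)]

# Hypothetical mask: accept words that do not end in a power of 'a'
# (a stand-in for "not absorbable into the subgroup H").
mask = lambda w: not w or w[-1] not in ("a", "A")
```

The point of the separation is that the word-enumeration automaton fixes the plotting order, while the mask only filters, so accepted circles are still drawn in geometric order.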
Volumes by Rotation about y-axis

#1 — March 3rd 2009, 02:27 PM (joined Jan 2009)
Find the volume of the solid that results where the shaded region is revolved about the y-axis. I'm not sure if you solve for x then integrate or if you just integrate the equation as it is.

#2 — March 3rd 2009, 02:55 PM (Junior Member, joined Feb 2009)
You want to rearrange your equation so you have $x = ...$. I'm assuming you can do that. Then integrate like this: $\int \pi x^2 \,dy$. You can take $\pi$ outside the integral: $\pi \int x^2 \,dy$. Here you substitute in your rearranged equation, square it, and then integrate between the intervals that you'll probably have to work out (I'm not sure, since you've made no mention of them in the question).

#3 — March 3rd 2009, 03:02 PM (Super Member, joined May 2006, Lexington, MA (USA))
Hello, kinana18!

Quote: "Find the volume of the solid that results where the shaded region [Where?] is revolved about the y-axis. $y\:=\:3-2x$. I'm not sure if you solve for x then integrate or if you just integrate the equation as it is."

I have no idea what you're talking about . . . do you? The only region that makes any sense looks like this: [diagram: the triangle bounded by the axes and the line $y = 3-2x$, with height 3 on the y-axis and base 1½ on the x-axis]

We have a right circular cone with $r = \tfrac{3}{2},\;h=3$. Its volume is: $V \:=\:\tfrac{1}{3}\pi r^2h \:=\:\tfrac{1}{3}\pi\left(\tfrac{3}{2}\right)^2(3) \:=\:\frac{9\pi}{4}$

If we must use Calculus, I suggest Cylindrical Shells. The formula is: $V \;=\;2\pi\int^b_a\text{(radius)(height)}\,dx$

And we have: $V \;=\;2\pi\int^{\frac{3}{2}}_0 x(3-2x)\,dx$ . . . etc.
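As a quick numerical sanity check (a hedged sketch, not part of the original thread), the shell integral $2\pi\int_0^{3/2} x(3-2x)\,dx$ from the last reply can be approximated with a midpoint Riemann sum and compared against the cone formula $\tfrac13\pi r^2 h = \tfrac{9\pi}{4}$; the function and variable names below are my own.

```python
import math

def shell_volume(height, a, b, n=100_000):
    """Midpoint-rule approximation of V = 2*pi * integral of x*height(x) dx
    over [a, b] -- the cylindrical-shells formula with radius x."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h          # midpoint of the i-th strip
        total += x * height(x)
    return 2 * math.pi * total * h

line = lambda x: 3 - 2 * x             # the region's boundary y = 3 - 2x
v_shells = shell_volume(line, 0.0, 1.5)
v_cone = (1 / 3) * math.pi * 1.5**2 * 3   # (1/3) pi r^2 h with r = 3/2, h = 3
```

Both routes land on the same number, confirming that the shells setup and the elementary cone formula agree.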
Probabilistic models with unknown objects — citing papers (Results 1–10 of 109)

1. In ICDM, 2006 — Cited by 76 (9 self)
Entity resolution is the problem of determining which records in a database refer to the same entities, and is a crucial and expensive step in the data mining process. Interest in it has grown rapidly in recent years, and many approaches have been proposed. However, they tend to address only isolated aspects of the problem, and are often ad hoc. This paper proposes a well-founded, integrated solution to the entity resolution problem based on Markov logic. Markov logic combines first-order logic and probabilistic graphical models by attaching weights to first-order formulas, and viewing them as templates for features of Markov networks. We show how a number of previous approaches can be formulated and seamlessly combined in Markov logic, and how the resulting learning and inference problems can be solved efficiently. Experiments on two citation databases show the utility of this approach, and evaluate the contribution of the different components.

2. In HLT/NAACL, 2007 — Cited by 57 (17 self)
Traditional noun phrase coreference resolution systems represent features only of pairs of noun phrases. In this paper, we propose a machine learning method that enables features over sets of noun phrases, resulting in a first-order probabilistic model for coreference. We outline a set of approximations that make this approach practical, and apply our method to the ACE coreference dataset, achieving a 45% error reduction over a comparable method that only considers features of pairs of noun phrases. This result demonstrates an example of how a first-order logic representation can be incorporated into a probabilistic model and scaled efficiently.

3. Cited by 54 (22 self)
Inductive learning is impossible without overhypotheses, or constraints on the hypotheses considered by the learner. Some of these overhypotheses must be innate, but we suggest that hierarchical Bayesian models help explain how the rest can be acquired. To illustrate this claim, we develop models that acquire two kinds of overhypotheses — overhypotheses about feature variability (e.g. the shape bias in word learning) and overhypotheses about the grouping of categories into ontological kinds like objects and substances.

4. In UAI, 2008 — Cited by 54 (11 self)
Formal languages for probabilistic modeling enable re-use, modularity, and descriptive clarity, and can foster generic inference techniques. We introduce Church, a universal language for describing stochastic generative processes. Church is based on the Lisp model of lambda calculus, containing a pure Lisp as its deterministic subset. The semantics of Church is defined in terms of evaluation histories and conditional distributions on such histories. Church also includes a novel language construct, the stochastic memoizer, which enables simple description of many complex non-parametric models. We illustrate language features through several examples, including: a generalized Bayes net in which parameters cluster over trials, infinite PCFGs, planning by inference, and various non-parametric clustering models. Finally, we show how to implement query on any Church program, exactly and approximately, using Monte Carlo techniques.

5. Cited by 45 (18 self)
Although classical first-order logic is the de facto standard logical foundation for artificial intelligence, the lack of a built-in, semantically grounded capability for reasoning under uncertainty renders it inadequate for many important classes of problems. Probability is the best-understood and most widely applied formalism for computational scientific reasoning under uncertainty. Increasingly expressive languages are emerging for which the fundamental logical basis is probability. This paper presents Multi-Entity Bayesian Networks (MEBN), a first-order language for specifying probabilistic knowledge bases as parameterized fragments of Bayesian networks. MEBN fragments (MFrags) can be instantiated and combined to form arbitrarily complex graphical probability models. An MFrag represents probabilistic relationships among a conceptually meaningful group of uncertain hypotheses. Thus, MEBN facilitates representation of knowledge at a natural level of granularity. The semantics of MEBN assigns a probability distribution over interpretations of an associated classical first-order theory on a finite or countably infinite domain. Bayesian inference provides both a proof theory for combining prior knowledge with observations, and a learning theory for refining a representation as evidence accrues. A proof is given that MEBN can represent a probability distribution on interpretations of any finitely axiomatizable first-order theory.

6. In International Journal of Computer Vision, 2005 — Cited by 43 (6 self)
We develop hierarchical, probabilistic models for objects, the parts composing them, and the visual scenes surrounding them. Our approach couples topic models originally developed for text analysis with spatial transformations, and thus consistently accounts for geometric constraints. By building integrated scene models, we may discover contextual relationships, and better exploit partially labeled training images. We first consider images of isolated objects, and show that sharing parts among object categories improves detection accuracy when learning from few examples. Turning to multiple object scenes, we propose nonparametric models which use Dirichlet processes to automatically learn the number of parts underlying each object category, and objects composing each scene. The resulting transformed Dirichlet process (TDP) leads to Monte Carlo algorithms which simultaneously segment and recognize objects in street and office scenes.

7. In Advances in Neural Information Processing Systems 22, 2009 — Cited by 39 (7 self)
Discriminatively trained undirected graphical models have had wide empirical success, and there has been increasing interest in toolkits that ease their application to complex relational data. The power in relational models is in their repeated structure and tied parameters; at issue is how to define these structures in a powerful and flexible way. Rather than using a declarative language, such as SQL or first-order logic, we advocate using an imperative language to express various aspects of model structure, inference, and learning. By combining the traditional, declarative, statistical semantics of factor graphs with imperative definitions of their construction and operation, we allow the user to mix declarative and procedural domain knowledge, and also gain significant efficiencies. We have implemented such imperatively defined factor graphs in a system we call FACTORIE, a software library for an object-oriented, strongly-typed, functional language. In experimental comparisons to Markov Logic Networks on joint segmentation and coreference, we find our approach to be 3-15 times faster while reducing error by 20-25% — achieving a new state of the art.

8. Cited by 25 (7 self)
Many representation schemes combining first-order logic and probability have been proposed in recent years. Progress in unifying logical and probabilistic inference has been slower. Existing methods are mainly variants of lifted variable elimination and belief propagation, neither of which take logical structure into account. We propose the first method that has the full power of both graphical model inference and first-order theorem proving (in finite domains with Herbrand interpretations). We first define probabilistic theorem proving, their generalization, as the problem of computing the probability of a logical formula given the probabilities or weights of a set of formulas. We then show how this can be reduced to the problem of lifted weighted model counting, and develop an efficient algorithm for the latter. We prove the correctness of this algorithm, investigate its properties, and show how it generalizes previous approaches. Experiments show that it greatly outperforms lifted variable elimination when logical structure is present. Finally, we propose an algorithm for approximate probabilistic theorem proving, and show that it can greatly outperform lifted belief propagation.

9. In Proceedings of the Twenty-Second Annual Conference on Uncertainty in Artificial Intelligence (UAI-06) — Cited by 22 (6 self)
Tasks such as record linkage and multi-target tracking, which involve reconstructing the set of objects that underlie some observed data, are particularly challenging for probabilistic inference. Recent work has achieved efficient and accurate inference on such problems using Markov chain Monte Carlo (MCMC) techniques with customized proposal distributions. Currently, implementing such a system requires coding MCMC state representations and acceptance probability calculations that are specific to a particular application. An alternative approach, which we pursue in this paper, is to use a general-purpose probabilistic modeling language (such as BLOG) and a generic Metropolis-Hastings MCMC algorithm that supports user-supplied proposal distributions. Our algorithm gains flexibility by using MCMC states that are only partial descriptions of possible worlds; we provide conditions under which MCMC over partial worlds yields correct answers to queries. We also show how to use a context-specific Bayes net to identify the factors in the acceptance probability that need to be computed for a given proposed move. Experimental results on a citation matching task show that our general-purpose MCMC engine compares favorably with an application-specific system.

10. Research Paper, 2004 — Cited by 20 (8 self)
Uncertainty is a fundamental and irreducible aspect of our knowledge about the world. Probability is the most well-understood and widely applied logic for computational scientific reasoning under uncertainty. As theory and practice advance, general-purpose languages are beginning to emerge for which the fundamental logical basis is probability. However, such languages have lacked a logical foundation that fully integrates classical first-order logic with probability theory. This paper presents such an integrated logical foundation. A formal specification is presented for multi-entity Bayesian networks (MEBN), a knowledge representation language based on directed graphical probability models. A proof is given that a probability distribution over interpretations of any consistent, finitely axiomatizable first-order theory can be defined using MEBN. A semantics based on random variables provides a logically coherent foundation for open world reasoning and a means of analyzing tradeoffs between accuracy and computation cost. Furthermore, the underlying Bayesian logic is inherently open, having the ability to absorb new facts about the world, incorporate them into existing theories, and/or modify theories in the light of evidence. Bayesian inference provides both a proof theory for combining prior knowledge with observations, and a learning theory for refining a representation as evidence accrues. The results of this paper provide a logical foundation for the rapidly evolving literature on first-order Bayesian knowledge representation, and point the way toward Bayesian languages suitable for general-purpose knowledge representation and computing. Because first-order Bayesian logic contains classical first-order logic as a deterministic subset, it is a natural candidate as a universal representation for integrating domain ontologies expressed in languages based on classical first-order logic or subsets thereof.
Sorting (asymptotic) complexity problem

01-18-2009 #1
Hi all, I have some problems with "asymptotic notations". I have already learned the theory, but there is still something unclear to me "in practice". Let me show you a simple example: let's say we have the Selection sort algorithm. The fact is, it has time complexity O(n^2). But what does that exactly mean? Does it mean that in the worst case this algorithm makes n^2 comparisons, or n^2 selections, or n^2 assignments, or n^2 "something else"? In other words, how can I simply and logically find out the complexity of some algorithm?

Just n^2 operations - it could be anything, or any scalar number of anything. 3 compares, a move and an addition would count as one "operation". Quicksort typically has more operations per iteration than, say, bubble sort. But because it's O(n log n) rather than O(n^2), it soon starts winning over simpler (but more expensive) approaches.
Last edited by Salem; 01-19-2009 at 12:08 AM. Reason: fix late night typos
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut. If at first you don't succeed, try writing your phone number on the exam paper. I support http://www.ukip.org/ as the first necessary step to a free Europe.

The term O(n^2) means that if you double the number of elements the algorithm works on, it will be four times more work. This is because these algorithms usually have a double for-loop (or some such) in the middle of the algorithm, where both loops (essentially) range from 0 to n-1. But the O(n^2) doesn't necessarily make a good comparison for the actual time it takes for one algorithm to solve a problem compared to another algorithm - they can vary dramatically even tho' they both have O(n^2) - one may be very dumb in the number of copies/compares or other operations it takes to do something.
Of course, an algorithm that is O(log(n) * n) will (almost certainly) be dramatically quicker even if it is clumsier than some O(n^2) algorithm, if n is larger than a few dozen.
Compilers can produce warnings - make the compiler programmers happy: Use them! Please don't PM me for help - and no, I don't do help over instant messengers.

Thanks to both. It is now much clearer to me.

One other thing to note about big-O notation is that it doesn't really describe the exact amount of time increase new elements will have; instead it describes the shape of the curve (if that makes any sense). So for example, you won't see an O(n^3 + 4n^2 + 6n) algorithm; instead, it will just be written as O(n^3). That is one reason (along with others) that two n^3 algorithms with the same initial execution speeds will deviate from each other. The reason is that once you start talking about less significant factors in the running time, such as an x-squared part of something that is x-cubed, or the constant factor of something that is O(n), you start running into things that vary solely due to implementation differences, i.e. how well optimised the code is. You might have two algorithms where you conclude that one takes n*n + 3n + 9 operations, and another that takes n*n + 4n + 5 operations, and decide that this is why the first one is faster. However, you then switch compilers and the second one comes out faster for the exact same source code.
My homepage Advice: Take only as directed - If symptoms persist, please see your debugger
Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong"

I agree with what iMalc says.
Another way to put it: consider O(n^2) compared with O(n^2 + n) when n is 1000 (one million versus one million one thousand), and then change to 10000 - it's now one hundred million for the first case, and one hundred million ten thousand for the second case. In the first case the + n contributes 0.1%; in the second case it contributes only 0.01% - now, if you measure that precisely, you don't want O(n), you want "CPU clock-cycles" as the measure. And change your data ever so slightly (replace all Jones with Smith on the name list), and you will almost certainly change the overall execution time by more than 0.01%! Big-O notation is for "the things that change dramatically", not the details. It's a "big picture" tool. Consider a real-time OS where the task-switch is O(n), where n is the number of tasks. Now change the task-switch code so that it is O(1) - it doesn't take much to convince you that it is better - even if the ACTUAL task-switch code itself is 10% slower [assuming you have more than 1 task, that is]. Another real case is "open file" in FAT vs. NTFS - if you have a directory that is sorted by name, it takes O(log(n)) to find the right file. FAT has files stored in the order they are created, so the time to find the right one is O(n) - because you may have to search through every one of them.
Compilers can produce warnings - make the compiler programmers happy: Use them! Please don't PM me for help - and no, I don't do help over instant messengers.

Hi again, I have another question. It should not be too hard to find out the best and the worst case of some algorithm. But how can I find out the average case of some algorithm? In other words, does there exist any general rule for finding out the average case of algorithms?
Just give it several sets of data that aren't the worst and aren't the best - or you can statistically calculate it from the general behaviour of the code - for example, if your best case is an already sorted array, and the worst case is one that is sorted in opposite order, then something with a random set of data will be "average", right? So if "every other element needs swapping", you get "half as many operations each outer loop". But often, worst case is the only one that really matters, because that is what you have to design to cope with, unless you know for some reason that you will never ever have the worst case.
Compilers can produce warnings - make the compiler programmers happy: Use them! Please don't PM me for help - and no, I don't do help over instant messengers.
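To make the thread's point concrete (a hedged sketch of my own, not from the original posts): selection sort always performs exactly n(n-1)/2 element comparisons regardless of input order, so doubling n roughly quadruples the work, while merge sort's comparison count grows only as n log n. Counting the dominant operation like this is exactly the "how do I find out the complexity" exercise the original poster asked about.

```python
def selection_sort_comparisons(n):
    """Comparisons made by selection sort on n elements: n(n-1)/2,
    regardless of input order - the textbook O(n^2) count."""
    return n * (n - 1) // 2

def merge_sort_comparisons(data):
    """Sort a list with merge sort, counting element comparisons;
    returns (sorted_list, comparison_count)."""
    count = 0
    def sort(xs):
        nonlocal count
        if len(xs) <= 1:
            return xs
        mid = len(xs) // 2
        left, right = sort(xs[:mid]), sort(xs[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            count += 1                      # one element comparison
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged
    return sort(list(data)), count
```

Doubling n takes selection sort's count from 499,500 to 1,999,000 comparisons — a factor of about four — which is the "four times more work" claim from earlier in the thread, made measurable.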
Random metrics on compact orientable surfaces

Hello everyone,

Let $S_g$ be a compact orientable surface of genus $g \geq 2$, and let $\mathcal{A}$ be the set of $\mathcal{C}^{\infty}$ Riemannian metrics on $S_g$, endowed with the topology of uniform convergence. Let $h$ be in $\mathcal{A}$. Since $g \geq 2$, $S_g$ is covered by $D^2 \simeq \mathbb{R}^2$, and $h$ can be pulled back on $D^2$ to $\tilde{h}$ in such a way that the projection $(D^2, \tilde{h}) \longrightarrow (S_g, h)$ is a local isometry, and $\pi_1(S_g) \simeq \Gamma \subset \text{Iso}(D^2,\tilde{h})$ acts on $(D^2, \tilde{h})$ by isometries, so $S_g = D^2 / \Gamma$.

My question is: how big is $\Gamma$ in $\text{Iso}(D^2,\tilde{h})$? For example, if one restricts $\mathcal{A}$ to a relevant subset that can be endowed with a good probability measure (to be defined), can one say whether $\Gamma = \text{Iso}(D^2,\tilde{h})$ with probability $1$? Or whether $[\text{Iso}(D^2,\tilde{h}) : \Gamma] < \infty$? More generally, can one compare $\Gamma$ and $\text{Iso}(D^2,\tilde{h})$ for a randomly chosen metric on $S_g$ in any way?

It's clear, for example, that in the hyperbolic case $\Gamma$ is very small in $\textbf{PSL}_2(\mathbb{R})$, but in the general case one can expect $\text{Iso}(D^2,\tilde{h})$ to be small, since the geometry of a random Riemannian manifold has very few symmetries. Maybe this question is very classic; in that case I would be very grateful if anyone could give me references on the topic. Thanks for the future answers!
Tags: riemannian-geometry, surfaces, pr.probability

Comment (Joseph O'Rourke, Jun 12 '13 at 13:10): The earlier MO question, "Random manifolds," contains some related information and references: mathoverflow.net/questions/70714/random-manifolds

2 Answers

Accepted answer: If you take any reasonable probability measure on $\mathcal{A}$, then with probability $1$ there should be a single point of maximum curvature; then the isometry group of $\tilde h$ must preserve a $\Gamma$-orbit, namely the set of the maximum curvature points in the universal covering. You should also be able to single out one direction at the maximum curvature point by the same kind of consideration, showing that indeed the isometry group above is reduced to $\Gamma$. To make all this precise, you would have to define your random metrics, but the same kind of reasoning should work for a generic metric in a reasonable topology.

Second answer: It follows from Theorem 1.3 of the paper of Farb and Weinberger, "Isometries, rigidity and universal covers", MR2456886, that either $\text{Iso}(D^2,\tilde{h})$ is discrete, in which case it contains $\Gamma$ with finite index, or the metric $h$ has constant negative curvature, in which case, after scaling the metric, we may take $(D^2,\tilde h)$ to be the Poincaré metric and $\Gamma$ to be a Fuchsian group.
{"url":"http://mathoverflow.net/questions/133497/random-metrics-on-compact-orientable-surfaces/133506","timestamp":"2014-04-24T12:14:18Z","content_type":null,"content_length":"55893","record_id":"<urn:uuid:558762d3-89a2-429e-a655-bd29f392b02e>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00008-ip-10-147-4-33.ec2.internal.warc.gz"}
Basketball Prospectus | Ken's Mailbag

Once again, I'm clearing out my inbox, two e-mails at a time. At this rate, it will be empty by 2026...

All right Ken, enough is enough. What is going on with Illinois' rating? I keep looking at them with every passing loss...I mean game...and they still remain a few notches above Purdue. I understand Illinois' preconference schedule was pretty tough. I guess what I'm wondering is, does your formula put too much weight on strength of schedule? Because to be 10-12, I don't really care who you are, I don't think you should be rated 35 in any ranking system. They just look completely out of place in the context of the surrounding teams in your ratings. What gives?

This was written before Purdue's win over Illinois on Saturday, which dropped Illinois to 10-13 overall and below Purdue in the power ratings. However, the idea that Illinois is so highly ranked by my system (and I should note, by other predictive systems) offers a conundrum that most people around college hoops want to ignore. A team with a losing record can actually be better than a team with a winning record.

Any number used to describe a team's or player's performance needs to be taken in context. A team's record deserves the same treatment. If Illinois had played Purdue's schedule, it's safe to say they'd have a better record. If the Illini had Purdue's lack of misfortune in close games, their record would be even better.

To its credit, the selection committee seems to understand this better than just about anybody. They've been roundly criticized for giving a 26-2 George Washington team an eight-seed as they did in 2006, or leaving a 25-3 Utah State out of the field entirely in 2004. We'll hear it a bunch of times over the closing week of the season, that if a certain team gets to X amount of wins, they have to get an at-large or even be seeded highly. Of course, they don't.
The sheer number of wins or losses is irrelevant without knowing the circumstances surrounding those games. Should Illinois be considered worse than every power conference team with a winning record? If you're looking for a system that orders teams based on how good their season has been, then my system is going to let you down. Illinois is not having a good season, but just because they've lost a bunch of games doesn't mean they're the equal of Michigan or Northwestern, either. Unlike those two teams, no Big Ten squad should take the Illini lightly.

I am not really a basketball fan, so please forgive me if my question seems idiotic. After reading your Memphis free-throw article, I thought perhaps you would be the best person to answer it. In the final seconds of the Georgetown at West Virginia game on January 26, WVU held a 57-55 lead. GU had the ball and was trying to set for a final shot. Once it got under 10 seconds remaining with GU still not set up, my admittedly inexperienced instinct was that WVU should foul before GU could begin to shoot. They did not, and GU took a three-point attempt that fell with six seconds remaining, resulting in a 58-57 GU victory. My question is simply this: Which of the two strategies has a greater mathematical probability of success?

1. Allow GU to shoot from the floor and possibly hit a (virtual) game-winning 3-pointer, or
2. Intentionally foul GU before they start to shoot and once the clock got under 10 seconds

Before we take a look at this, let's get one thing straight: no coach is going to employ such a strategy. The vast majority of coaches refuse to give a foul on the last possession when they're up by three, which is a strategy that vastly improves the leading team's chances of victory. With that said, the strategy posed by Shane is worth discussing from a theoretical point of view.
One thing we shouldn't forget is that this question assumes that the players are able to carry out the strategy successfully, which in a free-flowing game like basketball is not a safe assumption. However, I'll make this assumption to simplify the analysis.

To start the calculations, the Mountaineers would have most likely fouled Jonathan Wallace or Jessie Sapp, the last two Hoya players to touch the ball on the possession. Wallace is an 85% shooter this season in just 25 attempts, 83% on his career. In any case, Wallace is a better shooter than Sapp, so let's use Wallace's figure to test the worst case from WVU's perspective. There are three possibilities related to Wallace's hypothetical trip to the line. (For those averse to intense math, you might want to turn away.)

Scenario A: Wallace makes both free throws. There's a 72% chance Wallace makes both free throws and ties the game. WVU would have had a few seconds for a final possession. I'll just guess and say they have a 40% chance of scoring. This may be a little high, but we saw their final possession, and there's a reasonable argument that they did score anyway. They only had six seconds then, but in the fouling scenario they'd have a little more time.

Scenario B: Wallace makes the first and misses the second. There's a 13% chance of this happening. Georgetown's chances of winning then depend on them getting an offensive rebound and making the follow-up. I'll generously give them a 25% chance of the offensive rebound and a 50% chance of converting.

Scenario C: Wallace misses the first. There's a 15% chance of this happening, in which case I'll assume that Wallace intentionally misses the second. The same percentages hold for scenario B from that point on, although in this case a conversion results in a tie. I'll throw in a 10% chance that the conversion results in an and-one, thus giving Georgetown the lead.

To sort out what these options mean in terms of winning, let's put them in table form.
The percentages listed are absolute. For example, there's a two percent chance of Scenario C occurring and Georgetown converting the second missed free throw for a tying basket.

Scenario               A     B     C    Sum
Chance of occurring:  72%   13%   15%   100%
Georgetown tying:     72%    0%    2%    73%
Georgetown leading:    0%    2%   <1%     2%
Then WVU winning:     29%   11%   13%    53%
Then WVU tying:       43%    0%    2%    45%

From the WVU tying scenario, it seems reasonable to assume that overtime is a 50/50 proposition in this case. So I'll put half of that 45% into both the WVU and Georgetown win chances. This leaves us with roughly a 75% chance that WVU wins by employing this strategy.

Then the question becomes whether the Mountaineers would have a better chance of winning by playing the final possession straight up. I'm not going to bore you with the details of that calculation, but it's a close call. The two most variable factors in this strategy are the free throw shooting of the opposing player and the chances of winning in overtime. If UTEP's Tony Barbee had encountered this scenario in last Saturday's surprisingly close finish against Memphis, it would have made a lot of sense for UTEP to foul. He would not have done so, of course.

Given how far this type of strategy is from mainstream thinking in college hoops, it will only ever be employed in video games. Regardless, there are cases where it would come in handy in real life.

Ken Pomeroy is an author of Basketball Prospectus. You can contact Ken by clicking here or click here to see Ken's other articles.
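As a postscript to the column above, the fouling arithmetic is easy to reproduce. The sketch below recomputes the scenario table using only the assumptions stated in the column (85% free-throw shooting, a 25% offensive-rebound chance, 50% putback conversion, a 10% and-one chance in Scenario C, a 40% chance WVU scores on its own final possession, and a coin-flip overtime); the variable names are mine:

```python
# Recompute the foul-up-three scenario table from the column's stated assumptions.
ft = 0.85      # Wallace's free-throw percentage
oreb = 0.25    # chance of a Georgetown offensive rebound on a miss
conv = 0.50    # chance the putback attempt goes in
and1 = 0.10    # chance a made putback in Scenario C is an and-one
last = 0.40    # chance WVU scores on its own final possession
ot = 0.50      # overtime treated as a coin flip

p_a = ft * ft          # Scenario A: both free throws made (game tied)
p_b = ft * (1 - ft)    # Scenario B: make, then miss
p_c = 1 - ft           # Scenario C: first free throw missed

putback_b = p_b * oreb * conv   # made putback in B puts Georgetown ahead
putback_c = p_c * oreb * conv   # made putback in C ties (or leads on an and-one)

gu_lead = putback_b + putback_c * and1
wvu_win_reg = p_a * last + (p_b - putback_b) + (p_c - putback_c)
wvu_tie = p_a * (1 - last) + putback_c * (1 - and1)
wvu_total = wvu_win_reg + ot * wvu_tie

print(round(100 * wvu_win_reg),   # 53 -- the "Then WVU winning" row sum
      round(100 * wvu_tie),       # 45 -- the "Then WVU tying" row sum
      round(100 * wvu_total))     # 76 -- the column's "roughly 75%"
```

The small difference from the printed total (75.7% rounds to 76% here, "roughly 75%" in the column) is just rounding.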
{"url":"http://www.basketballprospectus.com/article.php?articleid=140","timestamp":"2014-04-16T07:15:21Z","content_type":null,"content_length":"19051","record_id":"<urn:uuid:31f29ae5-db96-4198-a53d-de092d96a01f>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00293-ip-10-147-4-33.ec2.internal.warc.gz"}
Nahant Algebra 1 Tutor Find a Nahant Algebra 1 Tutor ...Thank you for your interest and preference(?), Alexander

I taught an Algebra college course for twelve participating (future) nurses. It was very successful. I have MS degrees in Physics (University of Stuttgart, Stuttgart, Germany) and in Electrical Engineering (University of Florida, Gainesville, Florida). 6 Subjects: including algebra 1, physics, trigonometry, precalculus My tutoring experience has been vast in the last 10+ years. I have covered several core subjects with a concentration in math. I currently hold a master's degree in math and have used it to tutor a wide array of math courses. 36 Subjects: including algebra 1, chemistry, English, reading ...As a tutor I am highly adaptable and can accommodate students with busy schedules who need to absorb essential calculus concepts quickly, as well as those who want to take their time in order to really grasp the nuances of the field. I took a discrete math class as an undergrad at Umass Boston and got an A. I also periodically helped some of the other people in the class. 14 Subjects: including algebra 1, calculus, geometry, GRE ...I scored a 34 in math on my ACT, senior year of high school. After high school, I attended Boston College where I studied Business and Computer Science. Both fields rely heavily on math, and require the taking of courses such as: statistics, discrete mathematics, math for management, etc. 19 Subjects: including algebra 1, Spanish, writing, English ...My sessions ran parallel with the school teacher's instruction, meaning the material I used was predicated on the school day's lesson plan. We would practice the same concepts and principles but with different questions, figures, and activities. This ensured that the student completed homework ... 49 Subjects: including algebra 1, reading, English, writing
{"url":"http://www.purplemath.com/Nahant_algebra_1_tutors.php","timestamp":"2014-04-17T04:54:13Z","content_type":null,"content_length":"23895","record_id":"<urn:uuid:1647a28e-155a-4b2d-a457-76ee5357d823>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00377-ip-10-147-4-33.ec2.internal.warc.gz"}
Mc Lean, VA Algebra 1 Tutor Find a Mc Lean, VA Algebra 1 Tutor ...I have previously taught economics at the undergraduate level and can help you with microeconomics, macroeconomics, econometrics and algebra problems. I enjoy teaching and working through problems with students since that is the best way to learn. Have studied and scored high marks in econometric... 14 Subjects: including algebra 1, calculus, statistics, geometry ...Assembled, cleaned, and stocked laboratory equipment and prepared laboratory specimens. Evaluated papers and instructed students. I have given professional speeches since 2004 to populations of educators, childcare providers, and entrepreneurs. 64 Subjects: including algebra 1, reading, chemistry, English ...I have a bachelor's degree in Biology and a doctorate degree in cell and molecular biology. I'm a very patient person and would be happy to share my knowledge and love of math and science with you. My educational background includes a bachelor's degree in General Biology from Cornell Univers... 14 Subjects: including algebra 1, reading, geometry, biology ...I do work full time, but would be available during evening hours and sometimes on the weekends on a case by case basis. I'd love to help you or your child succeed academically, be it in a school course or in preparing for an exam like the GRE. Please contact me if you are interested in working together. 25 Subjects: including algebra 1, chemistry, calculus, physics I am fluent in Portuguese and Spanish, having taken advanced coursework in both languages at the university level and having lived and worked in Latin America. In addition to being born in South America, I have lived, worked, and traveled in 12 Spanish-speaking countries. I have also traveled extensively on my 6 trips to Brazil, visiting 11 Brazilian states.
10 Subjects: including algebra 1, Spanish, geometry, algebra 2 Related Mc Lean, VA Tutors Mc Lean, VA Accounting Tutors Mc Lean, VA ACT Tutors Mc Lean, VA Algebra Tutors Mc Lean, VA Algebra 2 Tutors Mc Lean, VA Calculus Tutors Mc Lean, VA Geometry Tutors Mc Lean, VA Math Tutors Mc Lean, VA Prealgebra Tutors Mc Lean, VA Precalculus Tutors Mc Lean, VA SAT Tutors Mc Lean, VA SAT Math Tutors Mc Lean, VA Science Tutors Mc Lean, VA Statistics Tutors Mc Lean, VA Trigonometry Tutors
{"url":"http://www.purplemath.com/mc_lean_va_algebra_1_tutors.php","timestamp":"2014-04-19T17:37:41Z","content_type":null,"content_length":"24140","record_id":"<urn:uuid:29bcabd3-e95d-4257-b33d-ac1eb6146e82>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00256-ip-10-147-4-33.ec2.internal.warc.gz"}
Dublin, CA Prealgebra Tutor Find a Dublin, CA Prealgebra Tutor My passion lies within education. I was fortunate enough to be selected to manage my own laboratory courses in general biology and upper division biochemistry. Though both opportunities were extremely rewarding, my experience as a general biology instructor solidified my passion for teaching. 12 Subjects: including prealgebra, geometry, biology, algebra 1 ...For a student struggling with grammatical concepts, this may mean kicking a soccer ball back and forth in order to understand the difference between direct and indirect objects. For another student who struggles with reading comprehension, this may mean diagramming the elements of plot or making... 14 Subjects: including prealgebra, reading, algebra 1, grammar ...I have nearly forty scientific publications and book chapters from applying my chemistry knowledge to challenging problems in biochemistry and material science. I continue to use my chemistry background to solve difficult scientific challenges posted on open-innovation websites for financial awa... 19 Subjects: including prealgebra, chemistry, physics, calculus ...I am an effective and enthusiastic tutor seeking to continue this line of educational work, and can work with students ages 10 to adult, and ability levels from elementary school to college. Through my academic and personal experiences, I have developed a deep appreciation for the value of a goo... 11 Subjects: including prealgebra, reading, English, ESL/ESOL ...I never give up until we create the success the student needs. So many standardized tests are required these days – CAHSEE, PSAT, SAT I and SAT II subject tests, ACT and the many Advanced Placement tests [AP Literature, AP History, AP Chemistry, AP Math, AP Physics, AP Languages ... and more]. S... 
44 Subjects: including prealgebra, English, chemistry, physics Related Dublin, CA Tutors Dublin, CA Accounting Tutors Dublin, CA ACT Tutors Dublin, CA Algebra Tutors Dublin, CA Algebra 2 Tutors Dublin, CA Calculus Tutors Dublin, CA Geometry Tutors Dublin, CA Math Tutors Dublin, CA Prealgebra Tutors Dublin, CA Precalculus Tutors Dublin, CA SAT Tutors Dublin, CA SAT Math Tutors Dublin, CA Science Tutors Dublin, CA Statistics Tutors Dublin, CA Trigonometry Tutors Nearby Cities With prealgebra Tutor Brentwood, CA prealgebra Tutors Castro Valley prealgebra Tutors Danville, CA prealgebra Tutors Fremont, CA prealgebra Tutors Hayward, CA prealgebra Tutors Lafayette, CA prealgebra Tutors Livermore, CA prealgebra Tutors Menlo Park prealgebra Tutors Piedmont, CA prealgebra Tutors Pleasanton, CA prealgebra Tutors Redwood City prealgebra Tutors San Leandro prealgebra Tutors San Ramon prealgebra Tutors Union City, CA prealgebra Tutors Walnut Creek, CA prealgebra Tutors
{"url":"http://www.purplemath.com/dublin_ca_prealgebra_tutors.php","timestamp":"2014-04-17T13:44:56Z","content_type":null,"content_length":"24196","record_id":"<urn:uuid:f8bdf7b5-f1b1-42a3-94fc-89bf561c2595>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
Signature algorithms

The naming scheme described here is different from that described in Sun's documentation for the Java Cryptography Architecture. Several name formats have been used up to now for signature algorithms:
• in JDK 1.1, "digest/signature-primitive" (e.g. "SHA-1/RSA" and "SHA/RSA"),
• in Cryptix 3.0.x, "digest/signature-primitive/PKCS#1" (e.g. "SHA-1/RSA/PKCS#1" and "SHA/RSA/PKCS#1"),
• in JDK 1.2, "digestwithsignature-primitive" (e.g. "SHA1withRSA"; note that this is a slightly irregular example because of the missing hyphen in SHA1).
These names are now deprecated (but are still specified temporarily as aliases). The new preferred format is "signature-primitive/signature-encoding". For this format, the signature encoding will usually have a message digest name as a creation parameter, e.g. "RSA/PKCS1-1.5(SHA-1)". All of the algorithms defined here use either modular exponentiation or elliptic curve multiplication, which are potentially vulnerable to timing attacks. See the following paper for details and possible countermeasures:

DSA[(outputFormat)] Signature

The Digital Signature Algorithm, as defined in NIST FIPS PUB 186. (This is technically equivalent to the version of DSA defined in NIST FIPS PUB 186-2. However, Change Notice 1 to FIPS PUB 186-2 requires keys to have a modulus length of exactly 1024 bits. This algorithm does not impose that requirement, which would be an incompatible change.) The default outputFormat is "DER". This algorithm is separated from the generalisation, "DSA-1363", described below, in order to ensure that an implementation of DSA by a provider earlier in the priority list does not 'mask' an implementation of DSA-1363. □ "1.2.840.10040.4.3", "SHA1withDSA" □ "SHA/DSA", "SHA-1/DSA", and "1.3.14.3.2.12" (all deprecated) □ [Def] U.S. National Institute of Standards and Technology, "Digital Signature Standard (DSS)," NIST FIPS PUB 186, U.S. Department of Commerce.
http://www.itl.nist.gov/div897/pubs/fip186.htm and http://www.itl.nist.gov/div897/pubs/186chg-1.htm □ [Inf] U.S. National Institute of Standards and Technology, "Digital Signature Standard (DSS)," NIST FIPS PUB 186-2 + Change Notice 1, U.S. Department of Commerce. □ [Inf] ANSI X9.30-1, "American National Standard, Public-Key Cryptography Using Irreversible Algorithms for the Financial Services Industry", 1993. □ [Inf] IEEE, IEEE Std 1363-2000: Standard Specifications For Public Key Cryptography. □ [Inf] Bruce Schneier, "Section 20.1 Digital Signature Algorithm (DSA)," Applied Cryptography, Second Edition, John Wiley & Sons, 1996. □ [Patent] The United States of America, as represented by the Secretary of Commerce (assignee), "Digital signature algorithm," U.S. Patent 5,231,668, filed July 26 1991, issued July 27 1993. □ [An] Serge Vaudenay, "Hidden collisions on DSS," Advances in Cryptology - Crypto '96, Volume 1109 of Lecture Notes in Computer Science, pp. 83-88. Springer-Verlag, 1996. □ [An] Phong Q. Nguyen, "The Dark Side of the Hidden Number Problem: Lattice Attacks on DSA," Cryptography and Computational Number Theory, CCNT '99 (K. Lam, I. Shparlinski, H. Wang, and C. Xing, eds.) Progress in Computer Science and Applied Logic 20, pp. 321-330. Birkhäuser, 2001. □ [An] Phong Q. Nguyen, Igor E. Shparlinski, "The Insecurity of the Digital Signature Algorithm with Partially Known Nonces," Journal of Cryptology, Volume 15 (2002), pp. 151-176. □ [An] Daniel Bleichenbacher, "On the Generation of DSA One-Time Keys," Presented at ECC 2002. Some providers may implement a "RawDSA" algorithm, which takes a 20-byte input corresponding to the SHA-1 hash of the message to be signed. This is not formally defined as an algorithm name in SCAN; "DSA-1363(DER)/Raw" is similar but not identical (since it uses a generalisation of DSA). 
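To make the DSA entry above concrete, here is a toy walk-through of the signing and verification equations in Python. This is purely illustrative: the parameters (p = 23, q = 11) are far below the FIPS 186 minimums quoted in the security comments, the per-signature nonce is fixed, and the "hash" is just a small integer rather than a SHA-1 digest.

```python
# Toy DSA over a tiny group: p = 23, q = 11 (q divides p - 1 = 22),
# g = 2**((p-1)//q) mod p = 4, an element of order q.
# Real DSA requires p of 512-1024 bits, q of 160 bits, and SHA-1 hashing.
# Requires Python 3.8+ for modular inverses via pow(x, -1, m).
p, q, g = 23, 11, 4
x = 7                  # private key, 0 < x < q
y = pow(g, x, p)       # public key

def sign(h, k):
    """Sign hash value h using per-signature nonce k (0 < k < q)."""
    r = pow(g, k, p) % q
    s = pow(k, -1, q) * (h + x * r) % q
    return r, s        # a real implementation retries if r == 0 or s == 0

def verify(h, r, s):
    w = pow(s, -1, q)
    u1, u2 = h * w % q, r * w % q
    v = pow(g, u1, p) * pow(y, u2, p) % p % q
    return v == r

r, s = sign(3, k=3)
print(verify(3, r, s))     # True
```

With the fixed nonce this run is deterministic; in practice each k must be fresh, secret, and uniform on (0, q), which is exactly the point of the nonce attacks cited in this entry's references.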
Security comments: □ FIPS 186 specifies that: ☆ the key parameter p may be between 512 and 1024 bits, in multiples of 64 bits; ☆ the key parameter q must be 160 bits; ☆ the encoding method must be that specified in the standard, using SHA-1 as the hash function. Values of p and q larger than 1024 and 160 bits respectively may be desirable for long term security, however implementations of this algorithm are not required to support such parameters. Applications that require a more general algorithm are encouraged to use "DSA-1363" instead. □ DSA SHOULD only be used with parameters that have been generated pseudo-randomly (as described in FIPS 186). If this is not the case, it may be possible for whoever generated the parameters to forge one or more signatures. □ The abstract of the paper by Vaudenay cited above is: We explain how to forge public parameters for the Digital Signature Standard with two known messages which always produce the same set of valid signatures (what we call a collision). This attack is thwarted by using the generation algorithm suggested in the specifications of the Standard, so it proves one always need to check proper generation. We also present a similar attack when using this generation algorithm within a complexity 2^74, which is better than the birthday attack which seeks for collisions on the underlying hash function. □ To prevent the attacks by Bleichenbacher, Nguyen, and Shparlinski, the random nonces generated for each signature must be independent and uniformly distributed on [0, q). DSA-1363[(outputFormat)][/encoding] Signature A generalisation of the Digital Signature Algorithm, as defined in IEEE Std 1363-2000. 
By default, the EMSA1 encoding method specified by IEEE Std 1363-2000 is used, with the SHA-1 message digest algorithm. If an encoding method is explicitly specified, it is used instead of the default (this requires that the order of the base in the DSA parameters, usually denoted q, is large enough to accommodate message representatives generated by this encoding method). This algorithm also differs from the "DSA" algorithm in having "1363" as the default outputFormat. The "DER" and "OpenPGP" output formats SHOULD normally also be supported. □ "dsa-sha1" is an alias to "DSA-1363/EMSA1(SHA-1)" (for SPKI support). □ "http://www.w3.org/2000/02/xmldsig#dsa" is an alias to "DSA-1363/EMSA1(SHA-1)" (for DSIG support). □ [Def] IEEE, IEEE Std 1363-2000: Standard Specifications For Public Key Cryptography. □ [Inf] U.S. National Institute of Standards and Technology, "Digital Signature Standard (DSS)," NIST FIPS PUB 186, U.S. Department of Commerce. http://www.itl.nist.gov/div897/pubs/fip186.htm and http://www.itl.nist.gov/div897/pubs/186chg-1.htm □ [Inf] U.S. National Institute of Standards and Technology, "Digital Signature Standard (DSS)," NIST FIPS PUB 186-2 + Change Notice 1, U.S. Department of Commerce. □ [Inf] Bruce Schneier, "Section 20.1 Digital Signature Algorithm (DSA)," Applied Cryptography, Second Edition, John Wiley & Sons, 1996. □ [Patent] The United States of America, as represented by the Secretary of Commerce (assignee), "Digital signature algorithm," U.S. Patent 5,231,668, filed July 26 1991, issued July 27 1993. □ [An] Serge Vaudenay, "Hidden collisions on DSS," Advances in Cryptology - Crypto '96, Volume 1109 of Lecture Notes in Computer Science, pp. 83-88. Springer-Verlag, 1996. □ [An] Phong Q. Nguyen, "The Dark Side of the Hidden Number Problem: Lattice Attacks on DSA," Cryptography and Computational Number Theory, CCNT '99 (K. Lam, I. Shparlinski, H. Wang, and C. Xing, eds.) Progress in Computer Science and Applied Logic 20, pp. 321-330. Birkhäuser, 2001.
□ [An] Phong Q. Nguyen, Igor E. Shparlinski, "The Insecurity of the Digital Signature Algorithm with Partially Known Nonces," Journal of Cryptology, Volume 15 (2002), pp. 151-176. □ [An] Daniel Bleichenbacher, "On the Generation of DSA One-Time Keys," Presented at ECC 2002. □ [Test] IEEE, Test Vectors for Std 1363-2000. [for DSA-1363/EMSA1(SHA-1)] It is recommended that implementations make no practical restriction on the lengths of the key parameters p, q, g and x (in particular, values of p up to at least 4096 bits SHOULD be supported). Security comments: □ The security properties of DSA-1363 when used with an encoding method other than the default have not been extensively studied. □ DSA-1363 SHOULD only be used with parameters that have been generated pseudo-randomly (as described in FIPS 186). If this is not the case, it may be possible for whoever generated the parameters to forge one or more signatures. □ The abstract of the paper by Vaudenay cited above is: We explain how to forge public parameters for the Digital Signature Standard with two known messages which always produce the same set of valid signatures (what we call a collision). This attack is thwarted by using the generation algorithm suggested in the specifications of the Standard, so it proves one always need to check proper generation. We also present a similar attack when using this generation algorithm within a complexity 2^74, which is better than the birthday attack which seeks for collisions on the underlying hash function. These attacks apply to the default encoding method. □ To prevent the attacks by Bleichenbacher, Nguyen, and Shparlinski, the random nonces generated for each signature must be independent and uniformly distributed on [0, q). ECDSA[(outputFormat)][/encoding] Signature A generalisation of the Elliptic Curve Digital Signature Algorithm, as defined in IEEE Std 1363-2000. 
By default, the EMSA1 encoding method specified by IEEE Std 1363-2000 is used, with the SHA-1 message digest algorithm. If an encoding method is explicitly specified, it is used instead of the default (this requires that the order of the base point in the elliptic curve parameters, usually denoted n, is large enough to accommodate message representatives generated by this encoding method). The default outputFormat is "1363". "ecdsa-sha1" is an alias to "ECDSA(1363)/EMSA1(SHA-1)" (for SPKI support). "1.2.840.10045.4.1" is an alias to "ECDSA(DER)/EMSA1(SHA-1)". □ [Def] IEEE, IEEE Std 1363-2000: Standard Specifications For Public Key Cryptography. □ [Inf] X9.62-199x (draft), Public Key Cryptography For The Financial Services Industry: The Elliptic Curve Digital Signature Algorithm (ECDSA). □ [Inf] U.S. National Institute of Standards and Technology, "Digital Signature Standard (DSS)," NIST FIPS PUB 186-2 + Change Notice 1, U.S. Department of Commerce. □ [An] Phong Q. Nguyen, Igor E. Shparlinski, "The Insecurity of the Elliptic Curve Digital Signature Algorithm with Partially Known Nonces," To appear in Designs, Codes and Cryptography. □ [Test] IEEE, Test Vectors for Std 1363-2000. It is recommended that implementations make no practical restriction on the lengths of the key parameters. Security comment: □ The security properties of ECDSA when used with an encoding method other than the default have not been extensively studied. □ To prevent the attacks by Nguyen and Shparlinski, the random nonces generated for each signature must be independent and uniformly distributed on [0, q). Patent status: ?

ECNR(outputFormat)/encoding Signature Kaisa Nyberg, Rainer A. Rueppel

The elliptic curve analogue of the Nyberg-Rueppel signature scheme, as defined in IEEE Std 1363-2000. This algorithm is specified by the ECSSA signature scheme used with the ECSP-NR signature primitive, and the ECVP-NR verification primitive.
□ [Def] IEEE, IEEE Std 1363-2000: Standard Specifications For Public Key Cryptography. □ [Inf] Kaisa Nyberg, Rainer A. Rueppel, "A New Signature Scheme Based on the DSA Giving Message Recovery," 1st ACM Conference on Computer and Communications Security, Nov 3-5, 1993, Fairfax, Virginia. □ [Patent] r^3 Security Engineering AG (assignee), "Digital signature method and key agreement method," U.S. Patent 5,600,725, filed August 17 1994, issued February 4 1997. □ [An] E. El Mahassni, Phong Nguyen, Igor Shparlinski, "The Insecurity of Nyberg-Rueppel and Other DSA-Like Signature Schemes with Partially Known Nonces, To appear in Proceedings of Cryptography and Lattices Conference, CaLC 2001 (J.H. Silverman, ed.) Volume 2146 of Lecture Notes in Computer Science, pp. 97-109. Springer-Verlag, 2001. It is recommended that implementations make no practical restriction on the lengths of the key parameters. Security Comment: To prevent the attacks by Nguyen and Shparlinski, the random nonces generated for each signature must be independent and uniformly distributed. Patent status: r^3 Security Engineering (now merged with Entrust Technologies) is the assignee of a patent on the Nyberg-Rueppel signature scheme. Certicom Corp., in a letter to the IEEE P1363 Chair, claims to have the exclusive North American license rights to this patent. It is not clear whether or not the patent also applies to ECNR. ElgamalSig(outputFormat)/encoding Signature Taher Elgamal □ "MD2/ElGamal" and "MD2/ElGamal/PKCS#1" are deprecated aliases to "ElgamalSig(OpenPGP)/PKCS1-1.5(MD2)". □ "MD5/ElGamal" and "MD5/ElGamal/PKCS#1" are deprecated aliases to "ElgamalSig(OpenPGP)/PKCS1-1.5(MD5)". □ "SHA/ElGamal", "SHA-1/ElGamal", "SHA/ElGamal/PKCS#1", and "SHA-1/ElGamal/PKCS#1" are deprecated aliases to "ElgamalSig(OpenPGP)/PKCS1-1.5(SHA-1)". □ "RIPEMD160/ElGamal", "RIPEMD-160/ElGamal", "RIPEMD160/ElGamal/PKCS#1" and "RIPEMD-160/ElGamal/PKCS#1" are deprecated aliases to "ElgamalSig(OpenPGP)/PKCS1-1.5(RIPEMD-160)". 
□ [Def] Taher Elgamal, "A Public-Key Cryptosystem and a Signature Scheme Based on Discrete Logarithms," IEEE Transactions on Information Theory, v. IT-31, n. 4, 1985, pp. 469-472, or Advances in Cryptology - CRYPTO '84, pp. 10-18, Springer-Verlag, 1985. □ [Inf] Bruce Schneier, "Section 19.6 ElGamal," Applied Cryptography, Second Edition, John Wiley & Sons, 1996. □ [Inf] D. Bleichenbacher, "Generating ElGamal signatures without knowing the secret key," Advances in Cryptology - EUROCRYPT '96 (corrected version), Volume 1070 of Lecture Notes in Computer Science, pp. 10-18. Springer Verlag, 1996. □ Taher Elgamal currently spells his name, and the name of the Elgamal algorithm with a lowercase 'g'. □ The reason for choosing separate names "ElgamalEnc" and "ElgamalSig", for Elgamal encryption and signatures respectively, is that ElgamalEnc keys can use the "DH" key family, while ElgamalSig requires its own key family (because Elgamal signature keys have additional security constraints). □ It is recommended that implementations make no practical restriction on the lengths of the key parameters p, g and x (in particular, values of p up to at least 4096 bits SHOULD be supported). Security comments: □ p SHOULD be a safe prime, i.e. such that (p-1)/2 is prime. □ The paper by Bleichenbacher referenced above shows that if g has only small prime factors, and if g divides the order of the group it generates, then signatures can be forged. × ESIGN/encoding Signature Eiichiro Fujisaki, Tatsuaki Okamoto Submission dated November 1998. The ESIGN signature algorithm, as defined in the IEEE P1363a draft standard. Note that P1363a only allows use of ESIGN with the EMSA5-MGF1 encoding method. □ [Def] IEEE, "Draft Standard Specifications for Public Key Cryptography Amendment 1: Additional Techniques," □ [Inf, An, Test, Impl, Patent] Nippon Telegraph and Telephone Corporation, ESIGN Signatures Homepage, http://info.isl.ntt.co.jp/esign and □ [An] J. Stern, D. Pointcheval, J. Malone-Lee, N. 
P. Smart, "Flaws in Applying Proof Methodologies to Signature Schemes," Advances in Cryptology - Proceedings of Crypto 2002, Volume 2442 of Lecture Notes in Computer Science, pp. 93-110. Springer-Verlag, 2002. □ [An] Eiichiro Fujisaki, Tatsuaki Okamoto, "Security of Efficient Digital Signature Scheme TSH-ESIGN," Manuscript, November 1998, available as appendix A of "TSH-ESIGN: Efficient Digital Signature Scheme Using Trisection Size Hash." □ [Inf] Tatsuaki Okamoto, Eiichiro Fujisaki, Hikaru Morita, "TSH-ESIGN: Efficient Digital Signature Scheme Using Trisection Size Hash," Submission to IEEE P1363a, November 1998. http://grouper.ieee.org/groups/1363/StudyGroup/contributions/esign.pdf □ [History] A. Fujioka, Tatsuaki Okamoto, S. Miyaguchi, "ESIGN: An efficient digital signature implementation for smart cards," Advances in Cryptology - Proceedings of EUROCRYPT '91, pp. 446-457. Springer-Verlag, 1991. □ [History] Tatsuaki Okamoto, "A Fast Signature Scheme Based on Congruential Polynomial Operations," IEEE Transactions on Information Theory, IT-36, 1, pp. 47-53 (1990). □ [History] Tatsuaki Okamoto, A. Shiraishi, "A Digital Signature Scheme Based on Quadratic Inequalities," Proceeding of Symposium on Security and Privacy, pp. 123-132. IEEE, April 1985. □ It is recommended that implementations make no practical restriction on the lengths of the key parameters. ? NR(outputFormat)/encoding Signature Kaisa Nyberg, Rainer A. Rueppel The Nyberg-Rueppel signature scheme, with message encoding as defined in IEEE Std 1363-2000. This algorithm is specified by the DLSSA signature scheme used with the DLSP-NR signature primitive, and the DLVP-NR verification primitive. □ [Def] IEEE, IEEE Std 1363-2000: Standard Specifications For Public Key Cryptography. □ [Inf] Kaisa Nyberg, Rainer A. Rueppel, "A New Signature Scheme Based on the DSA Giving Message Recovery," 1st ACM Conference on Computer and Communications Security, Nov 3-5, 1993, Fairfax, Virginia. 
□ [Patent] r^3 Security Engineering AG (assignee), "Digital signature method and key agreement method," U.S. Patent 5,600,725, filed August 17 1994, issued February 4 1997.
□ [An] E. El Mahassni, Phong Nguyen, Igor Shparlinski, "The Insecurity of Nyberg-Rueppel and Other DSA-Like Signature Schemes with Partially Known Nonces," To appear in Proceedings of Cryptography and Lattices Conference, CaLC 2001 (J.H. Silverman, ed.), Volume 2146 of Lecture Notes in Computer Science, pp. 97-109. Springer-Verlag, 2001.
It is recommended that implementations make no practical restriction on the lengths of the key parameters p, q, g and x (in particular, values of p up to at least 4096 bits SHOULD be supported).
Security comment: To prevent the attacks by Nguyen and Shparlinski, the random nonces generated for each signature must be independent and uniformly distributed.
Patent status: r^3 Security Engineering (now merged with Entrust Technologies) is the assignee of a patent on the Nyberg-Rueppel signature scheme. Certicom Corp., in a letter to the IEEE P1363 Chair, claims to have the exclusive North American license rights to this patent.

Ron Rivest, Adi Shamir, Leonard Adleman
□ "RSASSA", "1.2.840.113549.1.1.1"
□ "SHA1withRSA" is an alias to "RSA/PKCS1-1.5(SHA-1)" (for JCA 1.2 compatibility).
□ "RIPEMD160withRSA" is an alias to "RSA/PKCS1-1.5(RIPEMD-160)" (for JCA 1.2 compatibility).
□ "MD5withRSA" is an alias to "RSA/PKCS1-1.5(MD5)" (for JCA 1.2 compatibility).
□ "MD2withRSA" is an alias to "RSA/PKCS1-1.5(MD2)" (for JCA 1.2 compatibility).
□ "rsa-pkcs1-sha1" is an alias to "RSA/PKCS1-1.5(SHA-1)" (for SPKI support).
□ "rsa-pkcs1-md5" is an alias to "RSA/PKCS1-1.5(MD5)" (for SPKI support).
□ "http://www.w3.org/2000/02/xmldsig#rsa-sha1" is an alias to "RSA/PKCS1-1.5(SHA-1)" (for DSIG support).
□ "SHA/RSA", "SHA-1/RSA", "SHA/RSA/PKCS#1", and "SHA-1/RSA/PKCS#1" are deprecated aliases to "RSA/PKCS1-1.5(SHA-1)".
□ "RIPEMD160/RSA", "RIPEMD-160/RSA", "RIPEMD160/RSA/PKCS#1" and "RIPEMD-160/RSA/PKCS#1" are deprecated aliases to "RSA/PKCS1-1.5(RIPEMD-160)".
□ "MD5/RSA" and "MD5/RSA/PKCS#1" are deprecated aliases to "RSA/PKCS1-1.5(MD5)".
□ "MD2/RSA" and "MD2/RSA/PKCS#1" are deprecated aliases to "RSA/PKCS1-1.5(MD2)".
□ [Def] Ron Rivest, Adi Shamir, Leonard Adleman, "A Method for Obtaining Digital Signatures and Public-Key Cryptosystems," MIT Laboratory for Computer Science and Department of Mathematics. Communications of the ACM, February 1978, Volume 21, Number 2, pp. 120-126.
□ [Def] PKCS #1: RSA Encryption Standard, An RSA Laboratories Technical Note, Version 1.5. Revised November 1, 1993.
□ [Inf] Bruce Schneier, "Section 19.3 RSA," Applied Cryptography, Second Edition, John Wiley & Sons, 1996.
□ [Patent] R. Rivest, A. Shamir, L.M. Adleman, "Cryptographic Communications System and Method," U.S. Patent 4,405,829, filed December 14 1977, issued September 20 1983.
□ [Test] IEEE, Test Vectors for Std 1363-2000. [for RSA/EMSA2(SHA-1)]
It is recommended that implementations make no practical restriction on the lengths of the key parameters n and e (in particular, values of n up to at least 4096 bits SHOULD be supported).
Patent status: RSA was previously patented in the United States and Canada; the patent has now expired.

Ron Rivest, Adi Shamir, Leonard Adleman
The variant of RSA defined by the IFSP-RSA2 and IFVP-RSA2 primitives from IEEE Std 1363-2000. If the modulus is n and the output of a normal RSA private key operation is t, then the output of the corresponding operation for this algorithm is min(t, n-t). This variant of RSA is normally used only with the EMSA2 encoding method, and only for compatibility with ISO/IEC 9796:1991.
□ [Def] IEEE, IEEE Std 1363-2000: Standard Specifications For Public Key Cryptography.
□ [Inf] ISO/IEC 9796:1991, "Information Technology - Security Techniques - Digital signature scheme giving message recovery."
□ [see references for RSA]
[see comment for RSA]
Patent status: RSA was previously patented in the United States and Canada; the patent has now expired.

Michael O. Rabin, Hugh C. Williams
The Rabin-Williams signature scheme as defined in IEEE Std 1363-2000.
□ [Def] IEEE, IEEE Std 1363-2000: Standard Specifications For Public Key Cryptography.
□ [Inf] Michael O. Rabin, "Digitalized Signatures and Public Key Functions as Intractable as Factorization," MIT Laboratory for Computer Science Technical Report 212, January 1979.
□ [Inf] Hugh C. Williams, "A Modification of the RSA Public-key Encryption Procedure," IEEE Transactions on Information Theory 26, pp. 726-729, 1980.
It is recommended that implementations make no practical restriction on the length of the key parameter n (in particular, values of n up to at least 4096 bits SHOULD be supported).

Note that any parameters required by Signature Encoding Methods are set and retrieved by calling set/getParameter on the Signature object, since there is not necessarily any object explicitly representing the encoding method.

EMSA1(digest) Signature Encoding Method
The encoding scheme described as "EMSA1" in IEEE Std 1363-2000.
□ String digest [creation, no default] - the name of the message digest that is to be used to calculate the message representative. The only message digest algorithms for which this encoding method is defined are SHA-1 and RIPEMD-160.
EMSA1(SHA-1) is compatible with the encoding used for DSA in FIPS 186, and for ECDSA in X9.62 and FIPS 186-2.
Security comment: The message representatives output by this encoding method do not contain any specification of which message digest algorithm was used. Therefore, unless public keys are certified in such a way that each key is tied to use of only one digest algorithm, there is the risk of a collision between different algorithms (i.e. Hash1(X) == Hash2(Y) for distinct algorithms Hash1 and Hash2, and messages X and Y).
To reduce the possibility of such collisions, implementations of this encoding MUST NOT support message digests other than SHA-1 and RIPEMD-160, and application designers are strongly advised to use only SHA-1 for the digest, if they use this encoding method.

EMSA2(digest) Signature Encoding Method
The encoding scheme described as "EMSA2" in IEEE Std 1363-2000, based on ANSI X9.31. This should be capable of being used with both the RSA and RSA2 signature primitives.
□ [Def] IEEE, IEEE Std 1363-2000: Standard Specifications For Public Key Cryptography.
□ [Inf] Accredited Standards Committee X9, American Bankers Association, ANSI X9.31-1998: Digital Signatures Using Reversible Public Key Cryptography for the Financial Services Industry (rDSA).
□ [Inf] U.S. National Institute of Standards and Technology, "Digital Signature Standard (DSS)," NIST FIPS PUB 186-2, U.S. Department of Commerce.
□ [Test] IEEE, Test Vectors for Std 1363-2000. [for RSA/EMSA2(SHA-1)]
□ String digest [creation, no default] - the name of the message digest that is to be used to calculate the message representative. The only message digest algorithms for which this encoding method is defined are SHA-1 and RIPEMD-160.

× EMSA5-MGF1(digest) Signature Encoding Method
Eiichiro Fujisaki, Tatsuaki Okamoto
The "EMSA5" encoding defined in the IEEE P1363a draft standard, with the MGF1 Mask Generation Function. This encoding method is intended only for use with ESIGN.
□ String digest [creation, no default] - the name of the message digest to be used. This is used both to calculate the message representative, and as the underlying digest for MGF1.

PKCS1-1.5(digest) Signature Encoding Method
Block type 01, described in section 10.1 of PKCS #1 v1.5.
"PKCS#1", "EMSA-PKCS1-v1_5", "EMSA3"
□ [Def] PKCS #1: RSA Encryption Standard, An RSA Laboratories Technical Note, Version 1.5. Revised November 1, 1993.
□ [Inf] RSA Security, Inc., PKCS #1: RSA Cryptography Standard, version 2.0.
□ String digest [creation, no default] - the name of the message digest that is to be used to calculate the message representative. Only message digests for which an ASN.1 OBJECT IDENTIFIER has been defined may be used (see comment below).
□ Some existing implementations of PKCS #1 only support moduli that are a multiple of 8 bits in length. The standard in fact makes no such restriction, and SCAN requires that bit lengths that are not a multiple of 8 MUST be supported.
□ The DER encoding of an ASN.1 DigestInfo object used to construct message representatives can be found by prepending a fixed sequence of bytes to the digest result (this is much simpler than implementing generalised DER encoding). For commonly used message digest functions, the byte sequences to be prepended (in hexadecimal) are as follows:

│ MessageDigest │ Sequence                                                  │
│ MD2           │ 30 20 30 0C 06 08 2A 86 48 86 F7 0D 02 02 05 00 04 10    │
│ MD5           │ 30 20 30 0C 06 08 2A 86 48 86 F7 0D 02 05 05 00 04 10    │
│ SHA-1         │ 30 21 30 09 06 05 2B 0E 03 02 1A 05 00 04 14             │
│ RIPEMD-160    │ 30 21 30 09 06 05 2B 24 03 02 01 05 00 04 14             │
│ Tiger(24,3)   │ 30 29 30 0D 06 09 2B 06 01 04 01 DA 47 0C 02 05 00 04 18 │
│ SHA-256       │ 30 31 30 0D 06 09 60 86 48 01 65 03 04 02 01 05 00 04 20 │
│ SHA-384       │ 30 41 30 0D 06 09 60 86 48 01 65 03 04 02 02 05 00 04 30 │
│ SHA-512       │ 30 51 30 0D 06 09 60 86 48 01 65 03 04 02 03 05 00 04 40 │

An implementation of PKCS1-1.5 encoding MUST allow for at least the message digests listed above to be used (although the same provider need not implement these digests). This is an incompatible change from SCAN 1.0.12-14, where Tiger(24,3) and SHA-{256,384,512} were not required. There was an error in the byte sequence for SHA-1 in SCAN 1.0.12.
□ The EMSA alias for this encoding method changed from EMSA4 in earlier drafts of IEEE P1363a, to EMSA3.

× PSS-MGF1(digest) Signature Encoding Method
Probably, the "EMSA-PSS" encoding defined in PKCS #1 v2.1.
Note that there are several incompatible versions of PSS, and it is not clear precisely which version will become standard.
□ [Def] RSA Security, Inc., PKCS #1: RSA Cryptography Standard, version 2.1 (draft).
□ [Inf] IEEE, IEEE P1363a draft version 6 (D6).
□ [Inf] Mihir Bellare, Phillip Rogaway, PSS: Provably Secure Encoding Method for Digital Signatures, Submission to IEEE P1363a, August 1998.
□ [Inf] Mihir Bellare, Phillip Rogaway, "The Exact Security of Digital Signatures: How to Sign with RSA and Rabin,"
□ [An] Burt Kaliski, "Hash Function Firewalls in Signature Schemes," Slides presented at IEEE P1363 Working Group Meeting, June 2, 2000 (revised June 8, 2000). http://grouper.ieee.org/groups/1363/Research/Presentations.html#hash
□ String digest [creation, no default] - the name of the message digest that is to be used by the MGF1 mask generation function.
□ [[TODO: check for differences between PKCS #1 v2.1 draft and P1363a.]]
Security comment: [[Talk about hash function substitution attacks, and difference between P1363a D3 and D4.]]
Patent status: The University of California has a patent pending on PSS. It has stated, in a letter to the IEEE, that:
If PSS is included in an IEEE standard, the University of California will, when that standard is adopted, FREELY license any conforming implementation of PSS as a technique for achieving a digital signature with appendix. No registration fee or other administrative procedure will be required.
Note that this is different to the licensing position for PSSR.

× PSSR-MGF1(digest) Signature Encoding Method
Probably, the "EMSA-PSSR" encoding defined in PKCS #1 v2.1. Note that there are several incompatible versions of PSS, and it is not clear precisely which version will become standard.
□ [Def] Mihir Bellare, Phillip Rogaway, PSS: Provably Secure Encoding Method for Digital Signatures, Submission to IEEE P1363a, August 1998.
□ [Inf] Mihir Bellare, Phillip Rogaway, "The Exact Security of Digital Signatures: How to Sign with RSA and Rabin,"
□ String digest [creation, no default] - the name of the message digest that is to be used by the MGF1 mask generation function.
Patent status: The University of California has a patent pending on PSSR. It has stated, in a letter to the IEEE, that use of this technique will require a license to be acquired "under very reasonable terms and conditions". Note that this is different to the licensing position for PSS.

? SSL3 Signature Encoding Method
Netscape Communications Corp.
The signature encoding method used for RSA in SSL version 3.0, consisting of MD5 and SHA-1 hashes, encoded using PKCS #1 v1.5 block type 0.
Missing information: Test vectors.
"RSA/SSL3" is equivalent to "RSA/EMSA1(Parallel(MD5,SHA-1))". It is defined as a separate algorithm because EMSA1 is not normally used with RSA.

? TLS Signature Encoding Method
Netscape Communications Corp., IETF Transport Layer Security Working Group
The signature encoding method used for RSA in TLS, consisting of MD5 and SHA-1 hashes, encoded using PKCS #1 v1.5 block type 1. Block type 0 MUST also be accepted when verifying a signature.
□ [Def] T. Dierks, C. Allen, "The TLS Protocol Version 1.0," RFC 2246, January 1999.
Missing information: Test vectors.
For generation of signatures, "RSA/TLS" is equivalent to "RSA/PKCS1-1.5(Parallel(MD5,SHA-1))". It is defined as a separate algorithm because it also accepts PKCS #1 block type 0 on verification.

Raw Signature Encoding Method
A "null" encoding method, that passes its input directly to the underlying primitive. The block length is as large as necessary to ensure that all inputs to the public key primitive are possible (and no larger).
This usually means that some block contents will not be valid; these will cause the signature to be rejected when the Signature object's verify method is called, or an IllegalArgumentException to be thrown when the sign method is called.
Security comment: There are many attacks possible on public key signature algorithms when this encoding method is used. It is intended only as a way to obtain access to a public key primitive (for those providers that support it), in order to implement encoding methods at the application rather than the provider level, or to maintain compatibility with legacy protocols.

Where there are several possible output formats for a signature algorithm, this name indicates that the alternative consistent with IEEE Std 1363-2000 Annex E must be used. The convention used by 1363 for formatting more than one arbitrary-length integer is to concatenate their big-endian unsigned representations. Each integer is padded on the left with zeroes, to the length defined by the algorithm parameters (for example if an integer is in the range 0..n-1, the result will have the same number of bytes as is needed to represent n-1). The signature algorithm is assumed to specify a canonical order for the integers.
To parse this format, the receiver must split it into blocks of the correct lengths (usually equal), one for each integer. If this cannot be done, the signature MUST be treated as invalid.

Where there are several possible output formats for a signature algorithm, this name indicates that the DER-encoded alternative must be used. The type used to DER-encode more than one arbitrary-length integer is SEQUENCE { INTEGER a, INTEGER b, ... }. The signature algorithm is assumed to specify a canonical order for the integers.
To parse this format, the receiver must always interpret it as DER, not BER. If the signature is not a DER encoding of the correct type, it MUST be treated as invalid.
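The SEQUENCE-of-INTEGERs convention just described can be sketched as follows. This is a minimal illustration only (short-form lengths, i.e. total content under 128 bytes), not a complete DER implementation, and the helper names are made up for this sketch:

```python
def der_integer(x: int) -> bytes:
    """Minimal DER INTEGER for a non-negative value."""
    body = x.to_bytes(max(1, (x.bit_length() + 7) // 8), "big")
    if body[0] & 0x80:             # prepend 0x00 so the value is not read as negative
        body = b"\x00" + body
    return b"\x02" + bytes([len(body)]) + body

def der_signature(*ints: int) -> bytes:
    """SEQUENCE { INTEGER a, INTEGER b, ... } -- short-form length only."""
    content = b"".join(der_integer(i) for i in ints)
    assert len(content) < 128, "long-form lengths not handled in this sketch"
    return b"\x30" + bytes([len(content)]) + content

# e.g. der_signature(1, 2) -> 30 06 02 01 01 02 01 02
```

A receiver following the rule above (DER, not BER) would additionally reject any input whose strict re-encoding differs from the received bytes.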
Where there are several possible output formats for a signature algorithm, this name indicates that the alternative specified by OpenPGP must be used. The convention used by OpenPGP for formatting more than one arbitrary-length integer is to encode each integer as a two-byte big-endian length field indicating the bit length of the integer, followed by the bytes of the integer in big-endian order, with no leading zeroes (see section 3.2 of RFC 2440). The signature algorithm is assumed to specify a canonical order for the integers.
When parsing this format, if the length fields are inconsistent with the total length of the signature, it MUST be treated as invalid.
□ [Def] Jon Callas, Lutz Donnerhacke, Hal Finney, Rodney Thayer, "OpenPGP Message Format," RFC 2440, November 1998.
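The OpenPGP multiprecision-integer convention above can be sketched like this (hypothetical helpers for illustration, not code from RFC 2440):

```python
def pgp_mpi(x: int) -> bytes:
    """Encode x as an OpenPGP MPI: 2-byte big-endian bit length, then the value."""
    bits = x.bit_length()
    return bits.to_bytes(2, "big") + x.to_bytes((bits + 7) // 8, "big")

def parse_pgp_mpis(data: bytes) -> list:
    """Split a concatenation of MPIs; inconsistent lengths make it invalid."""
    out, i = [], 0
    while i < len(data):
        bits = int.from_bytes(data[i:i + 2], "big")
        n = (bits + 7) // 8
        chunk = data[i + 2:i + 2 + n]
        if len(chunk) != n:
            raise ValueError("length field inconsistent with signature length")
        out.append(int.from_bytes(chunk, "big"))
        i += 2 + n
    return out

# e.g. pgp_mpi(511) -> 00 09 01 ff  (511 is 9 bits, so one leading-zero-free 2-byte value)
```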
Simplifying an Expression
After going through these steps you should know how to simplify an expression and use a calculator to check your answer.
Step 2: First Step
Because of the exponent we must compute it first. Notice that we are really just multiplying 2, six times.
Step 3: Step 2
Now we multiply more. In general we should start from the left and work our way to the right, but because of notation we start with the exponent.
Step 5: The Calculator Uses Order of Operations
So we can always check our answer by using a calculator.
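The full expression being simplified is not shown here, but the exponent-first rule from the steps above can also be checked in code. The factor of 3 below is made up purely for illustration; only the 2^6 comes from Step 2:

```python
# Exponentiation binds tighter than multiplication, so the exponent
# is computed first -- exactly as the steps above describe.
step_one = 2 ** 6       # multiplying 2, six times
print(step_one)         # 64

# Python applies the same order of operations a calculator does:
print(3 * 2 ** 6)       # 192, i.e. 3 * 64, not (3 * 2) ** 6
```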
Study Of Solubility Equilibrium Biology Essay

The solubility product constant of potassium hydrogen tartrate in water and its dependence on temperature were investigated in this experiment. The solubility product constant was determined at different temperatures through acid-base titration against NaOH. A linear graph was obtained by plotting ln Ksp against 1/T, and a positive correlation between temperature and the solubility product constant was observed. This study concluded that the solubility product constant of potassium hydrogen tartrate is dependent only on temperature.

The aim of this experiment is to investigate the solubility product constant of potassium hydrogen tartrate in water and its dependence on temperature. Solubility is often defined as the amount of substance required to obtain a saturated solution. Therefore, only a small amount of potassium hydrogen tartrate (KHC4H4O6) is needed to produce a saturated solution, as it has limited solubility in water. In the saturated solution, the rate of dissociation of the solid is the same as the rate at which the aqueous ions re-form the solid compound; the solution is then at equilibrium. The equilibrium equation for KHC4H4O6 in the solution can be written as:

KHC4H4O6 (s) ⇌ K+ (aq) + HC4H4O6- (aq)

The constant for this equilibrium can be expressed as Ksp = [K+][HC4H4O6-]. This constant is also known as the solubility product constant (Ksp), which has a fixed value for a given system at constant temperature. Thus, by finding the concentration of the dissolved ions, the solubility product constant for KHC4H4O6 can be determined. From the equation above, the dissociation of KHC4H4O6 produces equal amounts of potassium ions (K+) and hydrogen tartrate ions (HC4H4O6-). Thus, by obtaining the concentration of one of the ions, the concentration of the other can be derived and the solubility product constant calculated.
As HC4H4O6- behaves like a weak acid, its concentration can be determined by acid-base titration using NaOH, a strong base, as the titrant, with phenolphthalein as the indicator. As NaOH and HC4H4O6- react in a 1:1 ratio, the amount of NaOH used in the titration is equal to the amount of HC4H4O6- present in the solution.

While Ksp is fixed under a given set of conditions, changes in temperature will affect its value. According to the van 't Hoff equation, the value of Ksp is related to the change in Gibbs free energy and can be expressed as:

ln Ksp = -ΔH°/(RT) + ΔS°/R

From the equation, the solubility product constant depends on three variables: the change in enthalpy, the change in entropy, and the temperature. The changes in entropy and enthalpy with respect to temperature were stated to be insignificant due to the similar heat capacities of the products and reactants. This suggests a linear trend between the remaining variable and Ksp [1]. Therefore, a graph of the natural logarithm of Ksp versus the reciprocal of temperature can be plotted; the gradient of the graph can be used to calculate the enthalpy change, and the y-intercept the entropy change. Thus, the relationship between Ksp and temperature can be observed.

Experimental Procedure

Dried KHC8H4O4 (0.5002 g) was prepared in a 250 mL conical flask with the help of an analytical balance. Deionized water (25.0 mL) was added into the flask and a standard solution of KHC8H4O4 was obtained. The prepared solution was then titrated against an unknown concentration of NaOH to the endpoint, with phenolphthalein as the indicator. The volume of NaOH used was recorded. The entire procedure was then repeated with different masses of KHC8H4O4 (0.5039 g, 0.5033 g). The concentration of the NaOH was calculated from the volume of NaOH used and tabulated in Table 1.

A saturated KHC4H4O6 solution was prepared by adding one gram of KHC8H4O4 into a 250 mL conical flask containing 100.0 mL of deionized water.
The flask was swirled for five minutes and put to rest, with occasional swirling, for another five minutes at room temperature. At the end of ten minutes, the solution was filtered and the supernatant was collected in a dry 250 mL conical flask. Concurrently, the temperature of the solution in the filter funnel was recorded. Two portions of 25.0 mL of the filtered solution were then pipetted into two separate 250 mL conical flasks. The two solutions were titrated against the 0.07070 M NaOH solution to the endpoint, with phenolphthalein as the indicator. The volume of NaOH used was recorded. The procedure was then repeated for different temperatures.

For temperatures above room temperature, a hot water bath was prepared in a one litre beaker on a hotplate stirrer. The saturated KHC4H4O6 solution was prepared in the same way but was placed in the hot water bath with constant stirring, using a stir bar. The solution was set aside with occasional monitoring until a constant temperature was observed. Next, the solution was decanted in small amounts into a dry conical flask. The temperature of the solution in the filter funnel was recorded concurrently. Three portions of 25.0 mL of the filtered solution were then pipetted into three separate 250 mL conical flasks.

For temperatures below room temperature, an ice-water bath was prepared in a one litre beaker. The solution was prepared in the same way as in the previous procedure and was placed into the ice-water bath. The solution was cooled until it stabilized at a certain temperature. The solution was then filtered and the temperature of the solution in the filter funnel was recorded. Three portions of 25.0 mL of the filtered solution were then pipetted into three separate 250 mL conical flasks, as in the room-temperature setup. The six solutions were then set aside to return to room temperature and then titrated against the standardized NaOH.
The solutions were titrated in the same way as the titration done at room temperature. The volume of NaOH used was recorded for each of the solutions. The average volume of NaOH used at each temperature was then calculated and tabulated in Table 2.

Data Treatment and Analysis

The calculations of [HC4H4O6-], [K+] and Ksp at 302.15 K:

[NaOH] = 7.070 x 10^-2 mol L^-1
Amount of NaOH used = (7.070 x 10^-2 mol L^-1)(1.2825 x 10^-2 L) = 9.067 x 10^-4 mol
Amount of HC4H4O6- = Amount of NaOH used = 9.067 x 10^-4 mol
[HC4H4O6-] = [K+] = (9.067 x 10^-4 mol) / (0.0250 L) = 3.63 x 10^-2 mol L^-1
Ksp = [K+][HC4H4O6-] = (3.63 x 10^-2 mol L^-1)^2 = 1.32 x 10^-3

The calculated values of [K+], [HC4H4O6-] and Ksp were tabulated in the table below:

Table 2: Determination of Ksp of KHC4H4O6 at different temperatures
Temperature / K │ Average Vol. of NaOH used / L │ Amount of NaOH used / mol │ [HC4H4O6-] / mol L^-1 │ [K+] / mol L^-1 │ Ksp of KHC4H4O6
│ 7.4750 x 10^-3 │ 5.327 x 10^-4 │ 2.13 x 10^-2 │ 2.13 x 10^-2 │ 4.54 x 10^-4
│ 1.0075 x 10^-2 │ 7.180 x 10^-4 │ 2.87 x 10^-2 │ 2.87 x 10^-2 │ 8.25 x 10^-4
│ 1.2825 x 10^-2 │ 9.067 x 10^-4 │ 3.63 x 10^-2 │ 3.63 x 10^-2 │ 1.32 x 10^-3
│ 1.6375 x 10^-2 │ 1.158 x 10^-3 │ 4.63 x 10^-2 │ 4.63 x 10^-2 │ 2.14 x 10^-3
│ 2.2375 x 10^-2 │ 1.582 x 10^-3 │ 6.33 x 10^-2 │ 6.33 x 10^-2 │ 4.00 x 10^-3

Based on the temperatures and Ksp values obtained in Table 2, values of 1/T and ln Ksp were calculated and tabulated in Table 3. A graph was plotted based on these values:

Figure 1: Graph of ln Ksp versus 1/T

From Figure 1, the gradient and y-intercept were obtained as shown in Table 4. The enthalpy change and entropy change were calculated from the van 't Hoff equation:

Gradient = -(ΔH°/R) = -5692.06, standard deviation of gradient: ± 99.87
ΔH° = -(-5692.06 x 8.314) ± (99.87 x 8.314) = (47.32 ± 0.83) kJ mol^-1
Y-intercept = ΔS°/R = 12.25, standard deviation of y-intercept: ± 0.33
ΔS° = (12.25 x 8.314) ± (0.33 x 8.314) = (101.85 ± 2.74) J K^-1 mol^-1

The standard error of regression was found to be 0.0295.
(Number of measurements = 6, degrees of freedom = 4)

Results and Discussion

From the data obtained, the calculated values of ΔH° and ΔS° were (47.32 ± 0.83) kJ mol^-1 and (101.85 ± 2.74) J K^-1 mol^-1 respectively. Ksp of KHC4H4O6 was found to be 1.32 x 10^-3 at 302.15 K. A linear graph was obtained upon plotting ln Ksp against the reciprocal of T, and the increase in temperature was found to correlate with an increase in Ksp.

The literature Ksp value for KHC4H4O6 is 3.8 x 10^-4 at 291.15 K [2]. The approximate Ksp value corresponding to 291.15 K, based on the experimental data, was calculated to be 6.755 x 10^-4, as shown in the Appendices.

Linear Relationship between T and Ksp

Based on Figure 1, a linear model was observed between the reciprocal of T and the natural logarithm of Ksp. This was supported by the R-squared value of 0.99, which strongly suggests a linear trend in the experimental data. The standard error of regression obtained from the experiment was 0.0295, which indicates a good fit among the experimental values and corresponds to good precision of the experimental data. Thus, from the linear trend, the claim of insignificant changes of enthalpy and entropy due to temperature changes was valid. Therefore, the assumption that the value of Ksp is dependent only on the temperature at which the dissolution occurs can be established.

Comparison with Literature Values

The estimated Ksp value based on the experimental data was 6.755 x 10^-4 at 291.15 K, which was found to be 43.75% higher than the literature value (3.8 x 10^-4) [2]. The difference can be accounted for by the limitations of this experiment. As the experiment was carried out at different temperatures, one of the limitations was the apparatus used. The volumetric glass pipette used was calibrated at 20 °C; at other temperatures, expansion or contraction might occur, leading to an inaccurate volume transferred for titration after the filtering process.
Another source of error was the temperature fluctuation during the filtering process. Although the solutions were decanted in small portions to minimize errors, a rapid increase of temperature was observed for the cold-temperature readings. This corresponds to an increase in the concentration of ions dissolved in the solution, resulting in a higher value of Ksp. Despite the percentage difference of 43.75%, the difference between the two values was actually small, owing to the fact that the Ksp of KHC4H4O6 is a very small value. When the uncertainties of the enthalpy change and entropy change were taken into account, the experimental Ksp value was taken to lie between 3.446 x 10^-4 and 1.324 x 10^-3 (refer to Appendices). The literature value was noted to be within this range; thus the experimental data do agree with the theoretical value for KHC4H4O6.

Change of Enthalpy and Entropy

The change of enthalpy for the reaction was found to be (47.32 ± 0.83) kJ mol^-1. The positive enthalpy change means that the dissolution of KHC4H4O6 was an endothermic process in which heat was absorbed. This was expected, as the dissolution breaks the stronger ionic bonds within KHC4H4O6 while weaker bonds between the water molecules and the ions are formed. This results in a positive net change in enthalpy for the reaction, consistent with the positive enthalpy change derived from the experimental data. The change of entropy was found to be (101.85 ± 2.74) J K^-1 mol^-1. As entropy is often defined as a measure of disorder, the positive entropy change can be explained by the increased disorder brought about when KHC4H4O6 dissolves into ions. As the value of the enthalpy change was much larger than that of the entropy change, a higher temperature was required to obtain a larger value of ln Ksp based on the van 't Hoff equation.
This coincides with the fact that high temperatures favour endothermic processes such as the dissolution of KHC4H4O6; thus it can be concluded that temperature has a positive correlation with Ksp.

Ksp has a linear relationship with temperature for KHC4H4O6. The temperature dependence of the enthalpy change and entropy change was found to be insignificant for the dissolution of KHC4H4O6. As dissolution is an endothermic process, temperature has a positive correlation with Ksp; thus a higher temperature allows more KHC4H4O6 to dissolve. This leads to the conclusion that the solubility product constant of potassium hydrogen tartrate is dependent only on temperature.
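The data treatment described above can be reproduced numerically. The sketch below (variable names are mine, not the essay's) recomputes the worked Ksp example at 302.15 K from the titration data, and converts the reported regression slope and intercept into ΔH° and ΔS° via the van 't Hoff relation ln Ksp = -ΔH°/(RT) + ΔS°/R:

```python
R = 8.314  # gas constant, J K^-1 mol^-1

# Ksp at 302.15 K: [NaOH] = 0.07070 M, average titre 12.825 mL, 25.0 mL aliquot
conc_naoh = 7.070e-2              # mol L^-1
v_naoh = 1.2825e-2                # L
mol_hc4h4o6 = conc_naoh * v_naoh  # 1:1 reaction with NaOH
conc_ion = mol_hc4h4o6 / 0.0250   # = [K+] = [HC4H4O6-]
ksp = conc_ion ** 2
print(f"Ksp = {ksp:.3g}")         # ~1.32e-3

# van 't Hoff: slope = -dH/R, intercept = dS/R (values from the fitted line)
slope, intercept = -5692.06, 12.25
dH = -slope * R                   # ~47.3 kJ mol^-1 (in J mol^-1 here)
dS = intercept * R                # ~101.85 J K^-1 mol^-1
print(f"dH = {dH/1000:.2f} kJ/mol, dS = {dS:.2f} J/(K mol)")
```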
rms current in circuit with capacitor, resistor and rms output

1. The problem statement, all variables and given/known data
A 39.5 uF capacitor is connected to a 47.0 ohm resistor and a generator whose rms output is 25.2 V at 60.0 Hz. Calculate the rms current in the circuit. (It also asks for the voltage drop across the resistor, the voltage drop across the capacitor, and the phase angle for the circuit, but I mostly want to get the first one first.)

2. Relevant equations
I = current, V = voltage, R = resistance, f = frequency
I_rms = ΔV_C,rms / X_C
ΔV_C,rms = I_rms * X_C
and then I start going in circles. Other formulas that might be appropriate:
L = ?? (not given in this problem)
P_average = I_rms^2 * R

3. The attempt at a solution
Combining lots of these formulas trying to come up with the correct V_rms has proven unsuccessful. I came up with 17.819 A, .37515 A, .21739 A, none of which were right. None of them were successful and I don't remember or care to explain how I got them precisely, because I wasn't confident in those anyway. Overall I still haven't been able to link my given information directly to the answer I need.
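One standard route for a series RC circuit driven by a sinusoidal source (this is a suggested approach, not something stated in the post): the capacitive reactance is X_C = 1/(2πfC), the impedance magnitude is Z = sqrt(R² + X_C²), and I_rms = V_rms / Z. In code:

```python
import math

C = 39.5e-6   # F
R = 47.0      # ohm
V_rms = 25.2  # V
f = 60.0      # Hz

X_c = 1.0 / (2 * math.pi * f * C)        # capacitive reactance, ~67.2 ohm
Z = math.hypot(R, X_c)                   # series RC impedance magnitude
I_rms = V_rms / Z                        # ~0.307 A

V_R = I_rms * R                          # rms drop across the resistor
V_C = I_rms * X_c                        # rms drop across the capacitor
phase = math.degrees(math.atan2(X_c, R)) # magnitude of the phase angle, ~55 deg

print(I_rms, V_R, V_C, phase)
```

Note that V_R and V_C do not add arithmetically to 25.2 V; they add in quadrature, sqrt(V_R² + V_C²) = V_rms, because the two drops are 90° out of phase.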
50 projects tagged "Statistics" Yalst ("yet another live support tool") is a powerful chatting tool that integrates easily with any Web site. The highlights are visitor and operator initiated chats, audio and video chats, visitor monitoring and tracking with alarm functionality, form monitoring, file transfers in both directions during chats, plugin-free co-browsing, marketing tools (push banners, URLs, messages, and customized surveys), ad-tracking of campaigns, conversion tracking, departments, a FAQ database, a customized contact form (if chat is offline), chat between operators, and an application programming interface for deep Web site integration.
{"url":"http://freecode.com/tags/statistics?page=1&with=&without=771","timestamp":"2014-04-16T05:58:36Z","content_type":null,"content_length":"98304","record_id":"<urn:uuid:0345a867-e772-4f39-a0ae-d1091293a20a>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00658-ip-10-147-4-33.ec2.internal.warc.gz"}
Discrete Mathematics

Please read the syllabus for more information. The textbook is available here.
1. Midterm exam 1: October 11th. Sections 3-12, 14-17, and 20-22 will be covered. Click here to see the format of the exam. The solution of midterm exam 1 is here.
2. Midterm exam 2 is postponed to Nov 20th. Sections 17, 19, 24, 25 (and cardinality of the real numbers), and 30-34 will be covered. Click here to see some helpful information about the exam. The solution of midterm exam 2 is here. (Link will be available after 3:50pm.)
3. Final exam: 2:00 -- 3:50, December 18th (in our classroom). All materials from this semester will be covered (50% graph theory, 50% other). 20 sample problems will be posted online at 9:00 am, December 12th. 4 of them will be selected as problems in the final exam (13 pts each). The other problems are 6 multiple choice and 10 T/F (3 pts each). Click here to see the sample problems.
4. Click here to see the solution of the final exam. (Link will be available after 3:50pm.)
{"url":"http://www.cims.nyu.edu/~yaoli/dismath.html","timestamp":"2014-04-17T15:28:32Z","content_type":null,"content_length":"3231","record_id":"<urn:uuid:bf28725a-7fa0-494b-896e-0680f1ff2036>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
As easy as it seems? This is a quadratic equation. Calling the short side 'x', you've got x(x+3)=88. Multiplying out of brackets gives x²+3x=88. Put it all on one side, because with quadratics that's just what you do: x²+3x-88=0 Factorise it: (x+11)(x-8)=0 There's no short way to factorise things, you just get a knack eventually. Anyway, as (x+11)(x-8)=0, that means that either x+11=0 or x-8=0, because anything times 0 equals 0. So, if x+11=0, rearranging gives x=-11. This would be a valid solution if it was a pure question, but as the question is applied to measurements, negative values are ignored. This leaves x-8=0, which when rearranged gives x=8, which is the answer. Your room is 8m wide and 11m long. Sorry if I've been too complicated or patronising.
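The factoring "knack" described above can always be cross-checked with the quadratic formula, x = (-b ± √(b² - 4ac)) / (2a). A quick sketch for this particular equation:

```python
import math

# x^2 + 3x - 88 = 0  (room: short side x, long side x + 3, area 88)
a, b, c = 1.0, 3.0, -88.0

disc = b * b - 4.0 * a * c           # discriminant: 9 + 352 = 361, a perfect square
root1 = (-b + math.sqrt(disc)) / (2.0 * a)
root2 = (-b - math.sqrt(disc)) / (2.0 * a)

print(root1, root2)  # 8.0 and -11.0; only the positive root is a valid length
```

A perfect-square discriminant (361 = 19²) is exactly what makes the neat factorisation (x+11)(x-8) possible.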
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=11726","timestamp":"2014-04-17T01:01:53Z","content_type":null,"content_length":"14778","record_id":"<urn:uuid:9fbcec1c-5c35-4662-bff5-9d7a1d82ed0b>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
Catasauqua Algebra Tutor ...Contact me if you have any questions at all and I will be more than happy to answer them! I have a Bachelor of Science degree in Physics with a minor in mathematics. I spent one year as a one-on-one tutor for introductory physics courses at Kutztown University and have one semester of experience ... 16 Subjects: including algebra 1, algebra 2, chemistry, calculus ...I have received formal training from the Community Music School in Allentown for a combined total of 3 years. I have experience teaching others to play piano and have had great success doing so. I am CompTIA Network + Certified. 17 Subjects: including algebra 2, algebra 1, reading, English ...Algebra 1 has become a very important subject in Pennsylvania with the advent of the Keystone Exams, and it is a requirement if a student will progress successfully throughout High School. Trigonometry is one of those subjects that takes a student out of their comfort zones. I have seen many very good Algebra 2 and Pre-Calculus students flounder in Trigonometry. 12 Subjects: including algebra 2, algebra 1, calculus, geometry ...It is so rewarding to see someone learn so rapidly. My tutoring methods include explanation, assistance, and guided coaxing. I like to help people figure things out on their own--that is the best way to learn. 44 Subjects: including algebra 2, algebra 1, reading, English ...Also, degrees of angles, parallel lines with a transversal and other figures. The proper use of the compass and protractor to develop figures without a ruler might be introduced. And like I stated proofs of theorems are simply introduced in conjunction with the algebra. 11 Subjects: including algebra 1, geometry, algebra 2, ASVAB
{"url":"http://www.purplemath.com/Catasauqua_Algebra_tutors.php","timestamp":"2014-04-19T02:07:19Z","content_type":null,"content_length":"23788","record_id":"<urn:uuid:178d065a-7eea-4542-a660-24eedf655917>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
Alviso Algebra Tutor Find an Alviso Algebra Tutor ...I am committed to creating a positive tutoring environment for your child where he or she will feel comfortable asking questions, thinking critically and growing as a reader and writer. Together, we will work to clearly identify learning objectives for your child, and I will tutor your child in ... 14 Subjects: including algebra 1, reading, English, grammar ...I'm told I make physics easy and fun to learn. Precalculus, in essence, is just a review of algebra 2 with some elaborations and a few extensions and additions (and applications). I've tutored students in almost all Bay Area school districts and high schools, with their various textbooks, in prec... 9 Subjects: including algebra 1, algebra 2, physics, geometry ...I employ the proven scientific principles of ABA (Applied Behavior Analysis). I recommend that you limit the amount of time spent on video games, particularly just prior to our sessions. You will see results in terms of better grades and enjoyment in learning. Several children I have worked with are now college bound! 27 Subjects: including algebra 1, algebra 2, reading, English ...Most of my career was spent in corporate engineering departments doing structural dynamics (including the dynamics of rotating structures), acoustics, and psychoacoustics. I've been teaching and tutoring students and engineers in all these topics over the years, including full-fledged physics cou... 17 Subjects: including algebra 1, algebra 2, calculus, physics I have a strong background in math and computers, including an MS in Computer Science and an MST in Math. I spent 20+ years working as a Software Engineer, and about 12 years teaching. Most of the teaching was secondary math up through second year at the university. 11 Subjects: including algebra 2, algebra 1, calculus, geometry
{"url":"http://www.purplemath.com/Alviso_Algebra_tutors.php","timestamp":"2014-04-18T06:07:18Z","content_type":null,"content_length":"23742","record_id":"<urn:uuid:deff65c9-166f-4a44-8476-223147099e63>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00187-ip-10-147-4-33.ec2.internal.warc.gz"}
A uniqueness result for a semilinear reaction-diffusion system
Escobedo, M. and Herrero, Miguel A. (1991) A uniqueness result for a semilinear reaction-diffusion system. Proceedings of the American Mathematical Society, 112 (1). pp. 175-185. ISSN 0002-9939
Restricted to Repository staff only until 31 December 2020.
Official URL: http://www.ams.org/journals/proc/1991-112-01/S0002-9939-1991-1043410-9/S0002-9939-1991-1043410-9.pdf
Let $(u(t,x), v(t,x))$ and $(\bar{u}(t,x), \bar{v}(t,x))$ be two nonnegative classical solutions of

(S)  $u_t = \Delta u + v^p$, $p > 0$;  $v_t = \Delta v + u^q$, $q > 0$

in some strip $S_T = (0,T) \times \mathbb{R}^N$, where $0 < T \leq \infty$, and suppose that $u(0,x) = \bar{u}(0,x)$, $v(0,x) = \bar{v}(0,x)$, where $u(0,x)$ and $v(0,x)$ are continuous, nonnegative, and bounded real functions, one of which is not identically zero. Then one has $u(t,x) = \bar{u}(t,x)$, $v(t,x) = \bar{v}(t,x)$ in $S_T$. If $pq \geq 1$, the result is also true if $u(0,x) = v(0,x) = 0$. On the other hand, when $0 < pq < 1$, the set of solutions of (S) with zero initial values is given by $u(t;s) = c_1 (t-s)_+^{(p+1)/(1-pq)}$, $v(t;s) = c_2 (t-s)_+^{(q+1)/(1-pq)}$, where $0 \leq s \leq t$, $c_1$ and $c_2$ are two positive constants depending only on $p$ and $q$, and $\xi_+ = \max\{\xi, 0\}$.
Item Type: Article
Uncontrolled Keywords: Reaction diffusion systems; uniqueness
Subjects: Sciences > Mathematics > Differential equations
ID Code: 17116
References:
J. Aguirre and M. Escobedo, A Cauchy problem for $u_t - \Delta u = u^p$ with $0 < p < 1$: Asymptotic behaviour of solutions, Ann. Fac. Sci. Toulouse 8 (1986-87), 175-203.
D. G. Aronson and H. F. Weinberger, Multidimensional nonlinear diffusion arising in population genetics, Adv. in Math. 30 (1978), 33-76.
M. Escobedo and M. A. Herrero, Boundedness and blow-up for a semilinear reaction diffusion system, J. Differential Equations (to appear).
M. Floater and J. B. McLeod, in preparation.
A. Friedman and Y. Giga, A single point blow up for solutions of semilinear parabolic systems, J. Fac. Sci. Univ. of Tokyo, Sect. I 34 (1987), 65-79.
H. Fujita, On the blowing up of solutions of the Cauchy problem for $u_t - \Delta u = u^{1+\alpha}$, J. Fac. Sci. Univ. of Tokyo, Sect. I 13 (1966), 109-124.
V. A. Galaktionov, S. P. Kurdyumov and A. A. Samarskii, A parabolic system of quasilinear equations I, Differential Equations 19 (1983), 2123-2143.
___, A parabolic system of quasilinear equations II, Differential Equations 21 (1985), 1544-1559.
F. B. Weissler, Existence and nonexistence of global solutions for a semilinear heat equation, Israel J. of Math. 38 (1981), 29-40.
Deposited On: 16 Nov 2012 09:03
Last Modified: 07 Feb 2014 09:42
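Not part of the record itself, but as a quick consistency check of the explicit solutions quoted in the abstract: for spatially homogeneous data the system reduces to the ODEs $u' = v^p$, $v' = u^q$, and the stated profiles can be verified directly.

```latex
% Check that u(t;s) = c_1 (t-s)_+^{(p+1)/(1-pq)}, v(t;s) = c_2 (t-s)_+^{(q+1)/(1-pq)}
% solve u' = v^p, v' = u^q when 0 < pq < 1.
\begin{align*}
u'(t;s) &= c_1\,\frac{p+1}{1-pq}\,(t-s)_+^{\frac{p+1}{1-pq}-1}
         = c_1\,\frac{p+1}{1-pq}\,(t-s)_+^{\frac{p(q+1)}{1-pq}},\\
v(t;s)^p &= c_2^{\,p}\,(t-s)_+^{\frac{p(q+1)}{1-pq}},
\end{align*}
% The exponents agree, so u' = v^p forces  c_1 (p+1)/(1-pq) = c_2^p,
% and symmetrically v' = u^q forces        c_2 (q+1)/(1-pq) = c_1^q.
```

Matching the prefactors in these two relations is what determines the positive constants $c_1$ and $c_2$ uniquely when $0 < pq < 1$.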
{"url":"http://eprints.ucm.es/17116/","timestamp":"2014-04-18T23:28:58Z","content_type":null,"content_length":"29862","record_id":"<urn:uuid:92332408-d4ee-427f-b924-c407b1c207c8>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions

Topic: How to substitute decimal dot with decimal comma in
Replies: 2 | Last Post: May 21, 2013 12:03 AM

Re: How to substitute decimal dot with decimal comma in
Posted: May 21, 2013 12:02 AM

Mathematica | Preferences... | Appearance | Numbers | Formatting | Enable automatic number formatting: checked; Decimal Point Character: Comma

"9.0 for Mac OS X x86 (64-bit) (January 24, 2013)"

Note that the inputs shown below are shown as InputForm; if the cells are either StandardForm or TraditionalForm, the periods display as commas.

Plot[x^2, {x, 0.001, 0.1}]
LogPlot[x^2, {x, 0.001, 0.1}]

However, x-axis number formatting fails in LogLogPlot (note that this also fails for LogLogPlot using Mathematica 8.0.4.0 on my Mac).

LogLogPlot[x^2, {x, 0.001, 0.1}]

On Mon, May 20, 2013 at 5:05 AM, Igor A. Kotelnikov <igor.kotelnikov@gmail.com> wrote:
> Is it possible to substitute decimal dots by decimal commas on Plot axes?
> I need to make that substitution in all routines (Plot, LogPlot,
> LogLogPlot, etc.) at once, since in my native language the decimal comma is
> used instead of the decimal dot as it is in English.

Date | Subject | Author
5/21/13 | Re: How to substitute decimal dot with decimal comma in | Bob Hanlon
5/21/13 | Re: How to substitute decimal dot with decimal comma in | Tomas Garza Hernandez
{"url":"http://mathforum.org/kb/message.jspa?messageID=9125461","timestamp":"2014-04-17T07:53:30Z","content_type":null,"content_length":"18127","record_id":"<urn:uuid:bed31ab4-5e5c-45fa-8796-6e449d086b60>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
Winchester, MA Algebra 2 Tutor Find a Winchester, MA Algebra 2 Tutor ...I am the father of 3 teens, and have been a soccer coach, youth group leader, and scouting leader. I am also an engineering and business professional with BS and MS degrees. I tutor Algebra, Geometry, Pre-calculus, Pre-algebra, Algebra 2, Analysis, Trigonometry, Calculus, and Physics. 15 Subjects: including algebra 2, calculus, physics, statistics ...I also utilize my own experiences to challenge students to think like psychologists and bring those skills to tackle future endeavors. I have extensive coursework and research experience in the area of Physiology and completed my Master's degree in Physiology in May 2013. I was also the TA for a graduate Physiology course and tutored groups of healthcare and graduate students 10 Subjects: including algebra 2, chemistry, geometry, biology ...Being able to solve problems means the student can apply what they have learned in useful ways. As a tutor, my role is to insure that the student is proficient in both parts of learning the subject, using a systematic approach which is part of the general learning processes. The first step is to identify the student's strengths and weaknesses in the subject. 13 Subjects: including algebra 2, physics, calculus, SAT math ...I excel in both motivating and encouraging the reluctant or challenged learner, as well as in challenging the gifted or talented student. I am happy to either work with a student using materials provided or else to assess a student's needs and individualize a program of instruction that will mos... 33 Subjects: including algebra 2, reading, writing, English ...I've tutored nearly all the students I've worked with for many years, and I've also frequently tutored their brothers and sisters - also for many years. I enjoy helping my students to understand and realize that they can not only do the work - they can do it well and they can understand what they're doing. 
My references will gladly provide details about their own experiences. 11 Subjects: including algebra 2, geometry, algebra 1, precalculus
{"url":"http://www.purplemath.com/winchester_ma_algebra_2_tutors.php","timestamp":"2014-04-17T07:42:28Z","content_type":null,"content_length":"24471","record_id":"<urn:uuid:6819b928-8802-4115-9f71-b9738c3f2087>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
Math = Love This year, I had the privilege to teach a non-AP statistics class for high school juniors and seniors who had finished Algebra 2 and were not enrolled in upper level math courses through our local technology center. As a small high school, our math department offers 10 sections of math: 4 sections of Algebra 1, 3 sections of Geometry, 2 sections of Algebra 2, and 1 section of an advanced math elective. Last year, the elective was College Algebra. This year, it is Statistics. Next year, it will be Trig/Pre-Calculus. Our local technology center offers Pre-Calculus and AP-Calculus since many of the schools in our area are too small to offer those classes. I had 5 juniors enrolled in my statistics class this year. As one of our end-of-year projects, I asked students to ask a question about the population of students (168 students) at Drumright High School. After getting their questions approved, students had to randomly select 35 students using the random number generator on their calculator and a list of all the students in the school that our school secretary kindly printed off for us. Next, they found a way to find out how that student would answer their question. Proof that we are a small school: my students did not have to actually come in contact with all of the students they randomly selected. For example, one student wanted to know what proportion of DHS students play school-sponsored sports. After doing his random selection, he could look at the list and instantly know which students were and were not enrolled in athletics. For those students he was unfamiliar with, a quick question to the rest of the class gave him the information he needed. The collected data was used to find their p-hat value. I asked them to make sure the conditions were met to form a confidence interval. And, I dictated that we would be finding a 95% confidence interval. 
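The one-proportion z-interval the students computed can be sketched as follows. The counts here are made up for illustration (say 24 of 35 sampled students answering "yes"); they are not the students' actual data:

```python
import math

successes = 24   # hypothetical "yes" answers -- not the students' real data
n = 35           # sample size used in the class project
z = 1.96         # critical value for a 95% confidence level

p_hat = successes / n

# Conditions for the normal approximation (what the students had to check):
assert n * p_hat >= 10 and n * (1 - p_hat) >= 10

se = math.sqrt(p_hat * (1 - p_hat) / n)   # standard error of p-hat
margin = z * se
lower, upper = p_hat - margin, p_hat + margin

print(f"p-hat = {p_hat:.4f}, 95% CI = ({lower:.4f}, {upper:.4f})")
```

With these made-up counts the interval comes out to roughly (0.53, 0.84), the same shape of result the students reported on their posters.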
If the conditions were met, their task was to find the confidence interval and express their results on a mini-poster. These posters contain only a summary of their work. They showed all of their work in detail using the EMCCC model on a separate sheet of notebook paper. I loved seeing how invested the students were in their projects. It was great to see them come up with their own questions, generate their own random samples, survey the students, perform the necessary calculations and analysis, and summarize their findings. This has been my first time ever teaching statistics, so there are definitely a lot of things I would like to change in the future. But, I'm so glad I got the chance to teach this course this year. It's been awesome to be able to expose my students to a new field of mathematics!

Four out of my five students completed the project. Here are their questions and findings:
What percent of DHS students own an iPhone?
What is the proportion of DHS students that have at least one full sibling?
What proportion of DHS students participate in school athletics?
What percent of DHS students plan on attending college in the near future? This one is hard to read because the information was written so small. You can be 95% confident that the actual proportion of Drumright students planning to attend college is somewhere between 0.5132 and 0.8268.

Last week, I finally took down the snowflakes that have been decorating my bulletin board for months. After all, it was post-Spring Break. It was April. Leaving them up would be just asking for it to snow. Apparently, Mother Nature didn't get the memo. I guess she didn't realize that I took down my snowflakes. Last weekend, it was 80 degrees. This Monday, it snowed. And, I'm not just talking a flurry or two. It snowed for two to three solid hours! It melted on contact, but still - snow in mid-April?!? Craziness! Just last weekend, I was taking pictures of all the beautiful spring flowers at my parents' house.
Spring Flowers How did I know it was snowing? Because I got to watch it snow for hours. When you're administering a state test from 8:30-11:30 or so, there are only so many things to look at. I took students by bus to test at our middle school's library, and the back of our lab looked outside. The snow flurries were a beautiful sight, but I'm ready for the warmer weather to return! Can you tell what these are? Origami Letters to Students They are origami, Easter egg colored, letters. Every January, I have my students celebrate Universal Letter Writing Week. I've blogged about it before, and it's one of my favorite projects. I truly believe there is power to a written note. Since my Algebra 1 students were testing, I decided to write a note to each one of them - all 41 of them. To make life slightly easier on me, I wrote each letter on a half-sheet of paper. This made it easier to write a note to fill up the paper, but it made it a tad trickier to fold the note. Bundles of encouraging letters to hand out right before state testing I didn't time the project exactly, but it took me between 4-5 hours to write and fold and address and pick out stickers to place on each letter. I watched live tornado coverage, a special on the Boston Marathon Bombing, The Amazing Race, The Good Wife, and an episode of M*A*S*H in the time it took me to write a note to each student. By the end of my marathon writing session, my hand was about to fall off! I had to alternate between writing letters and folding letters since they used different muscles. (I also shouldn't have waited to write these Sunday night for Monday morning's Since I would only be administering the test to one group of students, I wanted to be able to give a pep talk to all of my students right before they tested. I've yet to master the art of cloning myself to be in three places at once, so I decided to write letters that could be handed out by the test administrator right before they tested. 
This allowed me to be there and encourage them without physically being there. After all, who doesn't like to receive a letter? I know I love getting letters and mail. I got the sweetest letter from a student earlier this month. It's now hanging on my wall as a daily reminder. I doubt my students realize this, but I keep all the letters they write me. When I'm having a bad day, it's an awesome thing to be able to be reminded of the positive difference you have made in the lives of your students. So many of them wrote me letters in January. It was time for me to write letters in return. Letter From A Student (I love that she's a freshman, and she's already thinking about coming to visit me after she graduates!) My students LOVED the letters! Earlier this semester, I posted about how I felt convicted to write more notes to my students. I had the best of intentions, but life soon got busy. And, my never-ending to-do list became my priority. Most students were shocked to see that I had written them a letter. You mean, you wrote this? Of course, they wanted to know if all the letters said the same thing. No, I wrote you each a personal letter. Though, some were a lot more personalized than others. To the student who had to reschedule getting their driving permit to take their EOI, I ended the letter with "Good luck on your test today! I know you'd rather be taking your driving test, but I'm confident that you are going to rock your EOI today!" To the student who thought it would be funny to throw a cricket at me back at the beginning of the year, I wrote, "I've enjoyed having you in class this year. But, I'm still not sure if I've entirely forgiven you for the cricket incident! Note to self: when a student tells you that they ate cricket pizza at the state fair, don't mention that you hate crickets. Two of your freshman boys will fill an empty water bottle with bugs. You will get an e-mail from the science teacher warning you about said water bottle.
You will think to yourself, it's okay. I know the bugs are coming. They're in a water bottle. These boys think they are going to scare me. But, it's going to be okay. The bugs are in plastic. They can't hurt me in plastic. (Can you tell I HATE bugs?!?) When they come in your room at lunch, you will remain calm. And, you'll stay calm until the lid comes off the water bottle. One minute, you're sitting at your computer, minding your own business. The next minute, there is a cricket in your face. Then, it's in your lap. Then, you're standing up. You're screaming and hysterical. The students eating lunch in your classroom find this whole thing hilarious. You are being videotaped as you threaten the student that he better remove the crickets from your room if he wants to remain living. He eventually acquiesces. The bugs are rebottled. The bottle and the freshman boys leave your room. The video is dubbed as hilarious and shown by this one student to every other student in the school who will take the time to watch their math teacher scream and go into a tizzy over a cricket. You hope that the video doesn't get posted on the Internet. After all, you did threaten the student. But, then again, he did shove a cricket in your face and drop it on your lap. Later in the year, you will laugh at this incident. But, it will take a while. As I wrote the letters, it became clear to me that there were a number of students that I never formed that personal connection with this year. And, that's sad. Their letters were kinda generic, and they could tell. To all my students, their letters contained a variant of the following: "Good luck on your Algebra 1 EOI today! You've worked hard this year, and I am positive that you are going to rock this test! Take your time on the test. Answer every single question! Pay special attention to your positives and negatives! Check and double check your work! I've enjoyed having you in class this year. Now, go and show this test who's boss!" 
For some students, I would add reminders of things they often struggled with. For example, I have a few students who have a terrible time remembering the formula for point-slope form. So, I added a reminder of that in their letter. The students in my testing session thanked me for the letters right away. I could really tell that my gesture meant a lot to them. Others who had other testing sessions hunted me down later in the day to tell me how much they appreciated the note. One student stopped by my room and said, "I didn't think I was going to be able to pass my EOI, then I read the letter you wrote me. And, I knew that I could pass the test after reading it. Thanks for the letter!" Then, the same student thanked me again when his class period came around. My kiddos deserve to know how proud I am of them! And, I need to make an effort to tell them that more often! Letters truly do make a difference! I also treated my kiddos to peppermints and a last minute pep-talk before the test. Pre-Test Peppermints Maybe pep-talk isn't the best description for what went down before we started testing. I explained that a standardized test means standardized instructions. Once I start reading out of the green book, I can't deviate from the script. So, anything they want to know or discuss has to happen well before the book is opened and anything is passed out. Take your time. This is not a timed test. Do not leave any questions blank. If you leave a question blank, I will kill you. Yes, I threatened my students, but I did it out of love. Apparently, I threaten my students quite frequently. Deal with it. A few students asked last minute math questions. What's point-slope form again? How do I know if I should use an open circle or a closed circle? I could tell that they were stressed. Algebra 1 is their first high-stakes standardized test. The state of Oklahoma says that they must pass the Algebra 1 test to graduate. 
They've been taking standardized tests since third grade, but this is the first one that really and truly counts. Though, they can't get a driver's license without passing their 8th grade reading test. So, that's a pretty important test for them, too. I was stressed, too, but I tried to hide it. I'm not sure how great of a job I did of that.

Since I would only be administering a test all morning, my morning classes had to report to the gym. They were excited for this, but after a week of testing, hanging out in the gym has lost its appeal.

Earlier this week, I wrote that pass rates don't tell the whole story. So, I'm going to share my Algebra 1 pass rate this year, and then I'm going to try to share the amazing stories that the pass rate doesn't communicate.

I have 26 regular education and 15 special education students in 3 sections of Algebra 1. All of my regular education students tested on Monday. Some of my special education students have tested, and others won't test until next week. What I'm about to write applies only to my regular education students. I have yet to get scores for my special education students.

I teach in a school district that does not have the manpower or resources to offer remediation classes. Every 9th grader is automatically enrolled in Algebra 1. There is no Remedial Algebra 1. There is no Honors Algebra 1. We have no Pre-AP classes. There is no Pre-Algebra. There is no Algebra Concepts. There is no extra Math Lab. We simply have Algebra 1. If a student failed 8th grade math, they take Algebra 1. If they made a 100% in 8th grade math, they still take Algebra 1. If a student isn't prepared for Algebra 1, there's no class to send them to. They get to stay. If I could change things at my school, this would be among the first of my changes.

22 out of 26 passed. Of course the math teacher in me has to convert that to a percent. 85% of my students passed. Last year, 90% of my regular education students passed. So, that hurts.
One student missed passing by one question. One more question right would have changed his label from "Limited Knowledge" to "Proficient." That hurts. Another student who didn't pass missed 9 weeks of school. Still, I feel like I could have and should have done more to help her. One of my students who didn't pass was the one I was hoping more than anybody else would pass. She moved into the district part-way through the year last year. She was in my Algebra 1 class. She tried to follow along with what we were doing, but she was so far behind that she never could quite catch up. She scored "unsatisfactory" on the EOI, answering only a fourth of the questions correctly. This year, she retook Algebra 1. She had our other math teacher for the first semester, but she ended up transferring into my class second semester. She still struggled greatly, but I could see glimpses of understanding. She worked hard. She asked a ton of questions. And, she decided, I think, to take charge of her own learning. When we were reviewing point-slope form, she would ask for more and more practice questions. The expression on her face when she got the equation correct was PRICELESS! I can remember thinking, "This is why I teach. These moments are the reason why I put up with all the not-so-nice things that come with teaching." She worked hard during the EOI. She took her time. She did everything I asked her to do. When her score came up on the screen at the end of the test, she covered it with her hands because she was too afraid to look. I pulled her hands back to take a peek; I was ready to congratulate her on passing. The number on the screen made my heart sink. She had gotten just under half the questions right. The screen said "Limited Knowledge." She was oh so close to passing. Yes, I share in her disappointment. But, more than that, I am incredibly proud of her. Our instinct is to look at the score and say she failed. 
We should be looking at the fact that her score almost doubled in one year. Doubled! That's something we should be jumping up and down about. That deserves to be celebrated! Instead of looking at my pass rate, I will picture the face of my student who went from Unsatisfactory to Limited Knowledge in one year.

I think about another student who scored Unsatisfactory on her 8th grade math test. This week, she surprised herself by scoring Advanced on her Algebra 1 EOI. That calls for a happy dance! Her hard work paid off big time!

This student and I started out the year as enemies. It was a hassle to get her to do anything. She would much rather talk to her friends and play on her phone than do algebra. And, we battled over this daily. She made it clear that she didn't think she was good at math, and apparently I wasn't a very good teacher because she still didn't understand. I bit my tongue and held back words so many times.

Then, something happened, and she started participating. She started asking questions. She changed seats to get away from people who distracted her. She started pairing up with the students who excelled in class. Instead of copying their work, she would pick their brains and ask for help. Our relationship took a complete 180. When I would unsuccessfully try to quiet the class (something I'm working on!), she would yell at the class to be quiet so we could learn. Somehow, they would always listen to her. I can't tell you how many times I've heard her say, "Guys! We're trying to learn!" And, learn she did!

(She also had my back when other students would say not so nice things about me. "Ms. Hagan, someone said you were a snob, but I stood up for you and told them that you were not a snob!" I kinda doubt that my students even really know what it means to be a snob...) Though, she still makes it clear that my jokes are NOT funny. That itself is funny because I have her sister in Algebra 2, and her sister thinks my jokes are hilarious!
I also can't help but picture the three students who scored Limited Knowledge on their 8th grade math test and pulled off a Proficient this year in Algebra 1. They worked hard this year, and they passed. I'm so happy for them! And, I can't forget the six students who scored Proficient in 8th grade math and scored Advanced in Algebra 1. Growth happens when you push students beyond what they think is possible.

It's been a year with a lot of success stories. My freshmen have matured so much over the course of this year. I was worried earlier this year that I hadn't made the type of connections with them that I made with my freshmen last year. But, the connections and relationships came.

I'm already looking forward to next year. I'm thinking about things to change. Things to keep. Things to do differently. Things to stop doing altogether. As the rest of the year winds to a close (less than a month left!), I'll be sure to blog about my reflections and ideas for next year.

Thanks for reading this! And, thanks for giving me an opportunity to share the stories behind the testing. It means a lot to me. My students are more than the label given to them by the test, and, as a teacher, I am more than my pass rate.

Since I started teaching at Drumright, there has been this poem written on the chalk board in the teacher's lounge. One can't help but look at it while making copies. I've googled phrases from it and found nothing, so I'm guessing it's an original poem.

[Photo: Teaching Poem]

My school starts state testing Monday. And, my Algebra 1 students will be the first students to test. I actually asked for Algebra 1 to test first. That sounds like an insane thing to do, but once testing starts, it is possible to go a week or more without seeing certain students. I wanted there to be as little time as possible for students to forget what we have learned between the last time I see them and the time they test. I am fighting a "war of numbers, letters, and EOI scores."
As panic and dread start to set in on my part, I have to remind myself that I've done my very best. As my mother often tells me, "If you've done your best, that's all you can do." I can't tell you how many times she repeated that to me when I was in high school and college. And, it didn't stop after I walked across that stage and received my diploma. She still has to remind me that if I've given my best, then there's nothing more that I can do. Stressing over what may or may not happen after that is just torturing yourself.

I've done my best. I've taught my heart out this year. And, I certainly hope my test scores reflect that.

Last year, 12/12 (100%) of my Algebra 2 students passed their EOI exam. Last year, 35/41 (85%) of my Algebra 1 students passed their EOI exam. I fear that this year's scores will compare poorly to last year's scores. There is no doubt in my mind that my Algebra 2 pass rate this year will drop severely. But, I don't think that's necessarily a bad thing. Oh, I'm sure some people will look at it and think that I dropped the ball, that I didn't do my job this year. But, a pass rate doesn't tell the whole story.

Our math program is slowly improving after being neglected for many years. We had a 167% increase in the number of students taking Algebra 2 in one year. That's major. Major. That means more students are passing Algebra 1 and Geometry. Yes, many have struggled this year. They've struggled greatly. Probably over half of my Algebra 2 students lack a strong Algebra 1 foundation. I can't do anything about the past, though. I wasn't here then, and the math program lacked rigor. The Algebra 2 pass rate before I came here hovered between 39 and 43 percent.

More and more, I'm realizing that test scores don't tell the whole story. Pass rates don't tell the whole story. They are simply part of the story. The EOI is a battle. But, it isn't the war. The war is greater than a single test. What is the war? Getting students to graduate?
Is that my end-goal? Shouldn't it be something loftier? Preparing them for higher education? College and career readiness? Creating productive citizens?

In college, one of our assignments was to write our Philosophy of Education. Looking back over mine, it seems all over the place. I wrote about habits of mind, encouraging teamwork, inspiring lifelong learning, character development, and how teaching should be like teaching a living language. Now that I've taught for almost two years, maybe it's time I revisit my Philosophy of Education. What exactly am I trying to do? Am I actually doing it? Alas, I'll have to find time for that later. For the next two weeks, my focus is on a battle, not the war.

Check out previous versions of Things Teenagers Say here: Volume 1 | Volume 2 | Volume 3 | Volume 4 | Volume 5 | Volume 6 | Volume 7 | Volume 8 | Volume 9

What starts with an m, ends with an h, and makes children cry?
One of my Algebra 2 students came up with this joke on his own. I didn't find it very amusing, but I could tell by the laughing that the rest of my class could relate. If I remember correctly, this was said on our first day of polynomial long division...

Mentally, I'm in the 2020's. But, physically, I'm in 2014.

I killed the cat, and I don't know how to fix it. Help!

You've got to follow the rule of math: Please excuse my something something something.

My graph is so cute!
Best thing to hear from a student during a test!

My arteries hurt because of you.

Some of my ellipses look like drunken, squished soldiers.

You're weird, Ms. Hagan. But, that's okay. I love weird teachers. Weird teachers are more fun.

Can you smell the number nine?
Yes, yes I can. Now what did I just agree to?!?

Student: It smells like burning wood in here.
Me: What?
Student: Sometimes I set pencils on fire in my bedroom, and your classroom smells exactly like that.
I don't even want to know!

Me: I need a week off.
Student: Why? So you can spend it with your cats?
And, for the record, I still don't own a single cat. I have no clue where my students got the idea that I'm a crazy cat lady!

Tomorrow, I'm going to have the X-Box flu.
So, that's what they call it nowadays when your parents call you in sick when you're not...

Me: What does the word linear remind you of?
Student: My mom.
Me: Why would the word linear make you think of your mom?
Student: My mom gave birth to me. Everything makes me think of her.

Somewhere along the way, I started putting lines through my z's without realizing it.
Another convert. I started putting lines through my z's because of my 8th grade Algebra 1 teacher. Now, students are putting lines through their z's because of me!

Student: I was explaining the quadratic formula to my family.
Me: Oh, and what did they think?
Student: Their brains were hurting just like mine.

The day before Spring Break, I had the privilege of taking a group of students to STEM Day, a morning of science, math, and engineering competitions at Central Tech, our local regional technology center. This was my first year to attend because I ended up having to miss it last year due to the stomach bug.

[Photo: STEM Day Registration]

There were written math tests in Middle School Math, Algebra 1, Geometry, Algebra 2, and Advanced Math. Students could also enter a Paper Airplane Competition, Scientific Problem Solving Competition, or Ping Pong Launcher Competition.

[Photo: STEM Day]

Almost 800 students were in attendance!

[Photo: Just a small portion of the students in attendance]

Tables were set up by local colleges and groups to entertain the students who were not competing at the moment. There were dry ice cheese balls that I did not try. It was fun to watch kids try to eat them with their mouths open to let the steam escape!

[Photo: Frozen Cheese Balls]

Students could try their hand at operating a robotic arm. It was harder than it looked!

[Photo: Robotic Arm Trainer]

Robotic cars were on display.

[Photo: Robotic Car]

This Vortex Cannon could shoot smoke rings across the room.
Very cool!

[Photo: Vortex Cannon]

I'm not sure what this was called, but students stood on a special plate and touched the large ball while it charged. Then, if they placed their hand near the small ball, it would force it to move.

Here are some of the airplanes that my students ended up creating for the paper airplane contest. Students were given 10 minutes to create an airplane out of 3-20 sheets of paper and 3 paper clips. Students were not allowed to cut or tear the paper in any way.

[Photo: Paper Airplane Competition]

Mini rockets were constructed out of straws and paper.

[Photo: Straw Rockets]

All in all, it was a fun and informative morning. Only one student ended up taking home a medal, but I'm thankful that my students were able to have this experience. As a school that offers very little in the way of upper level math and science courses, this may be the only exposure that some of my students get to the world of STEM. Plus, it was a great way for my students to see the opportunities they have to explore STEM careers through our local tech center.

Have you ever wondered, "Hey, what would Ms. Hagan look like if she was a punk rocker chick?" Yeah, me neither. But, apparently one of my students has wondered that. So, I present to you: Ms. Hagan, Punk Rocker Chick. I tell you, teaching high school is NEVER dull!

So, I'm pretty sure if I hadn't learned anything new from #edcampTULSA, it would have still been worth it to attend just for the opportunity to snoop around other teachers' classrooms to see what they are up to. I've posted about the math and science specific ideas I plan on stealing. Today, I'm posting pictures of more general ideas I'm thinking about incorporating in my own classroom.

My first experience with a parking lot in the classroom was at an OGAP conference last summer. Every table was given pads of sticky notes. And, we were told that we could post any questions, comments, or concerns that we had on the parking lot.
And, the coordinators would make sure that they were covered or taken care of. I don't think anybody used it at all. That doesn't make it a bad idea, though.

[Photo: Parking Lot Poster]

One of my main goals for this summer is to work on my classroom management strategies. I'm going to be really honest. Classroom management is probably my weakest area in the classroom. I need to make some major changes. I went into teaching assuming that high school students were capable of knowing when it was appropriate and inappropriate to do certain things. For example, when I'm working a problem out on the SMART Board, it is inappropriate to have a conversation with your neighbor. But, you wouldn't know that by looking at my students. I'm getting sick and tired of hearing myself say "You should not be talking right now. You should not be talking right now."

After two years, I've learned that going in without a plan does not lead to a well-managed classroom. So, next year is going to be different. I'm finally seeing the importance of procedures and everything else that I read about before I started teaching. I'm thinking that if I can train my students to use the parking lot from the very beginning of the school year, it could be very beneficial. It's going to take training and practice, though.

If I am going to have more procedures, I am going to need to find a way to communicate those procedures to my students. I liked this procedure sign that I found in one classroom.

[Photo: Classroom Procedure Poster Reminder]

I liked these Book / Brain / Beyond posters that I saw posted in several classrooms. I know I need to ask my students to do a lot more tasks involving #2 and #3.

[Photo: Book / Brain / Beyond Posters]

I have a box of these clear plastic dry erase pockets in my cabinet. I used them a lot last year, but this year I've been using double sided dry erase boards that my school purchased for me. I liked the idea of storing papers in these pockets for student access.
This teacher used the pocket to hold talent show applications. But, I could store anything in them. Since they are see-through, students could easily see what they were accessing.

[Photo: Using Dry Erase Pockets for Organization]

Outside one classroom, the teacher had a frame that invited students to ask them about their college experiences at Oklahoma State University and The University of Tulsa. As a Tulsa grad myself, I was excited to see someone else repping the Golden Hurricanes. It made me realize, though, that I'm not quite sure I've ever asked my students to ask me about my college experience. I've got a couple of TU flags hanging in my classroom, and I'll gladly answer questions. But, I've never really sought out their questions. I teach in a community where the majority of our students do not go on to higher education. Some do, and hopefully more will in the future. In the meantime, I need to make sure that my students know that I am more than willing to sit down with them and talk about what college is like.

[Photo: College Display]

In one classroom, I saw a teacher use baseball card holder pages to display senior pictures. I thought this was a brilliant idea! Of course, it's not quite feasible for my classroom and situation. In two years of teaching, I've been given one senior picture. ONE. Maybe I'll be able to collect this many senior pictures by the time I retire...

[Photo: Senior Picture Display]

For the past two years, I've had all of my students turn their papers into the same tray. Next year, I think I'm finally going to make trays for each class. This should save me time and frustration in grading. Now, I just have to find a place in my classroom to put six different trays. This should be interesting...

[Photo: Turn In Trays by Hour]

I'm going to be honest. I'm not sure what this teacher used this pocket chart for. But, it definitely caught my eye. I'm posting this more as a reminder to myself to use the pocket charts I bought last summer at Target.
Actually, I need to figure out what I even did with the pocket charts. I haven't used them at all. If anyone has any great ideas for using pocket charts, please share!

[Photo: Pocket Chart]

I'm also starting to think about how I want to grid my dry erase board next year. I've taught for two years and done it two different ways so far. I kind of like the idea of showing a whole week at once. But, I've never done it this way. And, I'm not sure if it would make it harder or easier to maintain.

[Photo: Assignment Grid Board]

I think I posted a similar version of this bulletin board yesterday. This one features a day of the week on each folder, though. I can see myself using this for either absent work or extra handouts.

[Photo: File Folders on Bulletin Board]

Isn't this bulletin board adorable? The teacher took a picture of each class period that she teaches. Every week, she selects a student from each class to fill out a survey about themselves. Their answers are displayed next to the picture of that class inside a picture frame. The board is labeled as Gents and Ladies. I'm thinking of doing this next year to replace my Star Students Board.

[Photo: Gents and Ladies Bulletin Board]

I'm also in love with this turn in tray. It's just a cardboard cover for a stacking paper tray. But, it prevents students from retrieving papers after turning them in or looking at other people's papers. Plus, I love that the make-up papers have to go in a separate tray. I'm thinking about changing my policy on late work for next year, so having a dedicated tray for that would be especially helpful.

[Photo: Paper Turn In Tray]

Outside of each classroom, each teacher posts what book they are reading, what book they just read, and what book they want to read. I would love to see this happen at my school. What would happen if I just made these signs and hung them up outside each classroom? Do you think teachers would just start using them? It couldn't hurt, right???
[Photo: Classroom Reading Poster]

I saw these "Time to be Kind" clocks in several classrooms. One of these clocks was distributed to every teacher to post in their classroom. I'm thinking that this could be a student council initiative next year. We could have a week that focused on random acts of kindness. And, these could be posted around the school as a reminder.

[Photo: Time to be Kind Clock]
limiting distribution
December 4th 2007, 01:18 PM

Let Y_1 denote the first order statistic of a random sample of size n from a distribution that has pdf f(x) = e^{-(x - θ)}, θ < x < ∞, zero elsewhere. Let Z_n = n(Y_1 - θ). Investigate the limiting distribution of Z_n.
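For readers who land on this question: a sketch of one standard route (my addition, not part of the original post). The parent cdf is $F(x) = 1 - e^{-(x-\theta)}$ for $x > \theta$, so for $z > 0$,

```latex
P(Z_n \le z) = P\!\left(Y_1 \le \theta + \tfrac{z}{n}\right)
             = 1 - \left[1 - F\!\left(\theta + \tfrac{z}{n}\right)\right]^{n}
             = 1 - \left(e^{-z/n}\right)^{n}
             = 1 - e^{-z}.
```

In other words $Z_n$ has the standard exponential distribution for every $n$, so the limiting distribution is standard exponential as well.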
Finding Cross Derivatives Numerically in the Corners and Along Edges
May 28th 2013, 12:02 PM

I am trying to write a program that takes a square matrix with given values and returns another matrix with the value of f_xy at all of the locations. I need to find each cross derivative numerically because I am not given a function. I understand how to find it for points that are not on the edge of the matrix, however, I do not know how to make a somewhat accurate estimate of the cross derivative for the points in the corners and along the edges. I have searched all over google for an answer and haven't come up with anything. If anyone knows some sort of formula I could use that would be very helpful.
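One common workaround, sketched below (my own suggestion, not an answer from the thread): keep the usual 4-point central-difference stencil for f_xy in the interior, and swap in one-sided (forward or backward) first differences along whichever direction would run off the grid. The uniform spacing `h` is an assumption.

```python
import numpy as np

def cross_derivative(f, h=1.0):
    """Estimate f_xy at every point of a square grid of samples f.

    Interior points use the 4-point central stencil; along edges and in
    corners, the difference in the offending direction falls back to a
    one-sided (forward/backward) difference, which is first-order accurate.
    """
    n = f.shape[0]
    fxy = np.empty_like(f, dtype=float)

    def stencil(i):
        # (upper index, lower index, spacing) for a first difference at
        # row/column i: central inside, one-sided at the two ends.
        if 0 < i < n - 1:
            return i + 1, i - 1, 2.0 * h   # central
        if i == 0:
            return 1, 0, h                 # forward
        return n - 1, n - 2, h             # backward

    for i in range(n):
        ip, im, dx = stencil(i)
        for j in range(n):
            jp, jm, dy = stencil(j)
            fxy[i, j] = (f[ip, jp] - f[ip, jm]
                         - f[im, jp] + f[im, jm]) / (dx * dy)
    return fxy

# Sanity check: f(x, y) = x*y has f_xy = 1 everywhere, edges included.
x = np.arange(5.0)
print(cross_derivative(np.outer(x, x)))
```

For smooth data the interior estimate is second-order accurate in h while the edge and corner estimates drop to first order; if that matters, higher-order one-sided stencils can be substituted at the boundary.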
Combinatorics of the Three-Parameter PASEP Partition Function

We consider a partially asymmetric exclusion process (PASEP) on a finite number of sites with open and directed boundary conditions. Its partition function was calculated by Blythe, Evans, Colaiori, and Essler. It is known to be a generating function of permutation tableaux by the combinatorial interpretation of Corteel and Williams. We prove bijectively two new combinatorial interpretations. The first one is in terms of weighted Motzkin paths called Laguerre histories and is obtained by refining a bijection of Foata and Zeilberger. Secondly we show that this partition function is the generating function of permutations with respect to right-to-left minima, right-to-left maxima, ascents, and 31-2 patterns, by refining a bijection of Françon and Viennot. Then we give a new formula for the partition function which generalizes the one of Blythe & al. It is proved in two combinatorial ways. The first proof is an enumeration of lattice paths which are known to be a solution of the Matrix Ansatz of Derrida & al. The second proof relies on a previous enumeration of rook placements, which appear in the combinatorial interpretation of a related normal ordering problem. We also obtain a closed formula for the moments of Al-Salam-Chihara polynomials.
Puzzle: How to make "g" (acceleration) equal to 10.0

This is a tricky puzzle question I thought up some time ago, but I figured I would blog it. As people who study physics know, the acceleration a falling body undergoes if dropped (in a vacuum) at the surface of the earth is known as "g", or 9.8 meters per second per second. This is so close to 10 that most students and people doing back of envelope calculations often use 10 as the value of a "g". It's easy. Fall for one second and you're going 10 meters/second.

So I got to wondering, how much would the earth have to change to make "g" equal to 10 instead of 9.80665? So here's the puzzle. By what ratio would you have to increase the diameter of the Earth so that the people on this bigger planet would have "g" equal to 10? Assume the average density of the Earth remains the same (5.46 g/cc). Then click to the Puzzle Answer.
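The scaling behind the puzzle is worth writing down before clicking through. For a uniform-density sphere, g = GM/R² with M = (4/3)πρR³ gives g = (4/3)πGρR, so at fixed density g grows linearly with the radius, and hence the diameter. A quick sketch (the constants below are my own standard values, not from the post; skip the last two lines if you don't want the answer spoiled):

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
RHO = 5460.0        # the post's mean density, 5.46 g/cc, in kg/m^3
R_EARTH = 6.371e6   # mean radius of the Earth, m

def surface_gravity(radius_m, density=RHO):
    """g at the surface of a uniform-density sphere: (4/3)*pi*G*rho*R."""
    return (4.0 / 3.0) * math.pi * G * density * radius_m

g_now = surface_gravity(R_EARTH)   # close to, but not exactly, 9.8 m/s^2

# Since g is proportional to R at fixed density, the required diameter
# ratio is simply the ratio of the target g to the measured g:
ratio = 10.0 / 9.80665
print(ratio)   # roughly 1.02, i.e. about a 2% larger diameter
```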
Is Lake Turkana deeper than Lake Tinaroo?

Lake Tinaroo: also known as Tinaroo Dam, the man-made reservoir on the Atherton Tableland in Far North Queensland, Australia.

Lake Turkana: formerly known as Lake Rudolf, the lake in the Great Rift Valley in Kenya, with its far northern end crossing into Ethiopia.
[FOM] "Global" versus "local" dimensions in logic
Joao Marcos vegetal at cle.unicamp.br
Wed Sep 8 15:28:53 EDT 2004

> This is a good opportunity to mention an important fact that almost
> all textbooks I know hide or ignore. In my paper "Simple Consequence
> Relations" (Information and Computation 92, 105-139, 1991), as well
> as in my contribution to the volume "What is a Logical System" (edited
> by D. Gabbay) I emphasized (though I don't claim any priority here,
> of course) that there is no single "First Order Logic",
> but there are TWO different ones. They share the same class of languages,
> and have the same set of logically valid formulas, but they differ
> in their consequence relations. According to one (which I have called
> the "truth consequence relation"), a formula A follows from a theory T
> if every pair <M,v> (where M is a structure for the corresponding
> language, and v is an assignment in the domain of M) which satisfies all
> the formulas in T satisfies also A. According to the other
> (which I have called the "validity consequence relation"),
> a formula A follows from a theory T if A is valid in every structure M
> (i.e.: satisfied in M by every assignment v) in which all formulas
> of T are valid.

In the philosophical literature, truth-preserving rules are often called "inference rules", and contrasted to validity-preserving rules, known as "deduction rules". A more widely adopted denomination nowadays calls the former "local rules" and the latter "global rules", and a local / global consequence relation is one defined exclusively by collecting local / global rules. These are like 2 dimensions of a logic, but there might of course be more, depending on which logical features you want to quantify over.
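Written out side by side (my notation, added for readers following along; $M, v \models B$ means the pair satisfies $B$):

```latex
\begin{aligned}
T \models_{\mathrm{truth}} A
  &\iff \forall M\,\forall v:\ \bigl(\forall B \in T:\ M, v \models B\bigr)
        \;\Rightarrow\; M, v \models A,\\[4pt]
T \models_{\mathrm{validity}} A
  &\iff \forall M:\ \bigl(\forall B \in T\ \forall v:\ M, v \models B\bigr)
        \;\Rightarrow\; \forall v:\ M, v \models A.
\end{aligned}
```

Taking $T = \varnothing$ makes the two conditions identical, which is exactly why the two logics share the same set of logically valid formulas while differing as consequence relations.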
There is a wealth of papers in which the difference between local and global consequence relations is explicitly stated, among others all the papers produced in the last few years by the "Portuguese school" that investigates combinations of logics (check for the word "fibring" at http://wslc.math.ist.utl.pt/s84.www/cs/clc/publist/publist.html). Arnon asked though about BOOKS that do mention that. There are at least 2 that I can remember of, namely, Blackburn et al's "Modal Logic" and Rybakov's "Admissibility of logical inference rules".

> It is worth noting that these two consequence relations are
> not peculiar to FOL, and the difference is not
> due to the presence of quantifiers, but to the use
> of variables inside the formal language.

I do not see in fact why this distinction would be a privilege of first-order logic. Arnon's examples about the deduction theorem and the uniform substitution, for instance, clearly show the usefulness of such a discrimination already at the propositional level. What would be the "variables" in the deduction theorem, or in the necessitation rule in modal logic (a rule that holds globally but not locally)?? Why is it that such a distinction between local and global consequence relations is in general overlooked in the literature?
I believe there might be several reasons for that, including:

(1) the coincidence between the set of local rules and the set of global rules in classical propositional logic, and in a few other more usual non-classical logics

(2) the fact that most logics people work with are compact and respect some form of the deduction theorem, what allows for the fundamental logical notion of *inference* to be substituted by the poorer notion of *theoremhood* (the local and the global consequence relations as defined in Arnon's message coincide over the set of theorems of a logic)

(3) the fact that sequents for usual logics are most of the time finitary and often can count on some form of the deduction theorem, reducing this to reason (2)

(4) the more comprehensive fact that, while the local and the global dimensions are clearly distinguishable in a Hilbert-style axiomatization, they become transparent in sequent-style systems

(5) the fact that the distinction is often emphasized for syntactical reasons, but makes more sense from a semantical perspective

Perhaps other people from the list will see other reasons? Perhaps they will see reasons to disagree that this is a relevant issue?

Joao Marcos
Noise Criteria Calculator Documentation

The Noise Criteria Calculator is a web application that calculates certain single-number ratings and generates graphs based on the input of a sound pressure level (SPL) spectrum. The Noise Criteria Calculator is located at http://michaelschwob.com/noise-criteria-calculator

The criteria calculated are the traditional noise criteria (NC), the extended NC (eNC), the room criteria (RC Mark II), A-weighted level (dBA) and C-weighted level (dBC). The equations, calculation procedures and algorithms are based on the references cited below.

Because the Noise Criteria Calculator is a web application it will operate on any computing device that uses one of the following web browsers with Javascript enabled: Microsoft Internet Explorer, Mozilla Firefox, Opera Browser, Google Chrome. Calculations and image generation are performed on the web server so your computer's processor and memory are not used for analysis. The application was not developed to be used on devices with very small screens, such as smart phones. Although it will work, it may be cumbersome to use on these devices.

The Noise Criteria Calculator input form has two components: the sound pressure level input fields and the graph option buttons. (The form directly below is not operable. It is used, on this page, for illustration purposes.)

The sound pressure level input fields correspond to the standard octave bands from 16Hz to 8kHz. This frequency range is inclusive of the bands required for the NC, eNC and RC Mark II. The bands used for each criterion are:

    Criteria     Required Frequency Bands
    NC           63Hz, 125Hz, 250Hz, 500Hz, 1kHz, 2kHz, 4kHz, 8kHz
    eNC          16Hz, 31.5Hz, 63Hz, 125Hz, 250Hz, 500Hz, 1kHz, 2kHz, 4kHz, 8kHz
    RC Mark II   16Hz, 31.5Hz, 63Hz, 125Hz, 250Hz, 500Hz, 1kHz, 2kHz, 4kHz

If you are only interested in a particular criterion then you only need to provide the levels in the bands required for that criterion. dBA and dBC are averages and have no band limitations.
The sound pressure level input fields will accept any number from 0.0 to 99.9. Negative numbers and non-numeric characters (other than a decimal point) cannot be entered. If a field is left blank, it will be interpreted as 0.0.

The graph option buttons produce the following results:

NC: Graph the input sound pressure levels over the traditional NC curves as described in Reference 4.
eNC: Graph the input sound pressure levels over the extended NC (eNC) curves as described in ANSI S12.2-2008.
RC Mark II: Graph the input sound pressure levels over the RC Mark II curves as described by Reference 7.
None: Graph the input sound pressure levels without reference criteria curves.

The Noise Criteria Calculator results section contains four parts: the calculated noise criteria, notes, errors and the data graph. Following are the possible results for each criterion:

NC:
NC<15: All levels in the input spectrum are less than the NC-15 curve, which is the lowest level curve. The input spectrum cannot be rated.
NC-Y at X Hz: Y is the NC value as determined by tangency at the X Hz octave band.
NC>70: A level in the spectrum is higher than the NC-70 curve, which is the highest level curve. The input spectrum cannot be rated.

eNC:
eNC<15: All levels in the input spectrum are less than the eNC-15 curve, which is the lowest level curve. The input spectrum cannot be rated.
eNC-Y at X Hz: Y is the eNC value as determined by tangency at the X Hz octave band. If the X Hz octave band is not indicated, then the eNC value was determined by the SIL according to the procedure in ANSI S12.2-2008.
eNC>70: A level in the spectrum is higher than the eNC-70 curve, which is the highest level curve. The input spectrum cannot be rated.

RC Mark II:
RC<25: All levels in the input spectrum are less than the RC-25 curve, which is the lowest level curve. The input spectrum cannot be rated.
RC-Y, QAI-X: Y is the RC value as determined by the SPIL. X is the Quality Assessment Index and has the format ND, where N is the magnitude of the QAI and D is the sound quality descriptor. D may have the values: N = Neutral, L = Low Frequency (Rumble), M = Mid Frequency (Roar), H = High Frequency (Hiss).
RC>50: A level in the spectrum is higher than the RC-50 curve, which is the highest level curve. The input spectrum cannot be rated.

dBA and dBC:
XdBA, YdBC: X is the A-weighted sound level of the input spectrum. Y is the C-weighted sound level of the input spectrum.

The notes section, just below the criteria, will appear if there is a possibility of acoustically induced vibrations and rattles in lightweight structures as determined according to the procedure in ANSI S12.2-2008. If this section appears, there are two possible messages:

Probability of clearly perceptible acoustically induced vibration and rattle in lightweight wall and ceiling construction.
Probability of moderately perceptible acoustically induced vibration and rattle in lightweight wall and ceiling construction.

The error section will appear just below the notes if an error is encountered while processing on the server. If there are no errors, this section will not appear.

The graph section at the bottom of the page will contain a PNG image of a line graph showing the input spectrum graphed according to the selected options described above. The octave band center frequencies are indicated on the x-axis (abscissa) according to the graph option. The sound pressure levels are indicated on the y-axis (ordinate) from 0dB to 100dB in 10dB increments. The selected criteria curves are shown in silver gray. The value of each criteria curve is shown at the end of the curve to the right. The input spectrum levels are shown in blue.

References
1. ANSI S12.2-2008, "American National Standard Criteria for Evaluating Room Noise"
2. ASHRAE 2005 Handbook, Fundamentals, Chapter 7, "Sound and Vibration"
3. ASHRAE 2007 Handbook, HVAC Applications, Chapter 47, "Sound and Vibration Control"
4. Beranek, L.L., "Revised criteria for noise in buildings," Noise Control 3, 19-27 (1957).
5. Beranek, L.L. and Ver, I.L., "Noise and Vibration Control Engineering: Principles and Applications," John Wiley and Sons, Inc., 1992.
6. Blazier, W.E., "Revised noise criteria for application in the acoustical design and rating of HVAC systems," Noise Control Eng. J. 16(2), 64-73 (1981).
7. Blazier, W.E., "RC Mark II: A refined procedure for rating the noise of heating, ventilating, and air-conditioning (HVAC) systems in buildings," Noise Control Eng. J. 45(6), 243-250 (1997).
8. Schaffer, M.E., "A Practical Guide to Noise and Vibration Control for HVAC Systems," American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc., 2005.
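The dBA and dBC ratings described above are logarithmic energy sums of the band levels after applying frequency weightings. A minimal sketch in Python; the octave-band A- and C-weighting corrections below are the standard IEC 61672 values, and it is an assumption that the calculator uses exactly these numbers:

```python
import math

# Standard octave-band A- and C-weighting corrections in dB (IEC 61672,
# rounded to 0.1 dB). Assumed, not confirmed, to match the calculator.
BANDS = [16, 31.5, 63, 125, 250, 500, 1000, 2000, 4000, 8000]
A_WEIGHT = [-56.7, -39.4, -26.2, -16.1, -8.6, -3.2, 0.0, 1.2, 1.0, -1.1]
C_WEIGHT = [-8.5, -3.0, -0.8, -0.2, 0.0, 0.0, 0.0, -0.2, -0.8, -3.0]

def weighted_sum_db(spl, weights):
    """Energy-sum the weighted band levels: 10*log10(sum 10^((L+W)/10))."""
    return 10.0 * math.log10(sum(10.0 ** ((l + w) / 10.0)
                                 for l, w in zip(spl, weights)))

def dba_dbc(spl):
    """Return (dBA, dBC) for a 16 Hz to 8 kHz octave-band SPL spectrum."""
    return weighted_sum_db(spl, A_WEIGHT), weighted_sum_db(spl, C_WEIGHT)
```

For a spectrum dominated by a single 1 kHz band, both ratings reduce to essentially that band's level, since the A and C corrections are zero at 1 kHz.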
Zariski-closed subsemigroups of SL_n(C) are groups

I would like to show that any Zariski-closed subsemigroup of $SL_n(\mathbb{C})$ is a group. If I understand correctly, this is consequence 1.2.A of http://www.heldermann-verlag.de/jlt/jlt03/BOSLAT.PDF. Is there a more elementary proof? For $SL_2(\mathbb{C})$, the result is quite easy to show directly, or using the Hilbert basis theorem.

Tagged: reference-request, linear-algebra, algebraic-groups

Note that the linked paper relies on the older Chevalley viewpoint about algebraic groups and algebraic geometry over arbitrary fields, which Hochschild also followed much later on. This doesn't work well in prime characteristic, so eventually the framework used by Chevalley and others shifted (and schemes came in). But for a field like $\mathbb{C}$ none of that really affects the question here. – Jim Humphreys Nov 22 '10 at 13:57

P.S. Tag added. – Jim Humphreys Nov 22 '10 at 17:52

What exactly is the Chevalley viewpoint? Considering algebraic groups as varieties rather than schemes? Or working with algebraic groups through their Hopf algebras? – darij grinberg Nov 22 '10

@darij: It's a question of using older language about varieties, with the ground field being arbitrary (but infinite). The function algebras certainly play a leading role, especially in Hochschild's Springer graduate text following Chevalley's 1955 book. Much of this works well enough in char 0, but otherwise gets out of control when you construct quotients, etc. Chevalley's version of algebraic geometry was a step toward what is now standard (and may have helped speed the transition). My education however started out with Weil's book ;-( – Jim Humphreys Nov 22 '10

3 Answers

It is quite elementary. Let $S$ be the semigroup in question. Then for any $g \in S$, the sets $g^kS$ for $k=1,2,\dots$ form a decreasing sequence of closed sets, hence the sequence has to stabilize by the descending chain condition on Zariski-closed sets.
So $g^kS=g^{k+1}S$ implies, after cancelling the invertible matrix $g^k$, that $gS=S$. Hence $S$ is closed with respect to taking inverses, and therefore is a group.

Thanks! Do you know where I might find a reference for this argument? – Colin McQuillan Nov 22 '10 at 13:02

@Colin: Unfortunately, I don't know a reference for this. – Keivan Karai Nov 22 '10 at 13:19

I'm also curious about whether this is written down somewhere, since I have a vague recollection of seeing the argument before. Anyway, the result is in a way analogous to the fact that a subsemigroup of a finite group is a subgroup. – Jim Humphreys Nov 22 '10 at 13:51

P.S. My "vague recollection" is probably based on Exercise 7.5 in my 1975 book on algebraic groups: A closed subset of an algebraic group which contains the identity and is closed under taking products is a subgroup. Actually, the identity element comes along for free. (As the context of my Section 7 suggests, the exercise is inspired essentially by Chevalley's 1955 book but was perhaps stated explicitly later on.) – Jim Humphreys Nov 22 '10 at 15:51

Dear Pete: au contraire. The argument adapts to work scheme-theoretically, and then has an important application to scheme-theoretic normalizers (useful for reductive groups): for a finite type group $G$ over a field $k$ (any char.) and a smooth closed subscheme $V$ in $G$, the functor $N_G(V)$ assigning to any $k$-algebra $A$ the set of $g \in G(A)$ such that $g V_A g^{-1} \subseteq V_A$ is represented by a closed subsemigroup (usually not smooth even if $G$ is, when char($k) > 0$). Want $N_G(V)$ to act on $V$ by automorphisms (i.e., stable by inversion)! See Def. A.1.9ff. in "Pseudo-reductive groups". – BCnrd Nov 22 '10 at 16:10

I think it's a matter of basic linear algebra. Generally, let $k$ be a field, and $L$ be some finite-dimensional $k$-algebra. (In our case, $k=\mathbb C$ and $L=\mathrm{M}_n\left(k\right)$.)
If $A\in L$ is invertible, then $A^{-1}$ lies in the Zariski closure of the set $\left\lbrace 1,A,A^2,A^3,...\right\rbrace$.

Proof. Let $k\left[L\right]$ denote the algebra of all polynomial functions from $L$ to $k$ (where a "polynomial function" means a function that can be written as a polynomial in the coordinates). (If $k$ is infinite, this is isomorphic to the non-naive algebra of coordinate functions, i.e. the symmetric algebra $\mathrm{S}\left(L^{\ast}\right)$, but we don't care about this isomorphism and therefore we don't need $k$ to be infinite.) Let $P\in k\left[L\right]$ be a polynomial such that $P\left(A^i\right)=0$ for every $i\in\mathbb N$. We must then prove that $P\left(A^{-1}\right)=0$ as well.

Let $N=\deg P$. Define a $k$-algebra $U$ by $U=\bigoplus\limits_{i=0}^N L^{\otimes i}$ as a vector space, but with the multiplication being inherited from $L$ on each summand. So, as a vector space $U$ is a "cropped" tensor algebra over $L$, but as an algebra it is a direct product! Then the polynomial $P:L\to k$ can be written as $P=p\circ s$, where $s:L\to U$ is the canonical map given by
$s\left(B\right)=1\oplus B\oplus \left(B\otimes B\right)\oplus \left(B\otimes B\otimes B\right)\oplus ...\oplus B^{\otimes N}$,
and $p:U\to k$ is some $k$-linear map. (In fact, this follows from the properties of the tensor algebra, because here we are NOT using the algebra structure on our $U$, but we are only using the vector space structure on $U$, and as I said, as a vector space $U$ is just the tensor algebra of $L$ "cropped" at $N$, which is enough for linearizing polynomial maps of degree $\leq N$.)

Now consider the element $s\left(A\right)\in U$.
This element $s\left(A\right)$ is invertible (since $A$ is invertible, so that $A^{\otimes i}$ is invertible for every $i$, and since the multiplication on $U=\bigoplus\limits_{i=0}^N L^{\otimes i}$ is componentwise), and the algebra $U$ is finite-dimensional (although its dimension is usually quite large). Thus, $s\left(A\right)^{-1}$ lies in the $k$-linear span of the set $\left\lbrace 1,s\left(A\right),\left(s\left(A\right)\right)^2,\left(s\left(A\right)\right)^3,...\right\rbrace$ (because if $u$ is an invertible element of some finite-dimensional $k$-algebra, then $u^{-1}$ lies in the $k$-linear span of the set $\left\lbrace 1,u,u^2,u^3,...\right\rbrace$; this is easily proven using the fact that any element of a finite-dimensional $k$-algebra is algebraic over $k$). Since $s$ is a multiplicative map, we have $\left(s\left(A\right)\right)^i=s\left(A^i\right)$ for all $i$, so that this becomes: The element $s\left(A^{-1}\right)$ lies in the $k$-linear span of the set $\left\lbrace s\left(1\right),s\left(A\right),s\left(A^2\right),s\left(A^3\right),...\right\rbrace$.

Since $p$ is a linear map, we can apply $p$ here and obtain: The element $p\left(s\left(A^{-1}\right)\right)$ lies in the $k$-linear span of the set $\left\lbrace p\left(s\left(1\right)\right),p\left(s\left(A\right)\right),p\left(s\left(A^2\right)\right),p\left(s\left(A^3\right)\right),...\right\rbrace$. Now $p\circ s=P$, so this becomes: The element $P\left(A^{-1}\right)$ lies in the $k$-linear span of the set $\left\lbrace P\left(1\right),P\left(A\right),P\left(A^2\right),P\left(A^3\right),...\right\rbrace$. So when $P\left(A^i\right)=0$ for all $i\in\mathbb N$, then $P\left(A^{-1}\right)=0$, qed.

Let $G \subset SL_n(\mathbb C)$ be a Zariski-closed subsemigroup. The map $$\alpha(x,y) := (x,xy)$$ defines an injective self-map of $G \times G$ (seen as algebraic varieties over $\mathbb C$). By the Ax-Grothendieck theorem, this map is bijective and hence an isomorphism.
It is now a standard argument to construct the inverse map for $G$ out of the inverse of $\alpha$.

Is there a way of not using the Ax-Grothendieck theorem or anything like this?
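For completeness, the inversion step in the first answer can be written out in full; this is a routine expansion of the argument there, using only that every element of $S \subseteq SL_n(\mathbb{C})$ is invertible in the ambient group:

```latex
% Expanding the step "gS = S implies S is a group" from the first answer.
\begin{align*}
  g^k S = g^{k+1} S
    &\implies S = gS
      && \text{(multiply on the left by } g^{-k} \in SL_n(\mathbb{C})\text{),} \\
    &\implies \exists\, s \in S:\ gs = g
      && \text{(since } g \in gS\text{, so } s = e \in S\text{),} \\
    &\implies \exists\, s' \in S:\ gs' = e
      && \text{(since } e \in gS\text{, so } s' = g^{-1} \in S\text{).}
\end{align*}
```

Since $g \in S$ was arbitrary, $S$ contains the identity and all inverses, hence is a group.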
On the design of RSA with short secret exponent - in Advances in Cryptology - ASIACRYPT 2000, 2000

Cited by 18 (4 self): Batch verification can provide large computational savings when several signatures, or other constructs, are verified together. Several batch verification algorithms have been published in recent years, in particular for both DSA-type and RSA signatures. We describe new attacks on several of these published schemes. A general weakness is explained which applies to almost all known batch verifiers for discrete logarithm based signature schemes. It is shown how this weakness can be eliminated given extra properties about the underlying group structure. A new general batch verifier for exponentiation in any cyclic group is also described as well as a batch verifier for modified RSA signatures.

, 2002

Cited by 10 (1 self): We present lattice-based attacks on RSA with prime factors p and q of unbalanced size. In our scenario, the factor q is smaller than N^β and the decryption exponent d is small modulo p-1. We introduce two approaches that both use a modular bivariate polynomial equation with a small root. Extracting this root is in both methods equivalent to the factorization of the modulus N = pq.
Applying a method of Coppersmith, one can construct from a bivariate modular equation a bivariate polynomial f(x, y) over Z that has the same small root. In our first method, we prove that one can extract the desired root of f(x, y) in polynomial time. This method works up to β < (3 - √5)/2 ≈ 0.382. Our second method uses a heuristic to find the root. This method improves upon the first one by allowing larger values of d modulo p-1.

- Proceedings of ACISP 2005, Lecture Notes in Computer Science, 2005

Cited by 6 (0 self): We propose a key generation method for RSA moduli which allows the cost of the public operations (encryption/verifying) and the private operations (decryption/signing) to be balanced according to the application requirements. Our method is a generalisation of using small public exponents and small Chinese remainder (CRT) private exponents. Our results are most relevant in the case where the cost of private operations must be optimised. We give methods for which the cost of private operations is the same as the previous fastest methods, but where the public operations are significantly faster. For example, the fastest known (1024 bit) RSA decryption is using small CRT private exponents and moduli which are a product of three primes. In this case we equal the fastest known decryption time and also make the encryption time around 4 times faster. The paper gives an analysis of the security of keys generated by our method, and several new attacks. The ingredients of our analysis include several ideas of Coppersmith and a new technique which exploits linearisation.
We also present a new birthday attack on low Hamming-weight private exponents.

- in Public Key Cryptography - PKC 2005, Lecture Notes in Computer Science, New York

Cited by 5 (1 self): In typical RSA, it is impossible to create a key pair (e, d) such that both are simultaneously much shorter than φ(N). This is because if d is selected first, then e will be of the same order of magnitude as φ(N), and vice versa. At Asiacrypt '99, Sun et al. designed three variants of RSA using prime factors p and q of unbalanced size. The first RSA variant is an attempt to make the private exponent d short below N^0.25 and N^0.292, which are the lower bounds of d for a secure RSA as argued first by Wiener and then by Boneh and Durfee. The second RSA variant is constructed in such a way that both d and e have the same bit-length (1/2)log2 N + 56. The third RSA variant is constructed by such a method that allows a trade-off between the lengths of d and e. Unfortunately, at Asiacrypt 2000, Durfee and Nguyen broke the illustrated instances of the first RSA variant and the third RSA variant by solving small roots to trivariate modular polynomial equations. Moreover, they showed that the instances generated by these three RSA variants with unbalanced p and q in fact become more insecure than those instances, having the same sizes of exponents as the former, in RSA with balanced p and q. In this paper, we focus on designing a new RSA variant with balanced d and e, and balanced p and q, in order to make such an RSA variant more secure. Moreover, we also extend this variant to another RSA variant which allows a trade-off between the lengths of d and e.
Based on our RSA variants, an application to entity authentication for defending the stolen-secret attack is presented.

, 2008

LSBS-RSA denotes an RSA system with modulus primes, p and q, sharing a large number of least significant bits. In ISC 2007, Zhao and Qi analyzed the security of short exponent LSBS-RSA. They claimed that short exponent LSBS-RSA is much more vulnerable to the lattice attack than the standard RSA. In this paper, we point out that there exist some errors in the calculation of Zhao and Qi's attack. After re-calculating, the result shows that their attack is unable to attack RSA with primes sharing bits. Consequently, we give a revised version to make their attack feasible. We also propose a new method to further extend the security boundary, compared with the revised version. The proposed attack also supports the result of analogue Fermat factoring on LSBS-RSA, which claims that p and q cannot share more than n/4 least significant bits, where n is the bit-length of pq. In conclusion, it is a trade-off between the number of sharing bits and the security level in LSBS-RSA. One should be more careful when using LSBS-RSA with short exponents.

Keywords: RSA, least significant bits (LSBs), LSBS-RSA, short exponent attack, lattice reduction technique, the Boneh-Durfee attack.
In 2000, Boneh and Durfee extended the bound for the low private exponent from 0.25 (provided by Wiener) to 0.292, with the public exponent size the same as the modulus size. They used the powerful lattice reduction algorithm (LLL) with Coppersmith's theory of polynomials. In this paper we generalize their attack to arbitrary public exponents.

Lattice basis reduction algorithms have contributed a lot to the cryptanalysis of the RSA cryptosystem. With Coppersmith's theory of polynomials, these algorithms search for the weak instances of number-theoretic cryptography, mainly RSA. In this paper we present several lattice-based attacks on the low private exponent of RSA.

We present lattice-based attacks on RSA with prime factors p and q of unbalanced size. In our scenario, the factor q is smaller than N^β and the decryption exponent d is small modulo p − 1. We introduce two approaches that both use a modular bivariate polynomial equation with a small root. Extracting this root is in both methods equivalent to the factorization of the modulus N = pq. Applying a method of Coppersmith, one can construct from a bivariate modular equation a bivariate polynomial f(x, y) over Z that has the same small root. In our first method, we prove that one can extract the desired root of f(x, y) in polynomial time.
This method works up to β < (3 − √5)/2 ≈ 0.382. Our second method uses a heuristic to find the root. This method improves upon the first one by allowing larger values of d modulo p − 1, provided that β ≤ 0.23.

The increasing demand for cloud applications has led to an ever-growing need for security mechanisms. Cloud computing is a technique to leverage distributed computing resources one does not own, accessed over the internet in an on-demand, pay-per-use strategy. A user can access cloud services as a utility service and begin to use them almost instantly. These features, which make cloud computing so flexible, together with the fact that services are accessible anywhere, anytime, lead to several potential risks. The most serious concerns are the possibility of a lack of confidentiality, integrity and authentication among the cloud users and service providers. The key intent of this research work is to investigate the existing security schemes and to ensure data confidentiality, integrity and authentication. In our model, symmetric and asymmetric cryptographic algorithms are adopted for the optimization of data security in cloud computing. These days, encryption techniques which use large keys (RSA and other schemes based on exponentiation of integers) are seldom used for data encryption due to computational overhead. Their usage is restricted to the transport of keys for symmetric-key encryption and to signature schemes where the data size is generally small. Public Key Cryptography with Matrices is a three-stage secured algorithm.
We generate a system of non-homogeneous linear equations and, using this system, we describe algorithms for key agreement and public encryption whose security is based on solving a system of equations over the ring of integers, which is among the NP-complete problems.

Keywords: cryptography, encryption, decryption
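Several of the abstracts above start from Wiener's continued-fraction attack on small private exponents (roughly d < N^0.25/3), which Boneh and Durfee later pushed to d < N^0.292 with lattice methods. The lattice attacks are involved, but Wiener's original attack is short enough to sketch. This is a generic illustration, not code from any of the cited papers; the toy key below is a standard textbook example:

```python
from math import isqrt

def wiener_attack(e, N):
    """Try to recover a small RSA private exponent d from (e, N).

    Works when d is small enough that k/d (with e*d = k*phi(N) + 1)
    appears among the continued-fraction convergents of e/N.
    Returns (d, p, q) on success, None otherwise.
    """
    # Continued-fraction expansion of e/N.
    a, b, quotients = e, N, []
    while b:
        quotients.append(a // b)
        a, b = b, a % b
    # Walk the convergents k/d of e/N.
    num0, num1 = 0, 1   # convergent numerators   p_{-2}, p_{-1}
    den0, den1 = 1, 0   # convergent denominators q_{-2}, q_{-1}
    for q in quotients:
        num0, num1 = num1, q * num1 + num0
        den0, den1 = den1, q * den1 + den0
        k, d = num1, den1
        if k == 0 or (e * d - 1) % k:
            continue
        phi = (e * d - 1) // k
        # If phi is right, p and q are roots of x^2 - (N - phi + 1)x + N.
        s = N - phi + 1
        disc = s * s - 4 * N
        if disc >= 0:
            r = isqrt(disc)
            if r * r == disc and (s + r) % 2 == 0:
                return d, (s + r) // 2, (s - r) // 2
    return None
```

With the classic toy parameters N = 90581, e = 17993 this recovers d = 5 and the factors 379 and 239; the Boneh-Durfee improvement to 0.292 replaces the convergent search with a lattice small-roots computation.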
Innovation: A Better Way
Monitoring the Ionosphere with Integer-Leveled GPS Measurements
By Simon Banville, Wei Zhang, and Richard B. Langley

IT'S NOT JUST FOR POSITIONING, NAVIGATION, AND TIMING. Many people do not realize that GPS is being used in a variety of ways in addition to those of its primary mandate, which is to provide accurate position, velocity, and time information. The radio signals from the GPS satellites must traverse the Earth's atmosphere on their way to receivers on or near the Earth's surface. The signals interact with the atoms, molecules, and charged particles that make up the atmosphere, and the process slightly modifies the signals. It is these modified or perturbed signals that a receiver actually processes. And should a signal be reflected or diffracted by some object in the vicinity of the receiver's antenna, the signal is further perturbed, a phenomenon we call multipath. Now, these perturbations are a bit of a nuisance for conventional users of GPS. The atmospheric effects, if uncorrected, reduce the accuracy of the positions, velocities, and time information derived from the signals. However, GPS receivers have correction algorithms in their microprocessor firmware that attempt to correct for the effects. Multipath, on the other hand, is difficult to model although the use of sophisticated antennas and advanced receiver technologies can minimize its effect. But there are some GPS users who welcome the multipath or atmospheric effects in the signals. By analyzing the fluctuations in signal-to-noise-ratio due to multipath, the characteristics of the reflector can be deduced. If the reflector is the ground, then the amount of moisture in the soil can be measured. And, in wintery climes, changes in snow depth can be tracked from the multipath in GPS signals.
The atmospheric effects perturbing GPS signals can be separated into those that are generated in the lower part of the atmosphere, mostly in the troposphere, and those generated in the upper, ionized part of the atmosphere — the ionosphere. Meteorologists are able to extract information on water vapor content in the troposphere and stratosphere from the measurements made by GPS receivers and regularly use the data from networks of ground-based continuously operating receivers and those operating on some Earth-orbiting satellites to improve weather forecasts. And, thanks to its dispersive nature, the ionosphere can be studied by suitably combining the measurements made on the two legacy frequencies transmitted by all GPS satellites. Ground-based receiver networks can be used to map the electron content of the ionosphere, while Earth-orbiting receivers can profile electron density. Even small variations in the distribution of ionospheric electrons caused by earthquakes; tsunamis; and volcanic, meteorite, and nuclear explosions can be detected using GPS. In this month’s column, I am joined by two of my graduate students, who report on an advance in the signal processing procedure for better monitoring of the ionosphere, potentially allowing scientists to get an even better handle on what’s going on above our heads. Representation and forecast of the electron content within the ionosphere is now routinely accomplished using GPS measurements. The global distribution of permanent ground-based GPS tracking stations can effectively monitor the evolution of electron structures within the ionosphere, serving a multitude of purposes including satellite-based communication and navigation. It has been recognized early on that GPS measurements could provide an accurate estimate of the total electron content (TEC) along a satellite-receiver path. 
However, because of their inherent nature, phase observations are biased by an unknown integer number of cycles and do not provide an absolute value of TEC. Code measurements (pseudoranges), although they are not ambiguous, also contain frequency-dependent biases, which again prevent a direct determination of TEC. The main advantage of code over phase is that the biases are satellite- and receiver-dependent, rather than arc-dependent. For this reason, the GPS community initially adopted, as a common practice, fitting the accurate TEC variation provided by phase measurements to the noisy code measurements, therefore removing the arc-dependent biases. Several variations of this process were developed over the years, such as phase leveling, code smoothing, and weighted carrier-phase leveling (see Further Reading for background literature). The main challenge at this point is to separate the code inter-frequency biases (IFBs) from the line-of-sight TEC. Since both terms are linearly dependent, a mathematical representation of the TEC is usually required to obtain an estimate of each quantity. Misspecifications in the model and mapping functions were found to contribute significantly to errors in the IFB estimation, suggesting that this process would be better performed during nighttime when few ionospheric gradients are present. IFB estimation has been an ongoing research topic for the past two decades and still remains an issue for accurate TEC determination. A particular concern with IFBs is the common assumption regarding their stability. It is often assumed that receiver IFBs are constant during the course of a day and that satellite IFBs are constant for a duration of a month or more. Studies have clearly demonstrated that intra-day variations of receiver instrumental biases exist, which could possibly be related to temperature effects.
This assumption was shown to possibly introduce errors exceeding 5 TEC units (TECU) in the leveling process, where 1 TECU corresponds to 0.162 meters of code delay or carrier advance at the GPS L1 frequency (1575.42 MHz). To overcome this limitation, one could look into using solely phase measurements in the TEC estimation process, and explicitly deal with the arc-dependent ambiguities. The main advantage of such a strategy is to avoid code-induced errors, but a larger number of parameters needs to be estimated, thereby weakening the strength of the adjustment. A comparison of the phase-only (arc-dependent) and phase-leveled (satellite-dependent) models showed that no model performs consistently better. It was found that the satellite-dependent model performs better at low latitudes since the additional ambiguity parameters in the arc-dependent model can absorb some ionospheric features (such as gradients). On the other hand, when the mathematical representation of the ionosphere is realistic, the leveling errors may more significantly impact the accuracy of the approach. The advent of precise point positioning (PPP) opened the door to new possibilities for slant TEC (STEC) determination. Indeed, PPP can be used to estimate undifferenced carrier-phase ambiguity parameters on L1 and L2, which can then be used to remove the ambiguous characteristics of the carrier-phase observations. To obtain undifferenced ambiguities free from ionospheric effects, researchers have either used the widelane/ionosphere-free (IF) combinations, or the Group and Phase Ionospheric Calibration (GRAPHIC) combinations. One critical problem with such approaches is that code biases propagate into the estimated ambiguity parameters. Therefore, the resulting TEC estimates are still biased by unknown quantities, and might suffer from the unstable datum provided by the code biases.
Therefore, the resulting TEC estimates are still biased by unknown quantities, and might suffer from the unstable datum provided by the The recent emergence of ambiguity resolution in PPP presented sophisticated means of handling instrumental biases to estimate integer ambiguity parameters. One such technique is the decoupled-clock method, which considers different clock parameters for the carrier-phase and code measurements. In this article, we present an “integer-leveling” method, based on the decoupled-clock model, which uses integer carrier-phase ambiguities obtained through PPP to level the carrier-phase observations. Standard Leveling Procedure This section briefly reviews the basic GPS functional model, as well as the observables usually used in ionospheric studies. A common leveling procedure is also presented, since it will serve as a basis for assessing the performance of our new method. Ionospheric Observables. The standard GPS functional model of dual-frequency carrier-phase and code observations can be expressed as: where Φ[i] ^j is the carrier-phase measurement to satellite j on the L[i] link and, similarly, P[i] ^j is the code measurement on L[i]. The term is the biased ionosphere-free range between the satellite and receiver, which can be decomposed as: The instantaneous geometric range between the satellite and receiver antenna phase centers is ρ ^j. The receiver and satellite clock errors, respectively expressed as dT and dt^j, are expressed here in units of meters. The term T^j stands for the tropospheric delay, while the ionospheric delay on L1 is represented by I ^j and is scaled by the frequency-dependent constant μ for L2, where . The biased carrier-phase ambiguities are symbolized by and are scaled by their respective wavelengths (λ^i). The ambiguities can be explicitly written as: where N[i ]^j is the integer ambiguity, b[i] is a receiver-dependent bias, and b[i ]^j is a satellite-dependent bias. 
Similarly, B_i and B_i^j are instrumental biases associated with code measurements. Finally, ε contains unmodeled quantities such as noise and multipath, specific to each observable. The overbar symbol indicates biased quantities.

In ionospheric studies, the geometry-free (GF) signal combinations are formed to virtually eliminate non-dispersive terms and thus provide a better handle on the quantity of interest: where IFB_r and IFB^j represent the code inter-frequency biases for the receiver and satellite, respectively. They are also commonly referred to as differential code biases (DCBs). Note that the noise terms (ε) are neglected in these equations for the sake of simplicity.

Weighted-Leveling Procedure. As pointed out in the introduction, the ionospheric observables of Equations (7) and (8) do not provide an absolute level of ionospheric delay due to instrumental biases contained in the measurements. Assuming that these biases do not vary significantly in time, the difference between the phase and code observations for a particular satellite pass should be a constant value (provided that no cycle slip occurred in the phase measurements). The leveling process consists of removing this constant from each geometry-free phase observation in a satellite-receiver arc: where the summation is performed over all observations forming the arc. An elevation-angle-dependent weight (w) can also be applied to minimize the noise and multipath contribution of measurements made at low elevation angles. The double-bar symbol indicates leveled observations.

Integer-Leveling Procedure

The procedure of fitting a carrier-phase arc to code observations might introduce errors caused by code noise, multipath, or intra-day code-bias variations. Hence, developing a leveling approach that relies solely on carrier-phase observations is highly desirable. Such an approach is now possible with the recent developments in PPP, allowing for ambiguity resolution on undifferenced observations.
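The weighted-leveling step described above can be sketched as follows. The elevation-dependent weight function is an assumption (the article does not specify one); sin²(e) is used here purely for illustration:

```python
import numpy as np

def level_arc(gf_phase, gf_code, elev_deg):
    """Standard (weighted) leveling of one phase-connected arc.

    The weighted mean phase-minus-code offset over the arc is removed
    from the geometry-free phase, so the leveled phase sits at the code
    level while keeping the low noise of the carrier.  Low-elevation
    observations are down-weighted (sin^2(e) weights are an assumption).
    """
    w = np.sin(np.radians(elev_deg)) ** 2
    offset = np.sum(w * (gf_phase - gf_code)) / np.sum(w)
    return gf_phase - offset
```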
This procedure has gained significant momentum in the past few years, with several organizations generating “integer clocks” or fractional offset corrections for recovering the integer nature of the undifferenced ambiguities. Among those organizations are, in alphabetical order, the Centre National d’Études Spatiales; GeoForschungsZentrum; GPS Solutions, Inc.; Jet Propulsion Laboratory; Natural Resources Canada (NRCan); and Trimble Navigation. With ongoing research to improve convergence time, it would be no surprise if PPP with ambiguity resolution were to become the de facto methodology for processing data on a station-by-station basis. The results presented in this article are based on the products generated at NRCan, referred to as “decoupled clocks.”

The idea behind integer leveling is to introduce integer ambiguity parameters on L1 and L2, obtained through PPP processing, into the geometry-free linear combination of Equation (7). The resulting integer-leveled observations, in units of meters, can then be expressed in terms of the ambiguities obtained from the PPP solution, which should be, preferably, integer values. Since those ambiguities are obtained with respect to a somewhat arbitrary ambiguity datum, they do not allow instant recovery of an unbiased slant ionospheric delay. This fact was highlighted in Equation (10), which indicates that, even though the arc-dependency was removed from the geometry-free combination, there are still receiver- and satellite-dependent biases (b_r and b^j, respectively) remaining in the integer-leveled observations. The latter are thus very similar in nature to the standard-leveled observations, in the sense that the biases b_r and b^j replace the well-known IFBs. As a consequence, integer-leveled observations can be used with any existing software used for the generation of TEC maps. The motivation behind using integer-leveled observations is the mitigation of leveling errors, as explained in the next sections.
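A minimal sketch of the integer-leveling idea, assuming the usual sign conventions for the geometry-free combination; the function and variable names are hypothetical, and the receiver/satellite phase biases b_r and b^j naturally remain in the output, as the text explains:

```python
# Speed of light and GPS L1/L2 frequencies (Hz)
C = 299792458.0
f1, f2 = 1575.42e6, 1227.60e6
lam1, lam2 = C / f1, C / f2   # carrier wavelengths, m

def integer_level(phi1, phi2, N1, N2):
    """Integer leveling: subtract the wavelength-scaled PPP ambiguities
    from the geometry-free phase combination (sign convention assumed).
    With unbiased integer ambiguities the result is (mu - 1) * I plus the
    remaining receiver/satellite phase biases."""
    return (phi1 - phi2) - (lam1 * N1 - lam2 * N2)
```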
Slant TEC Evaluation

As a first step towards assessing the performance of integer-leveled observations, STEC values are derived on a station-by-station basis. The slant ionospheric delays are then compared for a pair of co-located receivers, as well as with global ionospheric maps (GIMs) produced by the International GNSS Service (IGS).

Leveling Error Analysis. Relative leveling errors between two co-located stations can be obtained by computing between-station differences of leveled observations: where subscripts A and B identify the stations involved, and ε_l is the leveling error. Since the distance between stations is short (within 100 meters, say), the ionospheric delays will cancel, and so will the satellite biases (b^j), which are common to both stations. The remaining quantities will be the (presumably constant) receiver biases and any leveling errors. Since there are no satellite-dependent quantities in Equation (11), the differenced observations obtained should be identical for all satellites observed, provided that there are no leveling errors. The same principles apply to observations leveled using other techniques discussed in the introduction. Hence, Equation (11) allows comparison of the performance of various leveling approaches.

This methodology has been applied to a baseline approximately a couple of meters in length between stations WTZJ and WTZZ, in Wettzell, Germany. The observations of both stations from March 2, 2008, were leveled using a standard leveling approach, as well as the method described in this article. Relative leveling errors computed using Equation (11) are displayed in Figure 1, where each color represents a different satellite. It is clear that code noise and multipath do not necessarily average out over the course of an arc, leading to leveling errors sometimes exceeding a couple of TECU for the standard leveling approach (see panel (a)).
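The between-station check of Equation (11) can be sketched as below; after removing the constant receiver-bias difference, any per-satellite structure left in the residuals exposes leveling errors (a hedged illustration, not the authors' code):

```python
import numpy as np

def relative_leveling_error(leveled_A, leveled_B):
    """Difference leveled observations from two co-located stations.

    Over a short baseline the ionospheric delays and satellite biases
    cancel in the difference; removing the remaining (constant) receiver
    bias leaves residuals that should be zero for every satellite if no
    leveling error is present."""
    d = np.asarray(leveled_A) - np.asarray(leveled_B)
    return d - np.mean(d)     # remove the common receiver-bias difference
```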
On the other hand, integer-leveled observations agree fairly well between stations, with leveling errors mostly eliminated. In one instance, at the beginning of the session, ambiguity resolution failed at both stations for satellite PRN 18, leading to a relative error of roughly 1.5 TECU. Still, the advantages associated with integer leveling should be obvious, since the relative error of the standard approach is in the vicinity of -6 TECU for this satellite. The magnitude of the leveling errors obtained for the standard approach agrees fairly well with previous studies (see Further Reading). When intra-day variations of the receiver IFBs occur, even more significant biases were found to contaminate standard-leveled observations. Since the decoupled-clock model used for ambiguity resolution explicitly accounts for possible variations of any equipment delays, the estimated ambiguities are not affected by such effects, leading to improved leveled observations.

STEC Comparisons. Once leveled observations are available, the next step consists of separating STEC from instrumental delays. This task can be accomplished on a station-by-station basis using, for example, the single-layer ionospheric model. Replacing the slant ionospheric delays (I^j) in Equation (10) by a bilinear polynomial expansion of VTEC leads to: where M(e) is the single-layer mapping function (or obliquity factor) depending on the elevation angle (e) of the satellite. The time-dependent coefficients a_0, a_1, and a_2 determine the mathematical representation of the VTEC above the station. Gradients are modeled using Δλ, the difference between the longitude of the ionospheric pierce point and the longitude of the mean sun, and Δϕ, the difference between the geomagnetic latitude of the ionospheric pierce point and the geomagnetic latitude of the station. The estimation procedure described by Attila Komjathy (see Further Reading) is followed in all subsequent tests.
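A sketch of the single-layer machinery just described, using the 450-kilometer shell height given in the text; the thin-shell obliquity factor below is the standard form, which may differ in detail from the exact mapping function the authors used:

```python
import numpy as np

R_E = 6371.0   # mean Earth radius, km
H   = 450.0    # single-layer shell height used in the article, km

def mapping_function(elev_deg):
    """Standard thin-shell obliquity factor M(e): the ratio of slant to
    vertical path length through the single-layer model."""
    z = np.radians(90.0 - elev_deg)              # zenith angle at the receiver
    sin_zp = R_E / (R_E + H) * np.sin(z)         # sine of zenith angle at shell
    return 1.0 / np.sqrt(1.0 - sin_zp**2)

def vtec_bilinear(a0, a1, a2, dlon, dlat):
    """Bilinear VTEC expansion above the station (coefficients a0, a1, a2;
    dlon and dlat are the pierce-point offsets described in the text)."""
    return a0 + a1 * dlon + a2 * dlat
```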
An elevation-angle cutoff of 10 degrees was applied, and the shell height used was 450 kilometers. Since it is not possible to obtain absolute values for the satellite and receiver biases, the sum of all satellite biases was constrained to a value of zero. As a consequence, all estimated biases will contain a common (unknown) offset. STEC values, in TECU, can then be computed as: where the hat symbol denotes estimated quantities, and one of the bias terms is equal to zero (that is, it is not estimated) when biases are obtained on a station-by-station basis. The frequency, f_1, is expressed in Hz. The numerical constant 40.3, determined from values of fundamental physical constants, is sufficiently precise for our purposes, but is a rounding of the more precise value of 40.308.

While integer-leveled observations from co-located stations show good agreement, an external TEC source is required to make sure that both stations are not affected by common errors. For this purpose, Figure 2 compares STEC values computed from GIMs produced by the IGS and STEC values derived from station WTZJ using both standard- and integer-leveled observations. The IGS claims root-mean-square errors on the order of 2-8 TECU for vertical TEC, although the ionosphere was quiet on the day selected, meaning that errors at the low end of that range are expected. Errors associated with the mapping function will further contribute to differences in STEC values. As apparent from Figure 2, no significant bias can be identified in integer-leveled observations. On the other hand, negative STEC values (not displayed in Figure 2) were obtained during nighttime when using standard-leveled observations, a clear indication that leveling errors contaminated the STEC values.

Evaluation in the Positioning Domain. Validation of slant ionospheric delays can also be performed in the positioning domain.
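Equation (13) presumably converts the bias-corrected L1 slant delay (in meters) to TEC units using the constants quoted in the text; a hedged sketch (names are illustrative):

```python
F1 = 1575.42e6   # GPS L1 frequency, Hz (the f_1 of the text)

def stec_tecu(I1_hat_m, b_r_hat=0.0, b_sat_hat=0.0):
    """Sketch of the STEC conversion: an estimated L1 slant delay (m),
    corrected for the estimated receiver and satellite biases, mapped to
    TEC units (1 TECU = 1e16 electrons/m^2) via I = 40.3 * TEC / f^2."""
    return (I1_hat_m - b_r_hat - b_sat_hat) * F1**2 / 40.3 / 1e16
```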
For this purpose, a station’s coordinates are estimated in static mode (that is, one set of coordinates estimated per session) using (unsmoothed) single-frequency code observations with precise orbit and clock corrections from the IGS and various ionosphere-correction sources. Figure 3 illustrates the convergence of the 3D position error for station WTZZ, using STEC corrections from the three sources introduced previously, namely: 1) GIMs from the IGS, 2) STEC values from station WTZJ derived from standard leveling, and 3) STEC values from station WTZJ derived from integer leveling. The reference coordinates were obtained from static processing based on dual-frequency carrier-phase and code observations. The benefits of the integer-leveled corrections are obvious, with the solution converging to better than 10 centimeters. Even though the distance between the stations is short, using standard-leveled observations from WTZJ leads to a biased solution as a result of arc-dependent leveling errors. Using a TEC map from the IGS provides a decent solution considering that it is a global model, although the solution is again biased.

This station-level analysis allowed us to confirm that integer-leveled observations can seemingly eliminate leveling errors, provided that carrier-phase ambiguities are fixed to proper integer values. Furthermore, it is possible to retrieve unbiased STEC values from those observations by using common techniques for isolating instrumental delays. The next step consisted of examining the impacts of reducing leveling errors on VTEC.

VTEC Evaluation

When using the single-layer ionospheric model, vertical TEC values can be derived from the STEC values of Equation (13) using VTEC = STEC / M(e). Dividing STEC by the mapping function will also reduce any bias caused by the leveling procedure. Hence, measures of VTEC made from a satellite at a low elevation angle will be less impacted by leveling errors.
When the satellite reaches the zenith, any bias in the observation will fully propagate into the computed VTEC values. On the other hand, the uncertainty of the mapping function is larger at low elevation angles, which should be kept in mind when analyzing the results.

Using data from a small regional network allows us to assess the compatibility of the VTEC quantities between stations. For this purpose, GPS data collected as a part of the Western Canada Deformation Array (WCDA) network, again from March 2, 2008, were used. The stations of this network, located on and near Vancouver Island in Canada, are indicated in Figure 4. Following the model of Equation (12), all stations were integrated into a single adjustment to estimate receiver and satellite biases as well as a triplet of time-varying coefficients for each station. STEC values were then computed using Equation (13), and VTEC values were finally derived from Equation (14). This procedure was again implemented for both standard- and integer-leveled observations. To facilitate the comparison of VTEC values spanning a whole day and to account for ionospheric gradients, differences with respect to the IGS GIM were computed. The results, plotted by elevation angle, are displayed in Figure 5 for all seven stations processed (all satellite arcs from the same station are plotted using the same color). The overall agreement between the global model and the station-derived VTECs is fairly good, with a bias of about 1 TECU. Still, the top panel demonstrates that, at high elevation angles, discrepancies between VTEC values derived from standard-leveled observations and the ones obtained from the model have a spread of nearly 6 TECU. With integer-leveled observations (see bottom panel), this spread is reduced to approximately 2 TECU.
It is important to realize that the dispersion can be explained by several factors, such as remaining leveling errors, inexact receiver and satellite bias estimates, and inaccuracies of the global model. It is nonetheless expected that leveling errors account for the most significant part of this error for standard-leveled observations. For satellites observed at a lower elevation angle, the spread between arcs is similar for both methods (except for station UCLU in panel (a), for which the estimated station IFB parameter looks significantly biased). As stated previously, the reason is that leveling errors are reduced when divided by the mapping function. The latter also introduces further errors in the comparisons, which explains why a wider spread should typically be associated with low-elevation-angle satellites. Nevertheless, it should be clear from Figure 5 that integer-leveled observations offer a better consistency than standard-leveled observations.

The technique of integer leveling consists of introducing (preferably) integer ambiguity parameters obtained from PPP into the geometry-free combination of observations. This process removes the arc dependency of the signals, and allows integer-leveled observations to be used with any existing TEC estimation software. While leveling errors of a few TECU exist with current procedures, this type of error can be eliminated through use of our procedure, provided that carrier-phase ambiguities are fixed to the proper integer values. As a consequence, STEC values derived from nearby stations are typically more consistent with each other. Unfortunately, subsequent steps involved in generating VTEC maps, such as transforming STEC to VTEC and interpolating VTEC values between stations, attenuate the benefits of using integer-leveled observations. There are still ongoing challenges associated with the GIM-generation process, particularly in terms of latency and three-dimensional modeling.
Since ambiguity resolution in PPP can be achieved in real time, we believe that integer-leveled observations could benefit near-real-time ionosphere monitoring. Since ambiguity parameters are constant for a satellite pass (provided that there are no cycle slips), integer ambiguity values (that is, the leveling information) can be carried over from one map-generation process to the next. Therefore, this methodology could reduce leveling errors associated with short arcs, for instance. Another prospective benefit of integer-leveled observations is the reduction of leveling errors contaminating data from low-Earth-orbit (LEO) satellites, which is of particular importance for three-dimensional TEC modeling. Due to their low orbits, LEO satellites typically track a GPS satellite for a short period of time. As a consequence, those short arcs do not allow code noise and multipath to average out, potentially leading to important leveling errors. On the other hand, undifferenced ambiguity fixing for LEO satellites has already been demonstrated, and could be a viable solution to this problem. Evidently, more research needs to be conducted to fully assess the benefits of integer-leveled observations. Still, we think that the results shown herein are encouraging and offer potential solutions to current challenges associated with ionosphere monitoring.

We would like to acknowledge the help of Paul Collins from NRCan in producing Figure 4 and the financial contribution of the Natural Sciences and Engineering Research Council of Canada in supporting the second and third authors. This article is based on two conference papers: “Defining the Basis of an ‘Integer-Levelling’ Procedure for Estimating Slant Total Electron Content” presented at ION GNSS 2011 and “Ionospheric Monitoring Using ‘Integer-Levelled’ Observations” presented at ION GNSS 2012.
ION GNSS 2011 and 2012 were the 24th and 25th International Technical Meetings of the Satellite Division of The Institute of Navigation, respectively. ION GNSS 2011 was held in Portland, Oregon, September 19–23, 2011, while ION GNSS 2012 was held in Nashville, Tennessee, September 17–21, 2012.

SIMON BANVILLE is a Ph.D. candidate in the Department of Geodesy and Geomatics Engineering at the University of New Brunswick (UNB) under the supervision of Dr. Richard B. Langley. His research topic is the detection and correction of cycle slips in GNSS observations. He also works for Natural Resources Canada on real-time precise point positioning and ambiguity resolution.

WEI ZHANG received his M.Sc. degree (2009) in space science from the School of Earth and Space Science of Peking University, China. He is currently an M.Sc.E. student in the Department of Geodesy and Geomatics Engineering at UNB under the supervision of Dr. Langley. His research topic is the assessment of three-dimensional regional ionosphere tomographic models using GNSS measurements.

Further Reading

• Authors’ Conference Papers

“Defining the Basis of an ‘Integer-Levelling’ Procedure for Estimating Slant Total Electron Content” by S. Banville and R.B. Langley in Proceedings of ION GNSS 2011, the 24th International Technical Meeting of the Satellite Division of The Institute of Navigation, Portland, Oregon, September 19–23, 2011, pp. 2542–2551.

“Ionospheric Monitoring Using ‘Integer-Levelled’ Observations” by S. Banville, W. Zhang, R. Ghoddousi-Fard, and R.B. Langley in Proceedings of ION GNSS 2012, the 25th International Technical Meeting of the Satellite Division of The Institute of Navigation, Nashville, Tennessee, September 17–21, 2012, pp. 3753–3761.

• Errors in GPS-Derived Slant Total Electron Content

“GPS Slant Total Electron Content Accuracy Using the Single Layer Model Under Different Geomagnetic Regions and Ionospheric Conditions” by C. Brunini and F.J. Azpilicueta in Journal of Geodesy, Vol. 84, No. 5, pp.
293–304, 2010, doi: 10.1007/s00190-010-0367-5.

“Calibration Errors on Experimental Slant Total Electron Content (TEC) Determined with GPS” by L. Ciraolo, F. Azpilicueta, C. Brunini, A. Meza, and S.M. Radicella in Journal of Geodesy, Vol. 81, No. 2, pp. 111–120, 2007, doi: 10.1007/s00190-006-0093-1.

• Global Ionospheric Maps

“The IGS VTEC Maps: A Reliable Source of Ionospheric Information Since 1998” by M. Hernández-Pajares, J.M. Juan, J. Sanz, R. Orus, A. Garcia-Rigo, J. Feltens, A. Komjathy, S.C. Schaer, and A. Krankowski in Journal of Geodesy, Vol. 83, No. 3–4, 2009, pp. 263–275, doi: 10.1007/s00190-008-0266-1.

• Ionospheric Effects on GNSS

“GNSS and the Ionosphere: What’s in Store for the Next Solar Maximum” by A.B.O. Jensen and C. Mitchell in GPS World, Vol. 22, No. 2, February 2011, pp. 40–48.

“Space Weather: Monitoring the Ionosphere with GPS” by A. Coster, J. Foster, and P. Erickson in GPS World, Vol. 14, No. 5, May 2003, pp. 42–49.

“GPS, the Ionosphere, and the Solar Maximum” by R.B. Langley in GPS World, Vol. 11, No. 7, July 2000, pp. 44–49.

Global Ionospheric Total Electron Content Mapping Using the Global Positioning System by A. Komjathy, Ph.D. dissertation, Technical Report No. 188, Department of Geodesy and Geomatics Engineering, University of New Brunswick, Fredericton, New Brunswick, Canada, 1997.

• Decoupled Clock Model

“Undifferenced GPS Ambiguity Resolution Using the Decoupled Clock Model and Ambiguity Datum Fixing” by P. Collins, S. Bisnath, F. Lahaye, and P. Héroux in Navigation: Journal of The Institute of Navigation, Vol. 57, No. 2, Summer 2010, pp. 123–135.

Richard B. Langley is a professor in the Department of Geodesy and Geomatics Engineering at the University of New Brunswick (UNB) in Fredericton, Canada, where he has been teaching and conducting research since 1981. He has a B.Sc. in applied physics from the University of Waterloo and a Ph.D. in experimental space science from York University, Toronto.
He spent two years at MIT as a postdoctoral fellow, researching geodetic applications of lunar laser ranging and VLBI. For work in VLBI, he shared two NASA Group Achievement Awards. Professor Langley has worked extensively with the Global Positioning System. He has been active in the development of GPS error models since the early 1980s and is a co-author of the venerable “Guide to GPS Positioning” and a columnist and contributing editor of GPS World magazine. His research team is currently working on a number of GPS-related projects, including the study of atmospheric effects on wide-area augmentation systems, the adaptation of techniques for spaceborne GPS, and the development of GPS-based systems for machine control and deformation monitoring. Professor Langley is a collaborator in UNB’s Canadian High Arctic Ionospheric Network project and is the principal investigator for the GPS instrument on the Canadian CASSIOPE research satellite now in orbit. Professor Langley is a fellow of The Institute of Navigation (ION), the Royal Institute of Navigation, and the International Association of Geodesy. He shared the ION 2003 Burka Award with Don Kim and received the ION’s Johannes Kepler Award in
{"url":"http://gpsworld.com/innovation-a-better-way-monitoring-the-ionosphere-with-integer-leveled-gps-measurements/","timestamp":"2014-04-18T23:43:54Z","content_type":null,"content_length":"99806","record_id":"<urn:uuid:ac9df36d-0746-489e-a29e-06a5fa29a365>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
In general: The odds against winning are found by calculating the ratio of unfavourable chances to favourable chances. Let A be the event of winning (i.e., A' is the event of losing). The odds against winning (i.e., the odds the horse will lose) are 9:1. We usually say the 'odds on' to win are 9:1.

Bookmakers assign odds against winning events in sport, horse racing, greyhound racing etc. So, if a bookmaker assigns odds of 9:1 without allowing for a profit margin, then a competitor is given 9 chances of losing and one chance of winning out of 10 chances. That is, the probability of winning is 1/10 and the probability of losing is 9/10.

Note that if a gambler places a $1 bet with a bookmaker at 9:1 odds on to win and his horse wins, the gambler will win $9 plus the $1 bet, obtaining $10; otherwise the gambler will lose $1. If a $5 bet is placed with the bookmaker at 9:1 and his horse wins, the gambler will win $45 plus the $5 bet, obtaining $50; otherwise the gambler will lose $5.

Past performance, the recent form of competitors, the expected weather conditions for the event, the jockey and other relevant factors are taken into consideration to determine the odds before the occurrence of the event. As gamblers place their bets, bookmakers adjust the odds in order to minimise the amounts paid out and thus maximise their profit, i.e. the bookmakers alter the odds so that they make a profit. The bookmakers' odds are decided on subjective estimates of probabilities biased in their favour.

Example 8
What are the odds in favour of throwing a 1 with a die?
There is 1 chance of throwing a 1 to 5 chances of not throwing a 1, so the odds in favour are 1:5.

Example 9
If there is a 30% chance of winning, find the odds against winning.
The chance of winning is 3/10 and the chance of losing is 7/10, so the odds against winning are 7:3.

Example 10
If the odds against winning a horse race are 2:1, find the probability of winning the race.
There are 2 chances of losing to every 1 chance of winning, so the probability of winning is 1/3.

Example 11
If the odds in favour of winning a race are 3:5, find the probability of winning the race.
Odds in favour of winning a race of 3:5 mean 3 chances of winning to every 5 chances of losing. So, if the race were held 8 times, one would be expected to win 3 races and lose 5 races; the probability of winning is therefore 3/8.

Example 12
I bet $4 on a horse race. How much would I receive from the bookmaker if a win results with the following odds on to win?
a. evens: I would win $4 plus the $4 bet, receiving $8.
b. 3:1: I would win $12 plus the $4 bet, receiving $16.
c. 5:2: I would win $10 plus the $4 bet, receiving $14.

Example 13
If there is a 0% chance of winning, can we find the odds against winning?
The odds against winning would be the ratio of a 100% chance of losing to a 0% chance of winning, which would require division by zero. So, we cannot find the odds against winning.

Key Terms
odds, odds against winning, odds on to win
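The worked examples above all rely on a couple of simple conversions between odds and probabilities, which can be sketched as follows (function names are illustrative):

```python
from fractions import Fraction

def prob_from_odds_against(lose, win):
    """Odds against of lose:win  ->  probability of winning."""
    return Fraction(win, win + lose)

def odds_against_from_prob(p):
    """Probability of winning -> odds against, as a single ratio.
    Undefined when p == 0 (division by zero), as in Example 13."""
    return (1 - p) / p if p else None

def payout(stake, odds_lose, odds_win):
    """Total returned on a winning bet placed at odds_lose:odds_win against."""
    return stake + stake * Fraction(odds_lose, odds_win)
```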
{"url":"http://www.mathsteacher.com.au/year10/ch05_probability/07_odds/odds.htm","timestamp":"2014-04-18T20:47:45Z","content_type":null,"content_length":"15718","record_id":"<urn:uuid:98e0bfce-a7fc-400c-a996-27ac53c32e2d>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00357-ip-10-147-4-33.ec2.internal.warc.gz"}
The Input-output Relation Of Nonlinear System ... | Chegg.com

The input-output relation of a nonlinear system is y(t) = x^2(t), where x(t) is the input and y(t) is the output.

a. The signal x(t) is band-limited with a maximum frequency of 2000π rad/s. Determine if y(t) is also band-limited, and if so, what is its maximum frequency?

Electrical Engineering
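Since squaring in the time domain corresponds to convolving the spectrum with itself, y(t) is also band-limited and its bandwidth doubles to 4000π rad/s. A quick numeric check with a single tone at the band edge (assumes NumPy):

```python
import numpy as np

# A tone at the band edge of x(t): 2000*pi rad/s = 1000 Hz.
fs, f0 = 16000.0, 1000.0                  # sample rate and tone frequency, Hz
t = np.arange(0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * f0 * t)

# Spectrum of y = x^2: cos^2 produces a DC term plus a tone at 2*f0.
Y = np.abs(np.fft.rfft(x ** 2))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
peak = freqs[Y[1:].argmax() + 1]          # strongest non-DC component
print(peak)                               # 2000.0 Hz, i.e. twice the band edge
```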
{"url":"http://www.chegg.com/homework-help/questions-and-answers/input-output-relation-nonlinear-system-y-t-x-2-t-x-t-imput-y-t-output--signal-x-t-band-lim-q1891581","timestamp":"2014-04-21T02:47:19Z","content_type":null,"content_length":"20686","record_id":"<urn:uuid:391f6170-ae5a-4cb9-aebc-8c24b39c2a11>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00402-ip-10-147-4-33.ec2.internal.warc.gz"}
QLineF Class Reference The QLineF class provides a two-dimensional vector using floating point precision. More... Member Function Documentation QLineF::QLineF () Constructs a null line. QLineF::QLineF ( const QPointF & p1, const QPointF & p2 ) Constructs a line object that represents the line between p1 and p2. Constructs a line object that represents the line between (x1, y1) and (x2, y2). QLineF::QLineF ( const QLine & line ) Construct a QLineF object from the given integer-based line. See also toLine(). QPointF QLineF::p1 () const Returns the line's start point. See also setP1(), x1(), y1(), and p2(). QPointF QLineF::p2 () const Returns the line's end point. See also setP2(), x2(), y2(), and p1(). qreal QLineF::x1 () const Returns the x-coordinate of the line's start point. See also p1(). qreal QLineF::x2 () const Returns the x-coordinate of the line's end point. See also p2(). qreal QLineF::y1 () const Returns the y-coordinate of the line's start point. See also p1(). qreal QLineF::y2 () const Returns the y-coordinate of the line's end point. See also p2(). qreal QLineF::angle () const Returns the angle of the line in degrees. Positive values for the angles mean counter-clockwise while negative values mean the clockwise direction. Zero degrees is at the 3 o'clock position. This function was introduced in Qt 4.4. See also setAngle(). qreal QLineF::angleTo ( const QLineF & line ) const Returns the angle (in degrees) from this line to the given line, taking the direction of the lines into account. If the lines do not intersect within their range, it is the intersection point of the extended lines that serves as origin (see QLineF::UnboundedIntersection). The returned value represents the number of degrees you need to add to this line to make it have the same angle as the given line, going counter-clockwise. This function was introduced in Qt 4.4. See also intersect(). qreal QLineF::dx () const Returns the horizontal component of the line's vector. 
See also dy() and pointAt().

qreal QLineF::dy () const
Returns the vertical component of the line's vector.
See also dx() and pointAt().

QLineF QLineF::fromPolar ( qreal length, qreal angle ) [static]
Returns a QLineF with the given length and angle. The first point of the line will be on the origin.
Positive values for the angles mean counter-clockwise while negative values mean the clockwise direction. Zero degrees is at the 3 o'clock position.
This function was introduced in Qt 4.4.

IntersectType QLineF::intersect ( const QLineF & line, QPointF * intersectionPoint ) const
Returns a value indicating whether or not this line intersects with the given line.
The actual intersection point is extracted to intersectionPoint (if the pointer is valid). If the lines are parallel, the intersection point is undefined.

bool QLineF::isNull () const
Returns true if the line is not set up with valid start and end point; otherwise returns false.

qreal QLineF::length () const
Returns the length of the line.
See also setLength().

QLineF QLineF::normalVector () const
Returns a line that is perpendicular to this line with the same starting point and length.
See also unitVector().

QPointF QLineF::pointAt ( qreal t ) const
Returns the point at the parameterized position specified by t. The function returns the line's start point if t = 0, and its end point if t = 1.
See also dx() and dy().

void QLineF::setP1 ( const QPointF & p1 )
Sets the starting point of this line to p1.
This function was introduced in Qt 4.4.
See also setP2() and p1().

void QLineF::setP2 ( const QPointF & p2 )
Sets the end point of this line to p2.
This function was introduced in Qt 4.4.
See also setP1() and p2().

void QLineF::setAngle ( qreal angle )
Sets the angle of the line to the given angle (in degrees). This will change the position of the second point of the line such that the line has the given angle.
Positive values for the angles mean counter-clockwise while negative values mean the clockwise direction. Zero degrees is at the 3 o'clock position.
This function was introduced in Qt 4.4.
See also angle().

void QLineF::setLength ( qreal length )
Sets the length of the line to the given length. QLineF will move the end point - p2() - of the line to give the line its new length. If the line is a null line, the length will remain zero regardless of the length specified.
See also length() and isNull().

void QLineF::setLine ( qreal x1, qreal y1, qreal x2, qreal y2 )
Sets this line to the start in x1, y1 and end in x2, y2.
This function was introduced in Qt 4.4.
See also setP1(), setP2(), p1(), and p2().

void QLineF::setPoints ( const QPointF & p1, const QPointF & p2 )
Sets the start point of this line to p1 and the end point of this line to p2.
This function was introduced in Qt 4.4.
See also setP1(), setP2(), p1(), and p2().

QLine QLineF::toLine () const
Returns an integer based copy of this line. Note that the returned line's start and end points are rounded to the nearest integer.
See also QLineF().

void QLineF::translate ( const QPointF & offset )
Translates this line by the given offset.

void QLineF::translate ( qreal dx, qreal dy )
This is an overloaded function. Translates this line the distance specified by dx and dy.

QLineF QLineF::translated ( const QPointF & offset ) const
Returns this line translated by the given offset.
This function was introduced in Qt 4.4.

QLineF QLineF::translated ( qreal dx, qreal dy ) const
This is an overloaded function. Returns this line translated the distance specified by dx and dy.
This function was introduced in Qt 4.4.

QLineF QLineF::unitVector () const
Returns the unit vector for this line, i.e. a line starting at the same point as this line with a length of 1.0.
See also normalVector().

bool QLineF::operator!= ( const QLineF & line ) const
Returns true if the given line is not the same as this line.
A line is different from another line if their start or end points differ, or the internal order of the points is different.

bool QLineF::operator== ( const QLineF & line ) const
Returns true if the given line is the same as this line.
A line is identical to another line if the start and end points are identical, and the internal order of the points is the same.
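The intersect() entry above compresses a lot of geometry. As a hedged illustration, here is the underlying 2-D cross-product test in plain Python (this is not Qt's implementation; the function name and the returned strings merely stand in for Qt's IntersectType values):

```python
def intersect(p1, p2, q1, q2):
    """Classify the intersection of line p1-p2 with line q1-q2,
    mirroring the semantics of QLineF::intersect(). Points are
    (x, y) tuples; returns (kind, point)."""
    d1 = (p2[0] - p1[0], p2[1] - p1[1])    # direction of line 1
    d2 = (q2[0] - q1[0], q2[1] - q1[1])    # direction of line 2
    denom = d1[0] * d2[1] - d1[1] * d2[0]  # 2-D cross product
    if denom == 0:
        return "parallel", None            # intersection point undefined
    # Solve p1 + t*d1 = q1 + u*d2 for the line parameters t and u.
    t = ((q1[0] - p1[0]) * d2[1] - (q1[1] - p1[1]) * d2[0]) / denom
    u = ((q1[0] - p1[0]) * d1[1] - (q1[1] - p1[1]) * d1[0]) / denom
    point = (p1[0] + t * d1[0], p1[1] + t * d1[1])
    # "bounded" when the crossing lies within both segments.
    bounded = 0 <= t <= 1 and 0 <= u <= 1
    return ("bounded" if bounded else "unbounded"), point
```

For example, the diagonals of the unit-2 square, (0,0)-(2,2) and (0,2)-(2,0), give a bounded intersection at (1, 1).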
{"url":"http://idlebox.net/2010/apidocs/qt-everywhere-opensource-4.7.0.zip/qlinef.html","timestamp":"2014-04-20T00:46:55Z","content_type":null,"content_length":"36126","record_id":"<urn:uuid:923e85af-8578-40bf-bf03-f9e9acba8bfa>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
the cross product

March 21st 2010, 07:23 PM

i wasn't @ school the day we learned the cross product. heres my first question, just help me get this one so i can complete the worksheet please!!!

<-8,13,-2> X <-2,19,1>

March 21st 2010, 09:25 PM

Do you know how to find determinants? There isn't really anything fancy to understand; the cross product is just a definition. If "a" and "b" are the vectors, the cross product is the vector defined by:

$a \times b = \langle a_2b_3 - a_3b_2,\ a_3b_1 - a_1b_3,\ a_1b_2 - a_2b_1 \rangle$

This is difficult to remember, so it's best if you know how to find determinants. If you scroll down to "matrix" notation, you'll see how the determinant definition makes 3-by-3 cross products a little easier to remember.
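To check the worksheet problem against this definition, one can plug the vectors into NumPy (my addition; the thread itself never mentions NumPy):

```python
import numpy as np

a = np.array([-8, 13, -2])
b = np.array([-2, 19, 1])

# Component by component, straight from the definition above:
manual = np.array([a[1]*b[2] - a[2]*b[1],
                   a[2]*b[0] - a[0]*b[2],
                   a[0]*b[1] - a[1]*b[0]])

print(manual)            # [  51   12 -126]
print(np.cross(a, b))    # same result from NumPy's built-in
```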
{"url":"http://mathhelpforum.com/pre-calculus/134964-cross-product-print.html","timestamp":"2014-04-18T21:51:12Z","content_type":null,"content_length":"5414","record_id":"<urn:uuid:ca1cca9e-38f7-4084-b66c-74b2f5de27d5>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] masked arrays and nd_image
Stephen Walton stephen.walton at csun.edu
Wed Aug 11 12:02:08 CDT 2004

On Wed, 2004-08-11 at 06:24, Todd Miller wrote:
> I think the key here may be the "filled()" method which lets you convert
> a masked array into a NumArray with the masked values filled into some
> fill value, say 0. I'm not sure what the post convolution mask value
> should be.

I hope I'm not jumping in where I don't belong, but here at SFO/CSUN we've had quite a bit of experience with convolutions and correlations of time series (serieses?) with missing data points. I'm sure you all know this, but: for the scaling to be correct, you have to not only mask out the values you don't want, but normalize the sum to reflect the fact that different numbers of values will appear in the sum. Our MATLAB code to convolve x and y, with bad points marked by NaNs, is:

for i = 1 : xlen-ylen+1
    if isempty(b)
        z(i) = NaN;

I'd be happy to know how to code up the equivalent in numarray. In the above, note that x1 is x padded on both ends with ylen-1 NaNs. Unfortunately, and again I'm sure everyone knows this, you can't use FFTs to speed up convolutions/correlations if you have missing data points, so you have to use discrete techniques. The numerical analysis literature refers to this problem as Fourier analysis of unequally spaced data. The only publications and algorithms I could find went the wrong way: given an unequally spaced set of points in Fourier space, find the most likely reconstruction in real space.

Stephen Walton <stephen.walton at csun.edu>
Dept. of Physics & Astronomy, Cal State Northridge

More information about the Numpy-discussion mailing list
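The normalization idea in the post (mask out bad points, then rescale each sum by the number of valid contributors) can be sketched as follows. numarray is long retired, so this uses modern NumPy; the function name and the exact normalization choice are illustrative, not the original MATLAB code:

```python
import numpy as np

def nan_correlate(x, y):
    """Discrete correlation of x with template y, ignoring NaNs and
    rescaling each partial sum up to the full window length, along the
    lines described in the mailing-list post."""
    xlen, ylen = len(x), len(y)
    # Pad x on both ends with ylen-1 NaNs, as the post describes for x1.
    x1 = np.concatenate([np.full(ylen - 1, np.nan), x,
                         np.full(ylen - 1, np.nan)])
    z = np.empty(xlen + ylen - 1)
    for i in range(len(z)):
        window = x1[i:i + ylen]
        good = ~np.isnan(window) & ~np.isnan(y)
        if not good.any():
            z[i] = np.nan                 # no valid points at this lag
        else:
            # Scale the partial sum up to the full window length.
            z[i] = np.sum(window[good] * y[good]) * ylen / good.sum()
    return z
```

Note that FFT-based methods indeed cannot be used here, as the post says, so the loop stays discrete.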
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2004-August/003376.html","timestamp":"2014-04-16T10:34:44Z","content_type":null,"content_length":"4745","record_id":"<urn:uuid:d30d9b60-8f75-466c-aeb4-80da2d295a5a>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00265-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: [TowerTalk] Quagi Optimization

At 08:05 AM 2/20/2006, Al Williams wrote:
>----- Original Message ----- From: "Jim Lux" <jimlux@earthlink.net>
>To: "Pat Barthelow" <aa6eg@hotmail.com>; <towertalk@contesting.com>
>>In general, elements closer to the driven element are more critical (they
>>have more current).
>For a REGULAR YAGI do the parasitic element's induced voltages cause
>radiation which then cause currents in the neighboring elements and so on?
>I am pretty sure that this is true but do these SECONDARY currents have
>the amplitude and phase to have much effect on the total pattern?

You bet they do. That's part of the challenge of Yagi antenna design (using analytic methods): every element interacts with every other. You're essentially solving an N×N matrix equation to create a set of element excitations that gives you the desired pattern, with the elements of the matrix being the mutual coupling between the various pairs of elements. A lot of the early Yagi work focussed on coming up with convenient formulations that would let you do this without too much complexity (imagine inverting a 10x10 matrix by hand!). Typically, you'd approximate the current along the element with something like a sinusoidal distribution, to make the calculation easier. And, if you want superdirective gain (which you do), adjacent elements are actually somewhat out of phase with each other, which has the effect of cancelling side and back radiation more than it cancels the forward radiation. This has to be traded off with the IR losses in the elements (jacking up the coupling by putting elements close together may give you great directivity, but also huge losses, so the gain will be terrible). Relatively small changes in the current phase and amplitude will reduce the amount of cancellation, so the performance is degraded.

>Do antenna modelling programs take these secondary currents and radiation
>into account when summing up the segments and elements?

Yes, they do.
In fact, what they do is calculate the mutual coupling from each segment in the model to every other segment in the model, build up a HUGE matrix, and solve it. So, for a 1000 segment model, you're talking solving 1000 equations with 1000 unknowns. Once you know the current in each of those 1000 segments, you can calculate the far field by summing the contributions from each segment. TowerTalk mailing list
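The matrix step described above can be sketched in a few lines: with a mutual-impedance matrix Z between elements and a voltage excitation vector V, the element currents I solve Z I = V. The impedance values below are invented purely for illustration, not real antenna data:

```python
import numpy as np

# Illustrative 3-element mutual-impedance matrix (ohms). The numbers
# are made up for the example; real values come from the geometry.
Z = np.array([[73.0 + 42.5j, 40.0 - 28.0j,  5.0 - 35.0j],
              [40.0 - 28.0j, 73.0 + 42.5j, 40.0 - 28.0j],
              [ 5.0 - 35.0j, 40.0 - 28.0j, 73.0 + 42.5j]])

# Only the driven element (the middle one) is excited with 1 V;
# the parasitic elements see 0 V at their (shorted) terminals.
V = np.array([0.0, 1.0, 0.0])

I = np.linalg.solve(Z, V)   # every element interacts with every other
print(np.abs(I))            # induced currents on all three elements
```

Once the currents are known, the far field follows by summing each element's (or segment's) contribution, exactly as the post says.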
{"url":"http://lists.contesting.com/_towertalk/2006-02/msg00281.html?contestingsid=b6snlbvigt4u6v62mq1pjrso50","timestamp":"2014-04-20T11:43:48Z","content_type":null,"content_length":"10204","record_id":"<urn:uuid:c42fa002-c15a-45c2-98a2-23c2042a7510>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
Figure 3 shows the gyro principle structure diagram, where the middle section is a sensitive element, called a silicon pendulum. Through bulk micro-mechanical processing technology, it is fabricated on a mono-Si wafer. There are two copper-plated ceramic electrodes on both sides of the silicon pendulum, the copper having been deposited by means of a sputtering process. The plate electrodes and silicon pendulum form symmetrical detection capacitances, where m is the positive pole of one detection capacitance and n is the positive pole of the other, as shown in Figure 4. Figure 3 shows that elastic torsion beams are located on both ends of the constraint center of the silicon pendulum and are fixed on the frame. Nitrogen is encapsulated between the electrodes and the silicon pendulum. When the silicon pendulum vibrates around the constraint center, the capacitance changes periodically. Then, via the capacitance pick-up circuit, the gyro outputs a sensing signal. The rotating aircraft-driven silicon micromachined gyro is installed on the rotating aircraft. In Figure 3, oy is the output axis, also called the precession axis, ox is the input axis and oz is the driving axis, which coincides with the spin axis of the rotating aircraft. When it is rotating at high speed around the spin axis, the aircraft drives the silicon pendulum to rotate at the same angular velocity as the aircraft spin angular velocity φ̇, and the silicon pendulum thereby acquires angular momentum.
The angular momentum has the same direction as the oz axis. If the aircraft turns with angular velocity Ω around the input axis, the silicon pendulum is forced to precess. Thus, along the direction of the precession axis oy, the precession moment acts upon the silicon pendulum. The precession moment is balanced by various moments such as the inertia moment, the damping torque and the elastic moment. Concerning these moments: because the silicon pendulum vibrates around the precession axis, its angular acceleration generates an inertia moment; since it has angular velocity, the squeeze-film resistance of the nitrogen generates a damping torque; and the elastic torsion beams also generate an elastic moment. In short, when the aircraft undertakes a forced turn around the input axis ox, the silicon pendulum precesses so that its angular momentum tends to align with the input transverse angular velocity Ω. The term α̇ represents the angular velocity of the silicon pendulum vibration. Suppose that the input angular velocity Ω is constant; then, with the aircraft spin, the silicon pendulum will undergo simple harmonic vibration. After the detection circuit and conditioning circuit, the gyro outputs a periodic voltage signal, as shown in Figure 5. A case is introduced to validate the correctness and applicability of the Si micromachined gyro driven by a rotating aircraft. The experimental platform is shown in Figure 6: the gyro was installed on an emulator and the emulator was mounted on an angular vibration table. The spin angular velocity was given by the emulator, and its value was adjusted to 5,400°/s (15 Hz). The input transverse angular velocity was provided by the angular vibration table and was set to a constant value of 180°/s. The gyro's output signal was measured with a scope, and the measured waveform is shown in Figure 7.
If the aircraft is not rotating at a constant rate but rather following a sinusoidal function with a frequency of 1 Hz, the gyro's output signal is an AM signal and the envelope of the signal corresponds to the sinusoidal function, as shown in Figure 8. The test control panel is shown in Figure 9. The waveforms in Figure 5 and Figure 7 are in close accordance; the test therefore validates the previous analysis. The above-mentioned motion can be described as rigid-body motion around a fixed point. Therefore, using the Euler dynamic equation for rigid movement around a fixed point and its projection on the precession axis, we can derive the motion equation, Equation (1), as in [10]:

$$J_y\ddot{\alpha} + D\dot{\alpha} + \left[(J_z - J_x)\dot{\varphi}^2 + K_T\right]\alpha = (J_z + J_y - J_x)\,\Omega\,\dot{\varphi}\cos(\dot{\varphi}t) \qquad (1)$$

where $J_x$, $J_y$ and $J_z$ represent the moments of inertia of the silicon pendulum relative to x, y and z, respectively; $K_T$ is the torsion stiffness of the elastic torsion beam; $D$ is the damping factor; $\dot{\alpha}$ represents the vibration angular velocity of the silicon pendulum; $\Omega$ represents the input angular velocity; and $\dot{\varphi}$ represents the spin angular velocity of the aircraft.
The first item of Equation (4) would be quickly attenuated to zero, then the forced vibration will reach steady state and it can be calculated as follows: α = ( J z + J y − J x ) φ ˙ Ω [ ( J z − J x − J y ) φ ˙ 2 + K T ] 2 + ( D φ ˙ ) 2 cos ( φ ˙ t − β ) Then, the amplitude of the angular vibration can be written as: α m = ( J z + J y − J x ) φ ˙ [ ( J z − J x − J y ) φ ˙ 2 + K T ] 2 + ( D φ ˙ ) 2 Ω The phase difference is: β = arctan ( 2 n φ ˙ ω n 2 − φ ˙ 2 ) Figure 10 is the structural diagram of the silicon pendulum. The gyro frame is fixed during packaging and cannot move. There is a movable part, called the silicon pendulum, inside the frame. Its two ends are elastic torsion beams, which connect with the frame. There are 14 damping bars on two sides of the silicon pendulum. Squeeze film damping, has a big impact on the dynamic response of MEMS microstructure, and can be used to adjust the quality factor of the micro-mechanical structure. Increased damping bars can reduce the pressure of the gas film, and the gas film damping is reduced accordingly, so in the MEMS design, in order to speed up the release or decrease damping, we usually include a release bar or so-called damping bar in the silicon pendulum. In the chip process, we have four steps. The first step shapes a movable section, the second step etches the damping bars, the next is buffer layer etching on the beams and the last step forms the silicon pendulum. The masks of the four steps and chip are shown in Figure 11. Using the above four step etching, the final chip is obtained. A picture of the chip is shown in Figure 12. Considering that it is convenient to calculate the torsion stiffness of an elastic torsion beam, it is supposed that on processing the torsion angle is proportional to the length of the beam, warping of the cross sections of elastic torsion beam are same, and values are same, but in opposite direction, for the torsion moment of the two ends of an elastic torsion beam. 
Under the condition of the above assumptions, using elastic mechanics, the torsion stiffness with rectangular cross section can be obtained by: K T = 512 G t 3 w π 4 L ∑ n = 1 , 3 , 5 , ... ∞ 1 n 4 ( 1 − 2 t n π w tanh n π w 2 t ) where L, w, and t are the length, width and thickness of the elastic torsion beam, respectively. G is the sheer modulus of the silicon material. Practically the torsion angle α is limited to 0.003 rad, K[T] is large enough, thus we can simplify Equation (8), and then Equation (9) can be obtained: K T = 2 3 ⋅ G t 3 w L If G, L, w, and t are chosen to be: L = 600 μ m , w = 400 μ m , t = 75 μ m , G = 5.1 × 10 10 N / m 2 the calculated result is: K T = 2.507 × 10 − 3 N ⋅ m The displacement of the silicon pendulum edge node is far less than both the lateral size and gap relative to the electrodes in the torsion motion of Figure 4. By using numerical FEA simulation, the maximum displacement is only 886 nm. Therefore, in the analysis of the problem, we make the assumption that the silicon pendulum undergoes a rigid motion of small amplitude, the gas film gap is just a time function, and the linear solution of the Reynolds equation is given for calculating the damping coefficient of the silicon pendulum. When a rectangular plate with length A and width B is moving relative to the undersurface of a gap distance of h, the damping coefficient of the squeeze film resistance can be expressed by: f = F damp d h / d t = A B 3 μ h 3 [ 1 − 192 A π 5 ∑ n = 1 , 3 , 5 , ... ∞ 1 n 5 tan h n π 4 2 B ] where μ is the gas viscosity coefficient. Since the infinite series can quickly converge, the damping coefficient is approximately equal to the first term of Equation (12), that is: f = A B 3 μ h 3 [ 1 − 192 A π 5 tanh π 4 2 B ] In order to calculate the damping factor D shown in Figure 10, the first step is to figure out the damping factor of the entire silicon pendulum, then, to calculate the damping factor of the damping hole, notch and fourteen damping bars. 
Finally, the entire silicon pendulum damping factor minus the damping factors of the missing parts above. As a result, the real damping factor of the silicon pendulum is The chosen relevant dimensions are: h = 20 μm, A = B =14 mm, μ = 17.81×10^−^6 P[a]·s, thus, the damping factor of the silicon pendulum can be figured out: D = 1.676 × 10 − 4 N ⋅ m ⋅ s Moment of inertia (J[x], J[y], J[z]) relative to the principal axes of inertia are calculated in the same way as the damping factor. Finally, the obtained moment of inertia can respectively be written as: J x = 2.27794 g ⋅ mm 2 J y = 2.1527 g ⋅ mm 2 J z = 4.43064 g ⋅ mm 2
{"url":"http://www.mdpi.com/1424-8220/13/8/11051/xml","timestamp":"2014-04-19T14:36:38Z","content_type":null,"content_length":"77259","record_id":"<urn:uuid:14ce00d7-c26b-417c-9868-ff2a81c86e12>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
Bristol, PA Algebra 2 Tutor Find a Bristol, PA Algebra 2 Tutor ...Likewise, a student struggling with Algebra I is a far cry from one going for an A in honors pre-calculus. I am comfortable and experienced with all levels of students. Because many of my students have achieved dramatic increases in their scores, some people get the impression that I am mainly an SAT coach. 23 Subjects: including algebra 2, English, calculus, geometry I went to school for computer engineering at Carnegie Mellon University, changed my major to chemical engineering and transferred to the University of Delaware where I completed my degree. I started a business with a friend of mine, which I ran successfully for about 12 years. I then changed my career and became a teacher. 15 Subjects: including algebra 2, chemistry, calculus, physics I completed my master's in education in 2012 and having this degree has greatly impacted the way I teach. Before this degree, I earned my bachelor's in engineering but switched to teaching because this is what I do with passion. I started teaching in August 2000 and my unique educational backgroun... 12 Subjects: including algebra 2, calculus, physics, ACT Math I have been teaching Algebra and middle school math for 4 years in Camden, NJ. My experience includes classroom teaching, after-school homework help, and one to one tutoring. I frequently work with students far below grade level and close education gaps. 8 Subjects: including algebra 2, geometry, algebra 1, SAT math ...While getting my Master's degree I worked with children with special needs as a Teacher's Assistant so I am comfortable and experienced in all types of learners. I am Pennsylvania state certified to teach K-6. I am currently working as a 4th grade teacher. 
15 Subjects: including algebra 2, reading, writing, geometry
{"url":"http://www.purplemath.com/Bristol_PA_Algebra_2_tutors.php","timestamp":"2014-04-19T12:12:03Z","content_type":null,"content_length":"24107","record_id":"<urn:uuid:0333ff88-af12-4807-a39f-934e39053e35>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
Tuesday, March 25, 2008

Mathematical physicist wins 2008 Templeton Prize

The 2008 Templeton Prize has been awarded to Polish mathematical physicist Michael Heller. Heller has worked for more than 40 years in theology, philosophy, mathematics and cosmology, and intends to use the £820,000 prize to set up a cross-university and inter-disciplinary institute to investigate questions in science, theology and philosophy.

16th century depiction of Genesis (Michelangelo, Sistine Chapel): God creates Adam. Like Galileo, Heller thinks that mathematics is the "language of God."

The Templeton Prize was founded in 1972 by philanthropist Sir John Templeton, and is awarded annually to a living person for "progress toward research or discoveries about spiritual realities". It is the world's largest annual monetary prize of any kind given to an individual (£820,000). Plus reported on John Barrow's success in 2006.

Heller has been rewarded for "developing sharply focused and strikingly original concepts on the origin and cause of the Universe, often under intense (communist Poland) governmental repression." Heller's work these days is largely in non-commutative geometry, which he uses to attempt to remove the problem of a cosmological singularity at the origin of the Universe. "If on the fundamental level of physics there is no space and no time, as many physicists think," says Heller, "non-commutative geometry could be a suitable tool to deal with such a situation." You can read more on non-commutative geometry in the Plus article Quantum Geometry.

posted by westius @ 2:00 PM
{"url":"http://plus.maths.org/content/comment/reply/4174","timestamp":"2014-04-17T07:07:01Z","content_type":null,"content_length":"23260","record_id":"<urn:uuid:0f60f84f-d879-46ca-9927-f0a47dde456c>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
Newest 'quantum-mechanics cv.complex-variables' Questions

We are considering the instantaneous eigenstates of an analytically time-dependent hamiltonian and I would like to know how legitimate it is to extend them to the complex plane. Specifically, our ...

The goal of this question is to conceptualize in some way the fact that the Riemann zeta function $\zeta(s)$, and other zeta functions like it, have analytic continuations. Background I have by now
{"url":"http://mathoverflow.net/questions/tagged/quantum-mechanics+cv.complex-variables","timestamp":"2014-04-19T18:04:47Z","content_type":null,"content_length":"35941","record_id":"<urn:uuid:1de5baaa-b051-4918-b98a-21a6d465a8a4>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
Canonical heights on the jacobians of curves of genus 2 and the infinite descent

Flynn, E. V. and Smart, N. P. (1997) Canonical heights on the jacobians of curves of genus 2 and the infinite descent. Acta Arithmetica, LXXIX (4). pp. 333-352. ISSN 0065-1036

We give an algorithm to compute the canonical height on a Jacobian of a curve of genus 2. The computations involve only working with the Kummer surface and so lengthy computations with divisors in the Jacobian are avoided. We use this height algorithm to give an algorithm to perform the infinite descent stage of computing the Mordell-Weil group. This last stage is performed by a lattice enlarging procedure.
{"url":"http://eprints.maths.ox.ac.uk/264/","timestamp":"2014-04-19T14:37:50Z","content_type":null,"content_length":"13116","record_id":"<urn:uuid:8535b8fd-500b-41e1-a8d4-2105b5913c57>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
Maths in a minute: St Paul's dome

One of London's most loved landmarks, the dome of St Paul's, has looked over the city for more than three centuries. However, many people don't realise that it hides an intriguing example of the interplay between maths and architecture. Seen from the outside, the building is crowned by a glorious hemispherical dome supporting a magnificent lantern. But what you see from the inside is not the same as what you see from the outside. Sir Christopher Wren created an ingenious design of three nested domes: a hemispherical outer dome to dominate the skyline, a steeper inner dome more fitting with the internal dimensions of the cathedral, and a hidden middle dome. This middle dome was necessary to provide structural support to the outer dome and lantern. Although the outer dome's spherical form is important aesthetically, it is an inherently weak structure and would never have carried the weight of the lantern. And although the inner dome appears to be open to the lantern above, it is actually the inside of the middle dome painted to appear as the lantern.

Christopher Wren's sketch for the triple dome design for St Paul's cathedral. It clearly shows his plotting of the cubic curve, y=x^3, to give the shape for the middle dome. (Image from the British

This early sketch (c. 1690) of the triple dome design shows Wren using a mathematical curve to define the shape of the middle dome; the cubic curve y=x^3 is clearly plotted on axes marked on the design. The curve not only defines the shape of the middle dome but also the height and width of the surrounding abutments, positioned so as to contain a continuation of the cubic curve to ground level. Wren was applying the theory of his colleague Robert Hooke about the mathematical shapes of ideal masonry domes and arches, one of the earliest instances of mathematical science being used as part of the design process.
In 1675 Hooke published the anagram Ut pendet continuum flexile, sic stabit contiguum rigidum inversum, which translates to "as hangs the flexible line, so but inverted will stand the rigid arch". Hooke had correctly understood that the tension passing through a hanging cord is equivalent to the compression in a standing arch. And so the natural form of a hanging cord — a catenary — would also be the shape of the line of thrust of an arch. For an arch to be stable it needs to contain this line of thrust, either in the material of the arch itself or in its abutments. Therefore the ideal shape for a masonry arch, the shape requiring the least material, is a catenary. Hooke and Wren thought that the ideal shape of a masonry dome would be the cubico-parabolic conoid created by rotating half of the cubic curve, y=x^3. Their mathematical descriptions were very close but the correct equations defining the shape of the catenary and the ideal dome were discovered much later (you can find the details in an excellent paper by Jacques Heyman).

The design of the triple dome continued to evolve after this drawing, with the use of experimental models and the impacts of economics and aesthetics deciding its final shape. The middle dome, as finally constructed, is no longer the pure geometric form of the sketch. But it is clear that its shape is derived from the mathematical concept of a cubic curve, one of the most awe-inspiring instances of the role of mathematics in architecture.

You can read more about the mathematics of architecture and the design of St Paul's at the online exhibition, Compass & Rule, at the Museum of the History of Science, Oxford. The site includes some fascinating footage of the geometric construction of classical architecture using just a compass and rule. You can uncover more maths in our urban environment at Maths in the City and find out more about the maths of engineering and architecture here on Plus.
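Hooke's principle is easy to check numerically: a uniform hanging cord of shape y = a·cosh(x/a) satisfies the equilibrium equation y'' = (1/a)·sqrt(1 + y'^2). A small sketch (my addition; the value a = 1 is chosen arbitrarily):

```python
import math

# Hooke's hanging-chain curve: y = a*cosh(x/a). A uniform cord in
# equilibrium satisfies y'' = (1/a) * sqrt(1 + y'^2); verify this
# with central finite differences at a few sample points.
a = 1.0
f = lambda x: a * math.cosh(x / a)

for x in [-1.0, -0.3, 0.0, 0.5, 1.2]:
    h = 1e-4
    d1 = (f(x + h) - f(x - h)) / (2 * h)             # numerical y'
    d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2     # numerical y''
    rhs = math.sqrt(1 + d1**2) / a
    assert abs(d2 - rhs) < 1e-4, (x, d2, rhs)

print("catenary satisfies the equilibrium equation")
```

The cubico-parabolic conoid y = x^3 that Hooke and Wren proposed for the dome was, as the article notes, only close to the true ideal shape; the catenary check above is exact for the arch case.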
{"url":"http://plus.maths.org/content/maths-minute-st-pauls-dome","timestamp":"2014-04-17T22:38:29Z","content_type":null,"content_length":"26141","record_id":"<urn:uuid:143276d0-39da-4f0c-ae36-2b73fdc6998a>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00649-ip-10-147-4-33.ec2.internal.warc.gz"}
Cambria Heights Math Tutor Find a Cambria Heights Math Tutor ...I have a high NYS Regents exam passing rate in Integrated Algebra and Geometry. Let me help your child become successful in areas such as Pre-algebra, Algebra I, Algebra II, or Geometry! I am very patient and willing to do what it takes to prepare them for their exams,as well as do well in their Math courses.I am a NYS certified Math Teacher for grades 7-12. 4 Subjects: including algebra 1, algebra 2, geometry, prealgebra ...At first, physics didn't make any sense to me either. Before this, most of school is memorization and regurgitation, but physics is more of a way of thinking. After patiently working at it, something clicked and it transformed the way I look at the world. 9 Subjects: including algebra 2, SAT math, calculus, chemistry ...I expect my students to work hard, and to use both sides of their brains: that is, to exercise their creativity as much as their analytic capacity, whatever the subject. I want my students not only excel, but also to enjoy the process of learning, and ultimately to take their education into thei... 30 Subjects: including calculus, GRE, reading, writing ...I can also work with groups of 2-4 children for $80-120/hour. My hours are flexible and I can meet you at the Forest Hills Library or another mutually convenient location. I have worked in a self-contained special education class for the past thirteen years. 5 Subjects: including algebra 1, geometry, prealgebra, elementary math ...Any and every chance I get, I am increasing my knowledge base. I am passionate about sharing my past and newfound knowledge through mentoring, tutoring, teaching, and helping others to achieve their educational aspirations. I have experience tutoring all ages and grade levels in a wide variety of subject areas. 36 Subjects: including ACT Math, SPSS, algebra 1, English
{"url":"http://www.purplemath.com/cambria_heights_math_tutors.php","timestamp":"2014-04-16T04:33:38Z","content_type":null,"content_length":"24106","record_id":"<urn:uuid:1e550223-c943-40cf-8f3b-9ea06dec4b99>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
Warrenville, IL Trigonometry Tutor Find a Warrenville, IL Trigonometry Tutor ...Have tutored many college and high school students in statistics, finance or algebra. Business Career: Worked as an actuary for more than 30 years. Made frequent use of statistical and financial concepts in monitoring company mortality and lapse experience and in pricing policies. 13 Subjects: including trigonometry, geometry, statistics, Microsoft Excel ...As a physicist I work everyday with math and science, and I have a long experience in teaching and tutoring at all levels (university, high school, middle and elementary school). My son (a 5th grader) scores above 99 percentile in all math tests, and you too can have high scores.My PhD in Physics... 23 Subjects: including trigonometry, calculus, physics, statistics ...After I graduated with Physics Major, I started tutoring Physics as well. So, altogether, I have almost 20 years experience of tutoring and 10 years of teaching in the subjects I mentioned. During the tutoring sessions, I encourage students to participate in solving problems by keeping them engaged. 11 Subjects: including trigonometry, physics, algebra 2, geometry ...I have taken Calculus 1,2 & 3 along with Differential Equations, Basic math algebra 1 & 2, Geometry etc. Science classes that I have taken but not limited to are Intro to Physics 1 & Electricity and Magnetism; along with quantum physics. I have taken Biology, Intro to Chemistry, Organic Chemistry and BioChemistry as well! 26 Subjects: including trigonometry, reading, calculus, chemistry ...I've developed an approach to these types of questions that make them considerably easier for students to tackle. I truly enjoy helping students to master the Reading portion of the ACT test. These passages often seem very different, even obscure, to students - based upon what they are used to reading in high school. 20 Subjects: including trigonometry, reading, English, writing
Taxicab Numbers

Curious properties sometimes lurk within seemingly undistinguished numbers. Consider the story concerning Indian mathematician Srinivasa Ramanujan (1887–1920). His friend G.H. Hardy (1877–1947) once remarked that the taxi by which he had arrived had a "dull" number: 1729, or 7 x 13 x 19. Ramanujan was quick to point out that 1729 is actually a "very interesting" number. It's the smallest whole number expressible as a sum of two cubes in two ways: both 1^3 + 12^3 and 9^3 + 10^3 equal 1729. The first published reference to this property of the integer 1729 is in the writings of 17th-century French mathematician Bernard Frénicle de Bessy (1605–1670).

You might then wonder about the identity of the smallest number expressible as the sum of two cubes in three different ways. It's 87539319, discovered in 1957 by John Leech (1926–1992) in the course of an extensive computer search.

87539319 = 167^3 + 436^3 = 228^3 + 423^3 = 255^3 + 414^3

Nowadays, mathematicians define the smallest number expressible as the sum of two cubes in n different ways as the nth taxicab number, denoted Taxicab(n). Hence, Taxicab(2) = 1729 and Taxicab(3) = 87539319.

Interestingly, Hardy and E.M. Wright had proved a theorem guaranteeing that the nth taxicab number exists for any value of n greater than or equal to 1. So the search was on, but finding the numbers turned out to be exceedingly difficult. Taxicab(4) was discovered in 1991 by amateur number theorist E. Rosenstiel, who obtained expert help from computer scientists J.A. Dardis and Colin R. Rosenstiel to track it down.

Taxicab(4) = 6963472309248 = 2421^3 + 19083^3 = 5436^3 + 18948^3 = 10200^3 + 18072^3 = 13322^3 + 16630^3

David W. Wilson found the fifth taxicab number on Nov. 21, 1997. A few months later, Daniel J. Bernstein came upon the same number in an independent investigation.
Taxicab(5) = 48988659276962496 = 38787^3 + 365757^3 = 107839^3 + 362753^3 = 205292^3 + 342952^3 = 221424^3 + 336588^3 = 231518^3 + 331954^3

Wilson's discovery had occurred during a lengthy computer search in which he was trying to extend the list of known pairs of cubes that add up to the same number in four ways. The same search turned up other examples of five-way sums of two cubes, along with 8230545258248091551205288 as the smallest known six-way sum of cubes.

In 1998, Bernstein found a smaller example of a six-way sum of cubes. He also established that the sixth taxicab number had to be greater than 10^18, fencing in the possible value of Taxicab(6). Recently, Randall L. Rathbun came upon an even smaller candidate: 24153319581254312065344. Does it qualify to be Taxicab(6)? No one knows yet. A smaller candidate for the least six-way sum of cubes might lurk among the numbers between 10^18 and Rathbun's result.

It's startling to note that very little is yet known about taxicab numbers. Some related questions are even more challenging. For example, you can ask, as Hardy did of Ramanujan, for the smallest number that is a sum of two fourth powers in two ways. Ramanujan couldn't provide an immediate answer, but it was known to Leonhard Euler (1707–1783): 635318657 = 59^4 + 158^4 = 133^4 + 134^4.

Then it gets tougher. The first number representable as the sum of two fourth powers in three ways must, if it exists, have at least 19 digits. Unlike the situation for cubes, however, there is no theorem yet that guarantees the existence of the relevant fourth-power sums. Are there any numbers that are a sum of two fifth powers in two ways? No one appears to know.

You can also look for the equivalent of taxicab numbers when you allow both positive and negative cubes. For example, in the case of three-way sums of cubes, 728 = 6^3 + 8^3 = 9^3 – 1^3 = 12^3 – 10^3.
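All of the representations quoted above can be confirmed with a few lines of arithmetic; a throwaway sanity check (exact integer arithmetic, so any transcription error would trip an assertion):

```python
# Taxicab(4): four ways to write 6963472309248 as a sum of two cubes.
t4 = 6963472309248
assert all(a**3 + b**3 == t4 for a, b in
           [(2421, 19083), (5436, 18948), (10200, 18072), (13322, 16630)])

# Taxicab(5): five ways.
t5 = 48988659276962496
assert all(a**3 + b**3 == t5 for a, b in
           [(38787, 365757), (107839, 362753), (205292, 342952),
            (221424, 336588), (231518, 331954)])

# Euler's fourth-power example and the three-way signed-cube example.
assert 59**4 + 158**4 == 133**4 + 134**4 == 635318657
assert 6**3 + 8**3 == 9**3 - 1**3 == 12**3 - 10**3 == 728
print("all identities check out")
```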
The smallest positive integer that can be written as the sum of two positive or negative cubes in n ways is sometimes called the nth cabtaxi number. The eighth cabtaxi number is now known, and the ninth must have at least 19 digits.

In this nook of number theory, as in many others, questions abound and answers are elusive. In 1991, Rosenstiel and his collaborators concluded, "This leaves open the possibility of further projects, as and when yet more powerful computers with suitable algorithms can be applied to these remaining unsolved problems." More than a decade later, progress remains slow.
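The kind of exhaustive search behind these discoveries can be sketched in miniature: tabulate every sum a^k + b^k up to some bound and look for values that are hit several times. The function below is an illustrative sketch of that idea, not a reconstruction of any of the actual search programs; the name and the fixed search limits are my own, the signed option covers the cabtaxi variant, and the limit must be chosen large enough for the answer or the result is only a candidate.

```python
from collections import defaultdict

def least_multi_sum(ways, power=3, limit=500, signed=False):
    """Smallest positive integer expressible as a^power + b^power
    (0 < a <= b <= limit) in at least `ways` ways.  With signed=True,
    differences b^power - a^power are allowed too (the cabtaxi variant).
    Brute force: correct only if `limit` is big enough for the answer."""
    reps = defaultdict(set)
    for a in range(1, limit + 1):
        for b in range(a, limit + 1):
            reps[a**power + b**power].add((a, b))
            if signed and a < b:
                reps[b**power - a**power].add((-a, b))
    hits = [n for n, r in reps.items() if len(r) >= ways]
    return min(hits) if hits else None

print(least_multi_sum(2, limit=15))               # 1729 = Taxicab(2)
print(least_multi_sum(3, limit=450))              # 87539319 = Taxicab(3)
print(least_multi_sum(3, signed=True, limit=15))  # 728, three ways with signed cubes
print(least_multi_sum(2, power=4, limit=160))     # 635318657, Euler's example
```

A naive table like this runs out of steam quickly: Taxicab(4) already involves cube bases near 20,000, so the table holds roughly 200 million pairs, which is why the later discoveries required long, carefully engineered computer searches.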
Probabilistic Methods and Algorithms Abello, J., Buchsbaum, A. and Westbrook, J. 1998 A functional approach to external graph algorithms. Proc. 6th European Symp. on Algorithms, pp. 332 343. [.ps.Z] [CS] Achacoso, T. B. and Yamamoto,W. S. 1992 Ay s Neuroanatomy of C. elegans for Computation. Boca Raton, FL: CRC Press. Aczel, J. and Daroczy, Z. 1975 On measures of information and their characterizations. New York: Academic Press. Adamic, L., Lukose, R. M., Puniyani, A. R. and Huberman, B. A. 2001 Search in power-law networks. Phys. Rev. E 64, 046135. [Pub] Aggarwal, C. C., Al-Garawi, F. and Yu, P. S. 2001 Intelligent crawling on the World Wide Web with arbitrary predicates. In Proc. 10th Int. World Wide Web Conf., pp. 96 105. [Pub] [CS] Aiello,W., Chung, F. and Lu, L. 2001A Random Graph Model for Power Law Graphs. Experimental Math. 10, 53 66. [.pdf] [CS] Aji, S. M. and McEliece, R. J. 2000 The generalized distributive law. IEEE Trans. Inform. Theory 46, 325 343. [.pdf] Albert, R. and Barabási, A.-L. 2000 Topology of evolving networks: local events and universality. Phys. Rev. Lett. 85, 5234 5237. [.pdf] Albert, R., Jeong, H. and Barabási, A.-L. 1999 Diameter of the World-Wide Web. Nature 401, 130. [.pdf] Albert, R., Jeong, H. and Barabási, A.-L. 2000 Error and attack tolerance of complex networks. Nature 406, 378 382. [.pdf] Allwein, E. L., Schapire, R. E. and Singer, Y. 2000 Reducing multiclass to binary: a unifying approach for margin classifiers. Proc. 17th Int. Conf. on Machine Learning, pp. 9 16. San Francisco, CA: Morgan Kaufmann. [Pub] [CS] Amaral, L. A. N., Scala, A., Barthélémy, M. and Stanley, H. E. 2000 Classes of small-world networks. Proc. Natl Acad. Sci. 97, 11 149 11 152. [.pdf] Amento, B., Terveen, L. and Hill, W. 2000 Does authority mean quality? Predicting expert quality ratings ofWeb documents. Proc. 23rd Ann. Int. ACM SIGIR Conf. on Research and Development in Information Retrieval, pp. 296 303. New York: ACM Press. [.pdf] [CS] Anderson, C. 
R., Domingos, P. and Weld, D. 2001 Adaptive Web navigation for wireless devices. Proc. 17th Int. Joint Conf. on Artificial Intelligence, pp. 879 884. San Francisco, CA: Morgan Kaufmann. [.pdf] [CS] Anderson, C. R., Domingos, P. andWeld, D. 2002 Relational markov models and their application to adaptive Web navigation. Proc. 8th Int. Conf. on Knowledge Discovery and Data Mining, pp. 143 152. New York: ACM Press. [.pdf] Androutsopoulos, I., Koutsias, J., Chandrinos, K. and Spyropoulos, D. 2000 An experimental comparison of naive Bayesian and keyword-based anti-spam filtering with personal e-mail messages. Proc. 23rd ACM SIGIR Ann. Conf., pp. 160 167. [.pdf] Ansari, A. and Mela, C. 2003 E-customization. J. Market. Res. (In the press.) [.pdf] Apostol, T. M. 1969 Calculus, vols I and II. John Wiley & Sons, Ltd/Inc. Appelt, D., Hobbs, J., Bear, J., Israel, D., Kameyama, M., Kehler, A., Martin, D., Meyers K. and Tyson, M. 1995 SRI International FASTUS system: MUC-6 test results and analysis. Proc. 6th Message Understanding Conf. (MUC-6), pp. 237 248. San Francisco, CA: Morgan Kaufmann. [.ps.gz] [CS] Apté C., Damerau, F. and Weiss, S. M. 1994 Automated learning of decision rules for text categorization. (Special Issue on Text Categorization.) ACM Trans. Informat. Syst. 12, 233 251. [.pdf] [CS] Araújo M. D., Navarro, G. and Ziviani, N. 1997 Large text searching allowing errors. In Proc. 4th South American Workshop on String Processing (ed. R. Baeza-Yates), International Informatics Series, pp. 2 20. Ottawa: Carleton University Press. [Pub] [CS] Armstrong, R., Freitag, D., Joachims, T. and Mitchell, T. 1995 WebWatcher: a learning apprentice for the World Wide Web. Proc. 1995 AAAI Spring Symp. on Information Gathering from Heterogeneous, Distributed Environments, pp. 6 12. [.pdf] [CS] Aslam, J. A. and Montague, M. 2001 Models for metasearch. Proc. 24th Ann. Int. ACM SIGIR Conf. on Research and Development in Information Retrieval, pp. 276 284. New York: ACM Press. 
[.ps] Baldi, P. 2002 A computational theory of surprise. In Information, Coding, and Mathematics (ed. M. Blaum, P. G. Farrell and H. C. A. van Tilborg), pp. 1 25. Boston, MA: Kluwer Academic. Baldi, P. and Brunak, S. 2001 Bioinformatics: The Machine Learning Approach, 2nd edn. MIT Press, Cambridge, MA. [Pub] Baluja, S., Mittal,V. and Sukthankar, R. 2000 Applying machine learning for high performance named-entity extraction. Computat. Intell. 16, 586 595. [.pdf] [CS] Barabási, A.-L. and Albert, R. 1999 Emergence of scaling in random networks. Science 286, 509 512. [.pdf] Barabási, A.-L., Albert, R. and Jeong, H. 1999 Mean-field theory for scale-free random networks. Physica A 272, 173 187. [.pdf] [CS] Barabási, A.-L., Freeh,V.W., Jeong, H. and Brockman, J. B. 2001 Parasitic computing. Nature 412, 894 897. [.pdf] Barlow, R. and Proshan, F. 1975 Statistical Theory of Reliability and Life Testing. Austin, TX: Holt, Rinehart and Winston. Barthélémy, M. and Amaral, L. A. N. 1999 Small-world networks: evidence for a crossover picture. Phys. Rev. Lett. 82, 3180 3183. [.pdf] Bass, F. M. 1969 A new product growth model for consumer durables. Mngmt Sci. 15, 215 227. Bellman, R. E. 1957 Dynamic Programming. Princeton, NJ: Princeton University Press. [.pdf] Berger, J. O. 1985 Statistical Decision Theory And Bayesian Analysis. Springer. Bergman, M. 2000 The Deep Web: Surfacing Hidden Value. J. Electron. Publ. 7. (Available from http://www.completeplanet.com/Tutorials/DeepWeb/.) Berners-Lee, T. 1994 Universal Resource Identifiers in WWW: A Unifying Syntax for the Expression of Names and Addresses of Objects on the Network as used in theWorld-Wide Web. RFC 1630. (Available from http://www.ietf.org/rfc/rfc1630.txt.) Berners-Lee, T., Fielding, R. and Masinter, L. 1998 Uniform Resource Identifiers (URI): Generic Syntax. RFC 2396. (Available from http://www.ietf.org/rfc/rfc2396.txt.) Berry,M.W. 1992 Large scale singular value computations. J. Supercomput. Applic. 6, 13 49. Berry, M. 
W. and Browne, M. 1999 Understanding Search Engines: Mathematical Modeling and Text Retrieval. Philadelphia, PA: Society for Industrial and Applied Mathematics. Bharat, K. and Broder,A. 1998 A technique for measuring the relative size and overlap of public Web search engines. Proc. 7th Int. World Wide Web Conf., Brisbane, Australia, pp. 379 388. [Pub] Bharat, K. and Henzinger, M. R. 1998 Improved algorithms for topic distillation in a hyperlinked environment. Proc. 21st Ann Int. ACM SIGIR Conf. on Research and Development in Information Retrieval, pp. 104 111. New York: ACM Press. [.pdf] [CS] Bianchini, M., Gori, M. and Scarselli, F. 2001 Inside Google s Web page scoring system. Technical report, Dipartimento di Ingegneria dell Informazione, Università di Siena. Bikel, D. M., Miller, S., Schwartz, R. and Weischedel, R. 1997 Nymble: a high-performance learning name-finder. In Proceedings of ANLP-97, pp. 194 201. (Available from http://citeseer.ist.psu.edu/ Billsus, D. and Pazzani, M. 1998 Learning collaborative information filters. Proc. Int. Conf. on Machine Learning, pp. 46 54. San Francisco, CA: Morgan Kaufmann. [.pdf] [CS] Blahut, R. E. 1987 Principles and Practice of Information Theory. Reading, MA: Addison-Wesley. Blei, D., Ng, A. Y. and Jordan, M. I. 2002a Hierarchical Bayesian models for applications in information retrieval. In Bayesian Statistics 7 (ed. J. M. Bernardo, M. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith and M.West). Oxford University Press. [.ps] Blei, D., Ng, A.Y. and Jordan, M. I. 2002b Latent Dirichlet allocation. In Advances in Neural Information Processing Systems 14 (ed. T. Dietterich, S. Becker and Z. Ghahramani). San Francisco, CA: Morgan Kaufmann. [.pdf] [CS] Blum, A. and Mitchell, T. 1998 Combining labeled and unlabeled data with co-training. Proc. 11th Ann. Conf. on Computational Learning Theory (COLT-98), pp. 92 100. New York: ACM Press. [.ps] [CS] Bollacker, K. D., Lawrence, S. and Giles, C. L. 
1998 CiteSeer: an autonomous Web agent for automatic retrieval and identification of interesting publications. In Proc. 2nd Int. Conf. on Autonomous Agents (Agents 98) (ed. K. P. Sycara and M. Wooldridge), pp. 116 123. New York: ACM Press. [.ps.gz] [CS] Bollobás, B. 1985 Random Graphs. London: Academic Press. Bollobás, B. and de laVega,W. F. 1982 The diameter of random regular graphs. Combinatorica 2, 125 134. Bollobás, B. and Riordan, O. 2003 The diameter of a scale-free random graph. Combinatorica. (In the press.) Bollobás, B., Riordan, O., Spencer, J. and Tusnády G. 2001 The degree sequence of a scale-free random graph process. Random. Struct. Alg. 18, 279 290. Borodin, A., Roberts, G. O., Rosenthal, J. S. and Tsaparas, P. 2001 Finding authorities and hubs from link structures on the World Wide Web. Proc. 10th Int. Conf. on World Wide Web, pp. 415 429. [Pub Box, G. E. P. and Tiao, G. C. 1992 Bayesian Inference In Statistical Analysis. John Wiley & Sons, Ltd/Inc. Boyan, J., Freitag, D. and Joachims, T. 1996 A machine learning architecture for optimizing Web search engines. Proc. AAAI Workshop on Internet-Based Information Systems. [.ps.gz] [CS] Brand, M. 2002 Incremental singular value decomposition of uncertain data with missing values. Proc. European Conf. on Computer Vision (ECCV): Lecture Notes in Computer Science, pp. 707 720. Springer. [.pdf] Bray, T. 1996 Measuring the Web. In Proc. 5th Int. Conf. on the World Wide Web, 6 10 May 1996, Paris, France. Comp. Networks 28, 993 1005. [Pub] Breese, J. S., Heckerman, D. and Kadie, C. 1998 Empirical analysis of predictive algorithms for collaborative filtering. Proc. 14th Conf. on Uncertainty in Artificial Intelligence, pp. 43 52. San Francisco, CA: Morgan Kaufmann. [CS] Brewington, B. and Cybenko, G. 2000 How dynamic is the Web? Proc. 9th Int. World Wide Web Conf. Geneva: International World Wide Web Conference Committee (IW3C2). [Pub] [CS] Brin, S. and Page, L. 
1998 The anatomy of a large-scale hypertextual (Web) search engine. In Proc. 7th Int. World Wide Web Conf. (WWW7). Comp. Networks 30, 107 117. [Pub] [CS] Broder, A., Kumar, R., Maghoul, F., Raghavan, P., Rajagopalan, S., Stata, R., Tomikns, A. and Wiener, J. 2000 Graph structure in the Web. In Proc. 9th Int. World Wide Web Conf. (WWW9). Comp. Networks 33, 309 320. [Pub] Brown, L. D. 1986 Fundamentals of Statistical Exponential Families. Hayward, CA: Institute of Mathematical Statistics. Bucklin, R. E. and Sismeiro, C. 2003 A model of Web site browsing behavior estimated on clickstream data. (In the press.) [.pdf] Buntine,W. 1992 Learning classification trees. Statist. Comp. 2, 63 73. [CS] Buntine,W. 1996 A guide to the literature on learning probabilistic networks from data. IEEE Trans. Knowl. Data Engng 8, 195 210. [CS] Byrne, M. D., John, B. E., Wehrle, N. S. and Crow, D. C. 1999 The tangled Web we wove: a taskonomy of WWW use. Proc. CHI 99: Human Factors in Computing Systems, pp. 544 551. New York: ACM Press. [ Cadez, I. V., Heckerman, D., Smyth, P., Meek, C. and White, S. 2003 Model-based clustering and visualization of navigation patterns on a Web site. Data Mining Knowl. Discov. (In the press.) [.pdf] Califf, M. E. and Mooney, R. J. 1998 Relational learning of pattern-match rules for information extraction. Working Notes of AAAI Spring Symp. on Applying Machine Learning to Discourse Processing, pp. 6 11. Menlo Park, CA: AAAI Press. [.pdf] [CS] Callaway, D. S., Hopcroft, J. E., Kleinberg, J., Newman, M. E. J. and Strogatz, S. H. 2001 Are randomly grown graphs really random? Phys. Rev. E 64, 041902. [.pdf] Cardie, C. 1997 Empirical methods in information extraction. AI Mag. 18, 65 80. Carlson, J. M. and Doyle, J. 1999 Highly optimized tolerance: a mechanism for power laws in designed systems. Phys. Rev. E 60, 1412 1427. [.ps] [CS] Castelli, V. and Cover, T. 1995 On the exponential value of labeled samples. Pattern Recog. Lett. 16, 105 111. Catledge, L. D. 
and Pitkow, J. 1995 Characterizing browsing strategies in the World-Wide Web. Comp. Networks ISDN Syst. 27, 1065 1073. [Pub] [CS] Chaitin, G. J. 1987 Algorithmic Information Theory. Cambridge University Press. Chakrabarti, S., Dom, B., Gibson, D., Kleinberg, J., Kumar, S. R., Raghavan, P., Rajagopalan S. and Tomkins, A. 1999a Mining the link structure of the World Wide Web. IEEE Computer 32, 60 67. [.ps] [ Chakrabarti, S., Joshi, M. M., Punera, K. and Pennock, D. M. 2002 The structure of broad topics on the Web. Proc. 11th Int. Conf. on World Wide Web, pp. 251 262. New York:ACM Press. [.pdf] [CS] Chakrabarti, S., van den Berg, M. and Dom, B. 1999b Focused crawling: a new approach to topic-specific Web resource discovery. In Proc. 8th Int. World Wide Web Conf., Toronto. Comp. Networks 31, 11 16. [.pdf] [CS] Charniak, E. 1991 Bayesian networks without tears. AI Mag. 12, 50 63. Chen, S. F. and Goodman, J. 1996 An empirical study of smoothing techniques for language modeling. In Proc. 34th Ann. Meeting of the Association for Computational Linguistics (ed. A. Joshi and M. Palmer), pp. 310 318. San Francisco, CA: Morgan Kaufmann. [.pdf] Chickering, D. M., Heckerman, D. and Meek, C. 1997 A Bayesian approach to learning Bayesian networks with local structure. Uncertainty in Artificial Intelligence: Proc. 13th Conf. (UAI-1997), pp. 80 89. San Francisco, CA: Morgan Kaufmann. [CS] Cho, J. and Garcia-Molina, H. 2000a Estimating frequency of change. Technical Report DBPUBS-4, Stanford University. (Available via http://dbpubs.stanford.edu/pub/2000-4.) Cho, J. and Garcia-Molina, H. 2000b Synchronizing a database to improve freshness. Proc. 2000 ACM Int. Conf. on Management of Data (SIGMOD), pp. 117 128. [.pdf] [CS] Cho, J. and Garcia-Molina, H. 2002 Parallel crawlers. Proc. 11th World Wide Web Conf. (WWW11), Honolulu, Hawaii. [.pdf] [CS] Cho, J. and Ntoulas, A. 2002 Effective change detection using sampling. Proc. 28th Int. Conf. on Very Large Databases (VLDB). 
[.pdf] [CS] Cho, J., Garcia-Molina, H. and Page, L. 1998 Efficient crawling through URL ordering. In Proc. 7th Int. World Wide Web Conf. (WWW7). Comp. Networks 30, 161 172. [Pub] [.ps] [CS] Chung, E. R. K., Graham, R. L. andWilson, R. M. 1989 Quasi-random graphs. Combinatorica 9, 345 362. [.pdf] Chung, F. and Graham, R. 2002 Sparse quasi-random graphs. Combinatorica 22, 217 244. [.pdf] Chung, F. and Lu, L. 2001 The diameter of random sparse graphs. Adv. Appl. Math. 26, 257 279. [.pdf] [CS] Chung, F. and Lu, L. 2002 Connected components in random graphs with given expected degree sequences. Technical Report, Univeristy of California, San Diego. [.pdf] Chung, F., Garrett, M., Graham, R. and Shallcross, D. 2001 Distance realization problems with applications to Internet tomography. J. Comput. Syst. Sci. 63, 432 448. [.pdf] Clarke, I., Miller, S. G., Hong,T.W., Sandberg, O. and Wiley, B. 2002 Protecting free expression online with Freenet. IEEE Internet Computing 6, 40 49. [.pdf] [CS] Clarke, I., Sandberg, O., Wiley, B. and Hong, T. W. 2000 Freenet: a distributed anonymous information storage and retrieval system. In Designing Privacy Enhancing Technologies: International Workshop on Design Issues in Anonymity and Unobservability, LNCS 2009 (ed. H. Federrath), pp. 311 320. Springer. [.pdf] [CS] Cockburn, A. and McKenzie, B. 2002 What do Web users do? An empirical analysis of Web use. Int. J. Human Computer Studies 54, 903 922. [.pdf] [CS] Cohen, W. W. 1995 Text categorization and relational learning. In Proc. ICML-95, 12th Int. Conf. on Machine Learning (ed. A. Prieditis and S. J. Russell), pp. 124 132. San Francisco, CA: Morgan Kaufmann. [.ps] [CS] Cohen, W. W. 1996 Learning rules that classify e-mail. In AAAI Spring Symp. on Machine Learning in Information Access (ed. M. Hearst and H. Hirsh). 1996 Spring Symposium Series. Menlo Park, CA: AAAI Press. [.ps] [CS] Cohen, W. W. and McCallum, A. 2002 Information Extraction from the World Wide Web. 
Tutorial presented at 15th Neural Information Processing Conf. (NIPS-15). [.ps] Cohen,W.W., Schapire, R. E. and Singer,Y. 1999 Learning to order things. J. Artif. Intell. Res. 10, 243 270. [Pub] [CS] Cohen,W.W., McCallum, A. and Quass, D. 2000 Learning to understand the Web. IEEE Data Enging Bull. 23, 17 24. [.pdf] [CS] Cohn, D. and Chang, H. 2000 Learning to probabilistically identify authoritative documents. Proc. 17th Int. Conf. on Machine Learning, pp. 167 174. San Francisco, CA: Morgan Kaufmann. [.ps.gz] [CS] Cohn, D. and Hofmann, T. 2001 The missing link: a probabilistic model of document content and hypertext connectivity. In Advances in Neural Information Processing Systems (ed. T. K. Leen, T. G. Dietterich and V. Tresp). Boston, MA: MIT Press. [.pdf] [CS] Cooley, R., Mobasher, B. and Srivastava, J. 1999 Data preparation for mining World Wide Web browsing patterns. Knowl. Informat. Syst. 1, 5 32. [.pdf] [CS] Cooper, C. and Frieze, A. 2001 A general model of Web graphs. Technical Report. [.pdf] [CS] Cooper, G. F. 1990 The computational complexity of probabilistic inference using Bayesian belief networks. Art. Intell. 42, 393 405. Cooper,W. S. 1991 Some inconsistencies and misnomers in probabilistic information retrieval. Proc. 14th Ann. Int. ACM SIGIR Conf. on Research and Development in Information Retrieval, pp. 57 61. New York: ACM Press. [CS] Cormen, T. H., Leiserson, C. E., Rivest, R. L. and Stein, C. 2001 Introduction to Algorithms, 2nd edn. Cambridge, MA: MIT Press. Cortes, C. and Vapnik, V. N. 1995 Support vector networks. Machine Learning 20, 1 25. Cover, T. M. and Hart, P. E. 1967 Nearest neighbor pattern classification. IEEE Trans. Inform. Theory 13, 21 27. [.ps.gz] [CS] Cover, T. M. and Thomas, J. A. 1991 Elements of Information Theory. John Wiley & Sons, Ltd/Inc. Cox, R. T. 1964 Probability, frequency and reasonable expectation. Am. J. Phys. 14, 1 13. Crammer, K. and Singer,Y. 
2000 On the learnability and design of output codes for multiclass problems. Proc. 13 Conf. Computational Learning Theory, pp. 35 46. [.ps.gz] [CS] Craven, M., di Pasquo D., Freitag, D., McCallum A., Mitchell, T., Nigan, K. and Slattery, S. 2000 Learning to construct knowledge bases from the World Wide Web. Artif. Intel. 118(1 2), 69 113. [.pdf] Craven, M. and Slattery, S. 2001 Relational learning with statistical predicate invention: better models for hypertext. Machine Learning 43(1/2), 97 119. [.pdf] [CS] Cristianini, N. and Shawe-Taylor, J. 2000 An Introduction to Support Vector Machines. Cambridge University Press. Daley, D. J. and Gani, J. 1999 Epidemic Modeling: an Introduction. Cambridge University Press. Davison, B. D. 2000a Recognizing nepotistic links on the Web. AAAI Workshop on Artificial Intelligence for Web Search. [.pdf] [CS] Davison, B. D. 2000b Topical locality in the Web. Proc. 23rd Ann. Int. ACM SIGIR Conf. on Research and Development in Information Retrieval, pp. 272 279. [.pdf] [CS] Dawid, A. P. 1992 Applications of a general propagation algorithm for probabilistic expert systems. Stat. Comp. 2, 25 36. Day, J. 1995 The (un)revised OSI reference model. ACM SIGCOMM Computer Communication Review 25, 39 55. [Pub] Day, J. and Zimmerman, H. 1983 The OSI reference model. Proc. IEEE 71, 1334 1340. De Bra, P. and Post, R. 1994 Information retrieval in the World Wide Web: making client-based searching feasible. Proc. 1st Int. World Wide Web Conf. [.ps] [CS] Dechter, R. 1999 Bucket elimination: A unifying framework for reasoning. Artif. Intel. 113, 41 85. [.ps] [CS] Deering, S. and Hinden, R. 1998 Internet Protocol, Version 6 (IPv6) Specification. RFC 2460. (Available from http://www.ietf.org/rfc/rfc2460.txt.) Del Bimbo A. 1999 Visual Information Retrieval. San Francisco, CA: Morgan Kaufmann. Dempster, A. P., Laird, N. M. and Rubin, D. B. 1977 Maximum likelihood from incomplete data via the em algorithm. Journal Royal Statistical Society B39, 1 22. 
Deshpande, M. and Karypis, G. 2001 Selective Markov models for predicting Web-page accesses. Proc. SIAM Conf. on Data Mining SIAM Press. [.pdf] [CS] Dhillon, I. S., Fan, J. and Guan,Y. 2001 Efficient clustering of very large document collections In Data Mining for Scientific and Engineering Applications (ed. Grossman, R., Kamath, C. and Naburu R). Kluwer Academic. [.ps.gz] [CS] Dhillon, I. S. and Modha, D. S. 2001 Concept decompositions for large sparse text data using clustering. Machine Learning 42, 143 175. [.ps.gz] [CS] Dietterich, T. G. and Bakiri, G. 1995 Solving multiclass learning problems via error-correcting output codes. J. Artificial Intelligence Research 2, 263 286. [.pdf] [CS] Dijkstra, E. D. 1959A note on two problem in connexion with graphs. Numerische Mathematik 1, 269 271. Diligenti, M., Coetzee, F., Lawrence, S., Giles, C. L. and Gori, M. 2000 Focused crawling using context graphs. In VLDB 2000, Proc. 26th Int. Conf. on Very Large Data Bases, 10 14 September 2000, Cairo, Egypt (ed. A. El Abbadi, M. L. Brodie, S. Chakravarthy, U. Dayal, N. Kamel, G. Schlageter and K.Y. Whang), pp. 527 534. Los Altos, CA: Morgan Kaufmann. [.pdf] [CS] Dill, S., Kumar, S. R., McCurley, K. S., Rajagopalan, S., Sivakumar, D. and Tomkins, A. 2001 Self-similarity in the Web. Proc. VLDB, pp. 69 78. [Pub] [CS] Domingos, P. and Pazzani, M. 1997 On the optimality of the simple Bayesian classifier under zero-one loss. Machine Learning 29, 103 130. [.pdf] [CS] Domingos, P. and Richardson, M. 2001 Mining the network value of customers. Proc. ACM 7th Int. Conf. on Knowledge Discovery and Data Mining, pp. 57 66. New York: ACM Press. [.pdf] [CS] Dreilinger, D. and Howe, A. E. 1997 Experiences with selecting search engines using metasearch. ACM Trans. Informat. Syst. 15, 195 222. [.pdf] [CS] Drucker, H.,Vapnik,V. N. and Wu, D. 1999 Support vector machines for spam categorization. IEEE Trans. Neural Networks 10, 1048 1054. Duda, R. O. and Hart, P. E. 
1973 Pattern Classification and Scene Analysis. John Wiley & Sons, Ltd/Inc. Dumais, S., Platt, J., Heckerman, D. and Sahami, M. 1998 Inductive learning algorithms and representations for text categorization. In Proc. 7th Int. Conf. on Information and Knowledge Management, pp. 148 155. New York: ACM Press. [.pdf] Jones, K. S. and Willett, P. (eds) 1997 Readings in information retrieval. San Mateo, CA: Morgan Kaufmann. Edwards, J., McCurley, K. and Tomlin, J. 2001 An adaptive model for optimizing performance of an incremental Web crawler. Proc. 10th Int. World Wide Web Conf., pp. 106 113. [Pub] [CS] Elias, P. 1975 Universal codeword sets and representations of the integers. IEEE Trans. Inform. Theory 21, 194 203. Erdos, P. and Rényi, A. 1959 On random graphs. Publ. Math. Debrecen 6, 290 291. Erdos, P. and Rényi, A. 1960 On the evolution of random graphs. Magy. Tud. Akad. Mat. Kut. Intez. Kozl. 5, 17 61. Everitt B. S. 1984 An Introduction to Latent Variable Models. London: Chapman & Hall. Evgeniou, T., Pontil, M. and Poggio, T. 2000 Regularization networks and support vector machines. Adv. Comput. Math. 13, 1 50. [.ps] [CS] Fagin, R., Karlin, A., Kleinberg, J., Raghavan, P., Rajagopalan, S., Rubinfeld, R., Sudan., M. and Tomkins, A. 2000 Random walks with back buttons . Proc. ACM Symp. on Theory of Computing, pp. 484 493. New York: ACM Press. [.ps] [CS] Faloutsos, C. and Christodoulakis, S. 1984 Signature files: an access method for documents and its analytical performance evaluation. ACM Trans. Informat. Syst. 2, 267 288. Faloutsos, M., Faloutsos, P. and Faloutsos, C. 1999 On power-law relationships of the Internet topology. Proc. ACM SIGCOMM Conf., Cambridge, MA, 251 262. [.pdf] [CS] Feller, W. 1971 An Introduction to Probability Theory and its Applications, 2nd edn, vol. 2. John Wiley & Sons, Ltd/Inc. Fermi, E. 1949 On the origin of the cosmic radiation. Phys. Rev. 75, 1169 1174. Fielding, R., Gettys, J., Mogul, J., Frystyk, H., Masinter, L., Leach, P. 
and Berners-Lee, T. 1999 Hypertext Transfer Protocol: HTTP/1.1. RFC 2616. (Available from http://www.ietf.org/rfc/ Fienberg, S. E., Johnson, M. A. and Junker, B. J. 1999 Classical multilevel and Bayesian approaches to population size estimation using multiple lists. J. Roy. Statist. Soc. A162, 383 406. Flake, G.W., Lawrence, S. and Giles, C. L. 2000 Efficient identification of Web communities. Proc. 6th ACMSIGKDD Int. Conf. on Knowledge Discovery and Data Mining, pp. 150 160. New York: ACM Press. [ .pdf] [CS] Flake, G.W., Lawrence, S., Giles, C. L. and Coetzee, F. 2002 Self-organization and identification of Web communities. IEEE Computer 35, 66 71. [.pdf] Fox, C. 1992 Lexical analysis and stoplists. In Information Retrieval: Data Structures and Algorithms (ed. W. B. Frakes and R. Baeza-Yates), ch. 7. Englewood Cliffs, NJ: Prentice Hall. Fraley, C. and Raftery, A. E. 2002 Model-based clustering, discriminant analysis, and density estimation. J. Am. Statist. Assoc. 97, 611 631. [.pdf] [CS] Freitag, D. 1998 Information extraction from HTML: Application of a general machine learning approach. Proc. AAAI-98, pp. 517 523. Menlo Park, CA: AAAI Press. [.ps.gz] [CS] Freitag, D. and McCallum, A. 2000 Information extraction with HMM structures learned by stochastic optimization. Proc. AAAI/IAAI, pp. 584 589. [.ps.gz] [CS] Freund,Y. and Schapire, R. E. 1996 Experiments with a new boosting algorithm. In Proc. 13th Int. Conf. on Machine Learning, pp. 148 146. San Francisco, CA: Morgan Kaufmann. [.pdf] [CS] Frey, B. J. 1998 Graphical Models for Machine Learning and Digital Communication. MIT Press. Friedman, N. and Goldszmidt, M. 1996 Learning Bayesian networks with local structure. In Proc. 12th Conf. on Uncertainty in Artificial Intelligence, Portland, Oregon (ed. E. Horwitz and F. Jensen), pp. 274 282. San Francisco, CA: Morgan Kaufmann. [.pdf] [CS] Friedman, N., Getoor, L., Koller, D. and Pfeffer, A. 1999 Learning probabilistic relational models. In Proc. 16th Int. 
Joint Conf. on Artificial Intelligence (IJCAI-99) (ed. D. Thomas), vol. 2 , pp. 1300 1309. San Francisco, CA: Morgan Kaufmann. [.ps] [CS] Fuhr, N. 1992 Probabilistic models in information retrieval. Comp. J. 35, 243 255. [.ps.gz] [CS] Galambos, J. 1987 The Asymptotic Theory of Extreme Order Statistics, 2nd edn. Malabar, FL: Robert E. Krieger. Garfield, E. 1955 Citation indexes for science: a new dimension in documentation through association of ideas. Science 122, 108 111. Garfield, E. 1972 Citation analysis as a tool in journal evaluation. Science 178, 471 479. [.pdf] Garner, R. 1967 A Computer Oriented, Graph Theoretic Analysis of Citation Index Structures. Philadelphia, PA: Drexel University Press. Gelbukh, A. and Sidorov, G. 2001 Zipf and Heaps Laws coefficients depend on language. Proc. 2001 Conf. on Intelligent Text Processing and Computational Linguistics, pp. 332 335. Springer. [.html] Gelman, A., Carlin, J. B., Stern, H. S. and Rubin, D. B. 1995 Bayesian Data Analysis. London: Chapman & Hall. Ghahramani, Z. 1998 Learning dynamic Bayesian networks. In Adaptive Processing of Sequences and Data Structures. Lecture Notes in Artifical Intelligence (ed. M. Gori and C. L. Giles), pp. 168 197. Springer. [.ps.gz] [CS] Ghahramani, Z. and Jordan, M. I. 1997 Factorial hidden Markov models. Machine Learning 29, 245 273. [.ps.gz] [CS] Ghani, R. 2000 Using error-correcting codes for text classification. Proc. 17th Int. Conf. on Machine Learning, pp. 303 310. San Francisco, CA: Morgan Kaufmann. [.pdf] [CS] Gibson, D., Kleinberg, J. and Raghavan, P. 1998 Inferring Web communities from link topology. Proc. 9th ACM Conf. on Hypertext and Hypermedia : Links, Objects, Time and Space structure in Hypermedia Systems, pp. 225 234. New York: ACM Press. [.pdf] [CS] Gilbert, E. N. 1959 Random graphs. Ann. Math. Statist. 30, 1141 1144. Gilbert, N. 1997 A simulation of the structure of academic science. Sociological Research Online 2. 
(Available from http://www.socresonline.org.uk/socresonline/2/2/3.html.)
Gilks, W. R., Thomas, A. and Spiegelhalter, D. J. 1994 A language and program for complex Bayesian modelling. The Statistician 43, 69–78.
Greenberg, S. 1993 The Computer User as Toolsmith: The Use, Reuse, and Organization of Computer-Based Tools. Cambridge University Press.
Guermeur, Y., Elisseeff, A. and Paugam-Moisy, H. 2000 A new multi-class SVM based on a uniform convergence result. In Proc. IJCNN: Int. Joint Conf. on Neural Networks, vol. 4, pp. 4183–4188. Piscataway, NJ: IEEE Press.
Han, E. H., Karypis, G. and Kumar, V. 2001 Text categorization using weight-adjusted k-nearest neighbor classification. In Proc. PAKDD-01, 5th Pacific Asia Conference on Knowledge Discovery and Data Mining (ed. D. Cheung, Q. Li and G. Williams). Lecture Notes in Computer Science Series, vol. 2035, pp. 53–65. Springer.
Han, J. and Kamber, M. 2001 Data Mining: Concepts and Techniques. San Francisco, CA: Morgan Kaufmann.
Hand, D., Mannila, H. and Smyth, P. 2001 Principles of Data Mining. Cambridge, MA: MIT Press.
Harman, D., Baeza-Yates, R., Fox, E. and Lee, W. 1992 Inverted files. In Information Retrieval: Data Structures and Algorithms (ed. W. B. Frakes and R. A. Baeza-Yates), pp. 28–43. Englewood Cliffs, NJ: Prentice Hall.
Hastie, T., Tibshirani, R. and Friedman, J. 2001 The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer.
Heckerman, D. 1998 A tutorial on learning with Bayesian networks. In Learning in Graphical Models (ed. M. Jordan). Kluwer.
Heckerman, D., Chickering, D. M., Meek, C., Rounthwaite, R. and Kadie, C. 2000 Dependency networks for inference, collaborative filtering, and data visualization. J. Mach. Learn. Res. 1, 49–75.
Hersovici, M., Jacovi, M., Maarek, Y. S., Pelleg, D., Shtalhaim, M. and Ur, S. 1998 The shark-search algorithm. An application: tailored Web site mapping. In Proc. 7th Int. World-Wide Web Conf. Comp. Networks 30, 317–326.
Heydon, A. and Najork, M. 1999 Mercator: a scalable, extensible Web crawler. Proc. World Wide Web Conf. 2, 219–229. (Available from http://research.compaq.com/SRC/mercator/research.html.)
Heydon, A. and Najork, M. 2001 High-performance Web crawling. Technical Report SRC 173. Compaq Systems Research Center.
Hofmann, T. 1999 Probabilistic latent semantic indexing. Proc. 22nd Ann. Int. ACM SIGIR Conf. on Research and Development in Information Retrieval, pp. 50–57. New York: ACM Press.
Hofmann, T. 2001 Unsupervised learning by probabilistic latent semantic analysis. Machine Learning 42, 177–196.
Hofmann, T. and Puzicha, J. 1999 Latent class models for collaborative filtering. In Proc. 16th Int. Joint Conf. on Artificial Intelligence, pp. 688–693.
Hofmann, T., Puzicha, J. and Jordan, M. I. 1999 Learning from dyadic data. In Advances in Neural Information Processing Systems 11: Proc. 1998 Conf. (ed. M. S. Kearns, S. A. Solla and D. Cohen), pp. 466–472. Cambridge, MA: MIT Press.
Huberman, B. A. and Adamic, L. A. 1999 Growth dynamics of the World Wide Web. Nature 401, 131.
Huberman, B. A., Pirolli, P. L. T., Pitkow, J. E. and Lukose, R. M. 1998 Strong regularities in World Wide Web surfing. Science 280, 95–97.
Hunter, J. and Shotland, R. 1974 Treating data collected by the small-world method as a Markov process. Social Forces 52, 321.
ISO 1986 Information Processing, Text and Office Systems, Standard Generalized Markup Language (SGML), ISO 8879, 1st edn. Geneva, Switzerland: International Organization for Standardization.
Itti, L. and Koch, C. 2001 Computational modelling of visual attention. Nature Rev. Neurosci. 2, 194–203.
Jaakkola, T. S. and Jordan, M. I. 1997 Recursive algorithms for approximating probabilities in graphical models. In Advances in Neural Information Processing Systems (ed. M. C. Mozer, M. I. Jordan and T. Petsche), vol. 9, pp. 487–493. Cambridge, MA: MIT Press.
Jaeger, M.
1997 Relational Bayesian networks. In Proc. 13th Conf. on Uncertainty in Artificial Intelligence (UAI-97) (ed. D. Geiger and P. P. Shenoy), pp. 266–273. San Francisco, CA: Morgan Kaufmann.
Janiszewski, C. 1998 The influence of display characteristics on visual exploratory behavior. J. Consumer Res. 25, 290–301.
Jansen, B. J., Spink, A., Bateman, J. and Saracevic, T. 1998 Real-life information retrieval: a study of user queries on the Web. SIGIR Forum 32, 5–17.
Jaynes, E. T. 1986 Bayesian methods: general background. In Maximum Entropy and Bayesian Methods in Statistics (ed. J. H. Justice), pp. 1–25. Cambridge University Press.
Jaynes, E. T. 2003 Probability Theory: The Logic of Science. Cambridge University Press.
Jensen, F. V. 1996 An Introduction to Bayesian Networks. Springer.
Jensen, F. V., Lauritzen, S. L. and Olesen, K. G. 1990 Bayesian updating in causal probabilistic networks by local computations. Comput. Statist. Q. 4, 269–282.
Jeong, H., Tombor, B., Albert, R., Oltvai, Z. and Barabási, A.-L. 2000 The large-scale organization of metabolic networks. Nature 407, 651–654.
Joachims, T. 1997 A probabilistic analysis of the Rocchio algorithm with TFIDF for text categorization. In Proc. 14th Int. Conf. on Machine Learning, pp. 143–151. San Francisco, CA: Morgan Kaufmann.
Joachims, T. 1998 Text categorization with support vector machines: learning with many relevant features. In Proc. 10th European Conf. on Machine Learning, pp. 137–142. Springer.
Joachims, T. 1999a Making large-scale SVM learning practical. In Advances in Kernel Methods: Support Vector Learning (ed. B. Schölkopf, C. J. C. Burges and A. J. Smola), pp. 169–184. Cambridge, MA: MIT Press.
Joachims, T. 1999b Transductive inference for text classification using support vector machines. Proc. 16th Int. Conf. on Machine Learning (ICML), pp. 200–209. San Francisco, CA: Morgan Kaufmann.
Joachims, T.
2002 Learning to Classify Text Using Support Vector Machines. Kluwer.
Jordan, M. I. (ed.) 1999 Learning in Graphical Models. Cambridge, MA: MIT Press.
Jordan, M. I., Ghahramani, Z. and Saul, L. K. 1997 Hidden Markov decision trees. In Advances in Neural Information Processing Systems (ed. M. C. Mozer, M. I. Jordan and T. Petsche), vol. 9, pp. 501–507. Cambridge, MA: MIT Press.
Jumarie, G. 1990 Relative Information. Springer.
Kask, K. and Dechter, R. 1999 Branch and bound with mini-bucket heuristics. Proc. Int. Joint Conf. on Artificial Intelligence (IJCAI-99), pp. 426–433.
Kessler, M. 1963 Bibliographic coupling between scientific papers. Am. Documentat. 14, 10–25.
Killworth, P. and Bernard, H. 1978 Reverse small world experiment. Social Networks 1, 159.
Kira, K. and Rendell, L. A. 1992 A practical approach to feature selection. Proc. 9th Int. Conf. on Machine Learning, pp. 249–256. San Francisco, CA: Morgan Kaufmann.
Kittler, J. 1986 Feature selection and extraction. In Handbook of Pattern Recognition and Image Processing (ed. T. Y. Young and K. S. Fu), ch. 3. Academic.
Kleinberg, J. 1998 Authoritative sources in a hyperlinked environment. Proc. 9th Ann. ACM–SIAM Symp. on Discrete Algorithms, pp. 668–677. New York: ACM Press. (A preliminary version of this paper appeared as IBM Research Report RJ 10076, May 1997.)
Kleinberg, J. 1999 Hubs, authorities, and communities. ACM Comput. Surv. 31, 5.
Kleinberg, J. 2000a Navigation in a small world. Nature 406, 845.
Kleinberg, J. 2000b The small-world phenomenon: an algorithmic perspective. Proc. 32nd ACM Symp. on the Theory of Computing.
Kleinberg, J. 2001 Small-world phenomena and the dynamics of information. Advances in Neural Information Processing Systems (NIPS), vol. 14. Cambridge, MA: MIT Press.
Kleinberg, J. and Lawrence, S. 2001 The structure of the Web. Science 294, 1849–1850.
Kleinberg, J., Kumar, R., Raghavan, P., Rajagopalan, S.
and Tomkins, A. 1999 The Web as a graph: measurements, models, and methods. Proc. Int. Conf. on Combinatorics and Computing. Lecture Notes in Computer Science, vol. 1627. Springer.
Kohavi, R. and John, G. 1997 Wrappers for feature subset selection. Artif. Intell. 97, 273–324.
Koller, D. and Sahami, M. 1996 Toward optimal feature selection. Proc. 13th Int. Conf. on Machine Learning, pp. 284–292.
Koller, D. and Sahami, M. 1997 Hierarchically classifying documents using very few words. Proc. 14th Int. Conf. on Machine Learning (ICML-97), pp. 170–178. San Francisco, CA: Morgan Kaufmann.
Korte, C. and Milgram, S. 1978 Acquaintance networks between racial groups: application of the small world method. J. Pers. Social Psych. 15, 101.
Koster, M. 1995 Robots in the Web: threat or treat? ConneXions 9(4).
Krishnamurthy, B., Mogul, J. C. and Kristol, D. M. 1999 Key differences between HTTP/1.0 and HTTP/1.1. In Proc. 8th Int. World-Wide Web Conf. Elsevier.
Kruger, A., Giles, C. L., Coetzee, F., Glover, E. J., Flake, G. W., Lawrence, S. and Omlin, C. W. 2000 DEADLINER: building a new niche search engine. In Proc. 2000 ACM CIKM Int. Conf. on Information and Knowledge Management (CIKM-00) (ed. A. Agah, J. Callan and E. Rundensteiner), pp. 272–281. New York: ACM Press.
Kullback, S. 1968 Information Theory and Statistics. New York: Dover.
Kumar, S. R., Raghavan, P., Rajagopalan, S. and Tomkins, A. 1999a Extracting large-scale knowledge bases from the Web. Proc. 25th VLDB Conf., pp. 639–650.
Kumar, S. R., Raghavan, P., Rajagopalan, S. and Tomkins, A. 1999b Trawling the Web for emerging cyber communities. In Proc. 8th World Wide Web Conf. Comp. Networks 31, 11–16.
Kumar, S. R., Raghavan, P., Rajagopalan, S., Sivakumar, D., Tomkins, A. and Upfal, E. 2000 Stochastic models for the Web graph. Proc. 41st IEEE Ann. Symp. on the Foundations of Computer Science, pp. 57–65.
Kushmerick, N., Weld, D. S. and Doorenbos, R. B. 1997 Wrapper induction for information extraction. In Proc. Int. Joint Conf. on Artificial Intelligence (IJCAI), pp. 729–737.
Lafferty, J., McCallum, A. and Pereira, F. 2001 Conditional random fields: probabilistic models for segmenting and labeling sequence data. Proc. 18th Int. Conf. on Machine Learning, pp. 282–289. San Francisco, CA: Morgan Kaufmann.
Lam, W. and Ho, C. Y. 1998 Using a generalized instance set for automatic text categorization. In Proc. SIGIR-98, 21st ACM Int. Conf. on Research and Development in Information Retrieval (ed. W. B. Croft, A. Moffat, C. J. van Rijsbergen, R. Wilkinson and J. Zobel), pp. 81–89. New York: ACM Press.
Lang, K. 1995 NewsWeeder: learning to filter netnews. Proc. 12th Int. Conf. on Machine Learning (ed. A. Prieditis and S. J. Russell), pp. 331–339. San Francisco, CA: Morgan Kaufmann.
Langley, P. 1994 Selection of relevant features in machine learning. Proc. AAAI Fall Symp. on Relevance, pp. 140–144.
Lau, T. and Horvitz, E. 1999 Patterns of search: analyzing and modeling Web query refinement. Proc. 7th Int. Conf. on User Modeling, pp. 119–128. Springer.
Lauritzen, S. L. 1996 Graphical Models. Oxford University Press.
Lauritzen, S. L. and Spiegelhalter, D. J. 1988 Local computations with probabilities on graphical structures and their application to expert systems. J. Roy. Statist. Soc. B50, 157–224.
Lawrence, S. 2001 Online or invisible? Nature 411, 521.
Lawrence, S. and Giles, C. L. 1998a Context and page analysis for improved Web search. IEEE Internet Computing 2, 38–46.
Lawrence, S. and Giles, C. L. 1998b Searching the World Wide Web. Science 280, 98–100.
Lawrence, S. and Giles, C. L. 1999a Accessibility of information on the Web. Nature 400, 107–109.
Lawrence, S., Giles, C. L. and Bollacker, K. 1999 Digital libraries and autonomous citation indexing.
IEEE Computer 32, 67–71.
Leek, T. R. 1997 Information extraction using hidden Markov models. Master's thesis, University of California, San Diego.
Lempel, R. and Moran, S. 2001 SALSA: the stochastic approach for link-structure analysis. ACM Trans. Informat. Syst. 19, 131–160.
Letsche, T. A. and Berry, M. W. 1997 Large-scale information retrieval with latent semantic indexing. Information Sciences 100, 105–137.
Lewis, D. D. 1992 An evaluation of phrasal and clustered representations on a text categorization task. Proc. 15th Ann. Int. ACM SIGIR Conf. on Research and Development in Information Retrieval, pp. 37–50. New York: ACM Press.
Lewis, D. D. 1997 Reuters-21578 text categorization test collection. (Documentation and data available at http://www.daviddlewis.com/resources/testcollections/reuters21578/.)
Lewis, D. D. 1998 Naive Bayes at forty: the independence assumption in information retrieval. Proc. 10th European Conf. on Machine Learning, pp. 4–15. Springer.
Lewis, D. D. and Catlett, J. 1994 Heterogeneous uncertainty sampling for supervised learning. In Proc. ICML-94, 11th Int. Conf. on Machine Learning (ed. W. W. Cohen and H. Hirsh), pp. 148–156. San Francisco, CA: Morgan Kaufmann.
Lewis, D. D. and Gale, W. A. 1994 A sequential algorithm for training text classifiers. Proc. 17th Ann. Int. ACM SIGIR Conf. on Research and Development in Information Retrieval, pp. 3–12. Springer.
Lewis, D. D. and Ringuette, M. 1994 Comparison of two learning algorithms for text categorization. In Proc. 3rd Ann. Symp. on Document Analysis and Information Retrieval, pp. 81–93.
Li, S., Montgomery, A., Srinivasan, K. and Liechty, J. L. 2002 Predicting online purchase conversion using Web path analysis. Graduate School of Industrial Administration, Carnegie Mellon University, Pittsburgh, PA. (Available from http://www.andrew.cmu.edu/~alm3/papers/purchase%20conversion.pdf.)
Li, W.
1992 Random texts exhibit Zipf's-law-like word frequency distribution. IEEE Trans. Inform. Theory 38, 1842–1845.
Lieberman, H. 1995 Letizia: an agent that assists Web browsing. In Proc. 14th Int. Joint Conf. on Artificial Intelligence (IJCAI-95) (ed. C. S. Mellish), pp. 924–929. San Mateo, CA: Morgan Kaufmann.
Little, R. J. A. and Rubin, D. B. 1987 Statistical Analysis with Missing Data. John Wiley & Sons.
Liu, H. and Motoda, H. 1998 Feature Selection for Knowledge Discovery and Data Mining. Kluwer Academic.
Lovins, J. B. 1968 Development of a stemming algorithm. Mech. Transl. Comput. Linguistics 11, 22–31.
McCallum, A. and Nigam, K. 1998 A comparison of event models for naive Bayes text classification. AAAI/ICML-98 Workshop on Learning for Text Categorization, pp. 41–48. Menlo Park, CA: AAAI Press.
McCallum, A., Freitag, D. and Pereira, F. 2000a Maximum entropy Markov models for information extraction and segmentation. Proc. 17th Int. Conf. on Machine Learning, pp. 591–598. San Francisco, CA: Morgan Kaufmann.
McCallum, A., Nigam, K. and Ungar, L. H. 2000b Efficient clustering of high-dimensional data sets with application to reference matching. Proc. 6th ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining, pp. 169–178. New York: ACM Press.
McCallum, A. K., Nigam, K., Rennie, J. and Seymore, K. 2000c Automating the construction of Internet portals with machine learning. Information Retrieval 3, 127–163.
McCann, K., Hastings, A. and Huxel, G. R. 1998 Weak trophic interactions and the balance of nature. Nature 395, 794–798.
McClelland, J. L. and Rumelhart, D. E. 1986 Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Cambridge, MA: MIT Press.
McEliece, R. J. 1977 The Theory of Information and Coding. Reading, MA: Addison-Wesley.
McEliece, R. J. and Yildirim, M.
2002 Belief propagation on partially ordered sets. In Mathematical Systems Theory in Biology, Communications, and Finance (ed. D. Gilliam and J. Rosenthal). Institute for Mathematics and its Applications, University of Minnesota.
McEliece, R. J., MacKay, D. J. C. and Cheng, J. F. 1997 Turbo decoding as an instance of Pearl's belief propagation algorithm. IEEE J. Select. Areas Commun. 16, 140–152.
MacKay, D. J. C. and Peto, L. C. B. 1995a A hierarchical Dirichlet language model. Natural Language Engng 1, 1–19.
McLachlan, G. and Peel, D. 2000 Finite Mixture Models. John Wiley & Sons.
Mahmoud, H. M. and Smythe, R. T. 1995 A survey of recursive trees. Theory Prob. Math. Statist. 51, 1–27.
Manber, U. and Myers, G. 1990 Suffix arrays: a new method for on-line string searches. Proc. 1st Ann. ACM–SIAM Symp. on Discrete Algorithms, pp. 319–327. Philadelphia, PA: Society for Industrial and Applied Mathematics.
Mandelbrot, B. 1977 Fractals: Form, Chance, and Dimension. New York: Freeman.
Marchiori, M. 1997 The quest for correct information on the Web: hyper search engines. In Proc. 6th Int. World-Wide Web Conf., Santa Clara, CA. Comp. Networks 29, 1225–1235.
Mark, E. F. 1988 Searching for information in a hypertext medical handbook. Commun. ACM 31, 880–886.
Maron, M. E. 1961 Automatic indexing: an experimental inquiry. J. ACM 8, 404–417.
Maslov, S. and Sneppen, K. 2002 Specificity and stability in topology of protein networks. Science 296, 910–913.
Melnik, S., Raghavan, S., Yang, B. and Garcia-Molina, H. 2001 Building a distributed full-text index for the Web. ACM Trans. Informat. Syst. 19, 217–241.
Mena, J. 1999 Data Mining your Website. Boston, MA: Digital Press.
Menczer, F. 1997 ARACHNID: adaptive retrieval agents choosing heuristic neighborhoods for information discovery. Proc. 14th Int. Conf. on Machine Learning, pp. 227–235. San Francisco, CA: Morgan Kaufmann.
Menczer, F. and Belew, R. K.
2000 Adaptive retrieval agents: internalizing local context and scaling up to the Web. Machine Learning 39, 203–242.
Milgram, S. 1967 The small world problem. Psychology Today 1, 61.
Milo, R., Shen-Orr, S., Itzkovitz, S., Kashtan, N., Chklovskii, D. and Alon, U. 2002 Network motifs: simple building blocks of complex networks. Science 298, 824–827.
Mitchell, T. 1997 Machine Learning. McGraw-Hill.
Mitzenmacher, M. 2002 A brief history of generative models for power law and lognormal distributions. Technical Report, Harvard University, Cambridge, MA.
Moffat, A. and Zobel, J. 1996 Self-indexing inverted files for fast text retrieval. ACM Trans. Informat. Syst. 14, 349–379.
Montgomery, A. L. 2001 Applying quantitative marketing techniques to the Internet. Interfaces 30, 90–108.
Mooney, R. J. and Roy, L. 2000 Content-based book recommending using learning for text categorization. Proc. 5th ACM Conf. on Digital Libraries, pp. 195–204. New York: ACM Press.
Mori, S., Suen, C. and Yamamoto, K. 1992 Historical review of OCR research and development. Proc. IEEE 80, 1029–1058.
Moura, E. S., Navarro, G. and Ziviani, N. 1997 Indexing compressed text. In Proc. 4th South American Workshop on String Processing (ed. R. Baeza-Yates), International Informatics Series, pp. 95–111. Ottawa: Carleton University Press.
Najork, M. and Wiener, J. 2001 Breadth-first search crawling yields high-quality pages. Proc. 10th Int. World Wide Web Conf., pp. 114–118. Elsevier.
Neal, R. M. 1992 Connectionist learning of belief networks. Artif. Intell. 56, 71–113.
Neville-Manning, C. and Reed, T. 1996 A PostScript to plain text converter. Technical report. (Available from http://www.nzdl.org/html/prescript.html.)
Newman, M. E. J., Moore, C. and Watts, D. J. 2000 Mean-field solution of the small-world network model. Phys. Rev. Lett. 84, 3201–3204.
Ng, A. Y. and Jordan, M. I.
2002 On discriminative vs. generative classifiers: a comparison of logistic regression and naive Bayes. Advances in Neural Information Processing Systems 14: Proc. 2001 Neural Information Processing Systems (NIPS) Conf. MIT Press.
Ng, A. Y., Zheng, A. X. and Jordan, M. I. 2001 Stable algorithms for link analysis. Proc. 24th Ann. Int. ACM SIGIR Conf. on Research and Development in Information Retrieval, pp. 258–266. New York: ACM Press.
Nigam, K. and Ghani, R. 2000 Analyzing the effectiveness and applicability of co-training. In Proc. 2000 ACM CIKM Int. Conf. on Information and Knowledge Management (CIKM-00) (ed. A. Agah, J. Callan and E. Rundensteiner), pp. 86–93. New York: ACM Press.
Nigam, K., McCallum, A., Thrun, S. and Mitchell, T. 2000 Text classification from labeled and unlabeled documents using EM. Machine Learning 39, 103–134.
Nothdurft, H. 2000 Salience from feature contrast: additivity across dimensions. Vision Res. 40, 1183–1201.
Olshausen, B. A., Anderson, C. H. and Essen, D. C. V. 1993 A neurobiological model of visual attention and invariant pattern recognition based on dynamic routing of information. J. Neurosci. 13, 4700.
Oltvai, Z. N. and Barabási, A.-L. 2002 Life's complexity pyramid. Science 298, 763–764.
O'Neill, E. T., McClain, P. D. and Lavoie, B. F. 1997 A methodology for sampling the World Wide Web. Annual Review of OCLC Research. (Available from http://www.oclc.org/research/publications/arr/1997/.)
Page, L., Brin, S., Motwani, R. and Winograd, T. 1998 The PageRank citation ranking: bringing order to the Web. Technical report, Stanford University. (Available at http://wwwdb.stanford.edu/~backrub/.)
Paine, R. T. 1992 Food-web analysis through field measurements of per capita interaction strength. Nature 355, 73–75.
Pandurangan, G., Raghavan, P. and Upfal, E. 2002 Using PageRank to characterize Web structure. Proc. 8th Ann. Int. Computing and Combinatorics Conf. (COCOON).
Lecture Notes in Computer Science, vol. 2387, p. 330. Springer.
Papineni, K. 2001 Why inverse document frequency? Proc. North American Association for Computational Linguistics, pp. 25–32.
Passerini, A., Pontil, M. and Frasconi, P. 2002 From margins to probabilities in multiclass learning problems. In Proc. 15th European Conf. on Artificial Intelligence (ed. F. van Harmelen). Frontiers in Artificial Intelligence and Applications Series. Amsterdam: IOS Press.
Pazzani, M. 1996 Searching for dependencies in Bayesian classifiers. In Proc. 5th Int. Workshop on Artificial Intelligence and Statistics, pp. 239–248. Springer.
Pearl, J. 1988 Probabilistic Reasoning in Intelligent Systems. San Mateo, CA: Morgan Kaufmann.
Pennock, D. M., Flake, G. W., Lawrence, S., Glover, E. J. and Giles, C. L. 2002 Winners don't take all: characterizing the competition for links on the Web. Proc. Natl Acad. Sci. 99, 5207–5211.
Perline, R. 1996 Zipf's law, the central limit theorem, and the random division of the unit interval. Phys. Rev. E 54, 220–223.
Pew Internet Project Report 2002 Search engines. (Available at http://www.pewinternet.org/reports/toc.asp?Report=64.)
Phadke, A. G. and Thorp, J. S. 1988 Computer Relaying for Power Systems. John Wiley & Sons.
Philips, T. K., Towsley, D. F. and Wolf, J. K. 1990 On the diameter of a class of random graphs. IEEE Trans. Inform. Theory 36, 285–288.
Pimm, S. L., Lawton, J. H. and Cohen, J. E. 1991 Food web patterns and their consequences. Nature 350, 669–674.
Pittel, B. 1994 Note on the heights of random recursive trees and random m-ary search trees. Random Struct. Algorithms 5, 337–347.
Platt, J. 1999 Fast training of support vector machines using sequential minimal optimization. In Advances in Kernel Methods: Support Vector Learning (ed. B. Schölkopf, C. J. C. Burges and A. J. Smola), pp. 185–208. Cambridge, MA: MIT Press.
Popescul, A., Ungar, L. H., Pennock, D. M. and Lawrence, S.
2001 Probabilistic models for unified collaborative and content-based recommendation in sparse-data environments. Proc. 17th Int. Conf. on Uncertainty in Artificial Intelligence, pp. 437–444. San Francisco, CA: Morgan Kaufmann.
Porter, M. 1980 An algorithm for suffix stripping. Program 14, 130–137.
Quinlan, J. R. 1986 Induction of decision trees. Machine Learning 1, 81–106.
Quinlan, J. R. 1990 Learning logical definitions from relations. Machine Learning 5, 239–266.
Rafiei, D. and Mendelzon, A. 2000 What is this page known for? Computing Web page reputations. Proc. 9th World Wide Web Conf.
Raggett, D., Hors, A. L. and Jacobs, I. (eds) 1999 HTML 4.01 Specification. W3 Consortium Recommendation. (Available from http://www.w3.org/TR/html4/.)
Raskinis, I. M. G. and Ganascia, J. 1996 Text categorization: a symbolic approach. Proc. 5th Ann. Symp. on Document Analysis and Information Retrieval. New York: ACM Press.
Redner, S. 1998 How popular is your paper? An empirical study of the citation distribution. Euro. Phys. J. B4, 131–134.
Resnick, P., Iacovou, N., Suchak, M., Bergstrom, P. and Riedl, J. 1994 GroupLens: an open architecture for collaborative filtering of netnews. Proc. 9th ACM Conf. on Computer-Supported Cooperative Work, pp. 175–186. New York: ACM Press.
Ripeanu, M., Foster, I. and Iamnitchi, A. 2002 Mapping the Gnutella network: properties of large-scale peer-to-peer systems and implications for system design. IEEE Internet Comput. J. 6, 99–100.
Roberts, M. J. and Mahesh, S. M. 1999 Hotmail. Technical report, Harvard University, Cambridge, MA. Case 899-185, Harvard Business School Publishing.
Robertson, S. E. 1977 The probability ranking principle in IR. J. Documentation 33, 294–304. (Also reprinted in Jones and Willett (1997), pp. 281–286.)
Robertson, S. E. and Spärck Jones, K. 1976 Relevance weighting of search terms. J. Am. Soc. Informat. Sci. 27, 129–146.
Robertson, S. E. and Walker, S.
1994 Some simple effective approximations to the 2-Poisson model for probabilistic weighted retrieval. Proc. 17th Ann. Int. ACM SIGIR Conf. on Research and Development in Information Retrieval, pp. 232–241. Springer.
Rosenblatt, F. 1958 The perceptron: a probabilistic model for information storage and organization in the brain. Psychol. Rev. 65, 386–408.
Ross, S. M. 2002 Probability Models for Computer Science. San Diego, CA: Academic Press.
Russell, S. and Norvig, P. 1995 Artificial Intelligence: A Modern Approach. Prentice Hall.
Sahami, M., Dumais, S., Heckerman, D. and Horvitz, E. 1998 A Bayesian approach to filtering junk e-mail. AAAI-98 Workshop on Learning for Text Categorization, pp. 55–62.
Salton, G. 1971 The SMART Retrieval System: Experiments in Automatic Document Processing. Englewood Cliffs, NJ: Prentice Hall.
Salton, G. and McGill, M. J. 1983 Introduction to Modern Information Retrieval. McGraw-Hill.
Salton, G., Fox, E. A. and Wu, H. 1983 Extended Boolean information retrieval. Commun. ACM 26, 1022–1036.
Sarukkai, R. R. 2000 Link prediction and path analysis using Markov chains. Comp. Networks 33, 377–386.
Sarwar, B. M., Karypis, G., Konstan, J. A. and Riedl, J. T. 2000 Analysis of recommender algorithms for e-commerce. Proc. 2nd ACM Conf. on Electronic Commerce, pp. 158–167. New York: ACM Press.
Saul, L. and Pereira, F. 1997 Aggregate and mixed-order Markov models for statistical language processing. In Proc. 2nd Conf. on Empirical Methods in Natural Language Processing (ed. C. Cardie and R. Weischedel), pp. 81–89. Somerset, NJ: Association for Computational Linguistics.
Saul, L. K. and Jordan, M. I. 1996 Exploiting tractable substructures in intractable networks. In Advances in Neural Information Processing Systems (ed. D. S. Touretzky, M. C. Mozer and M. E. Hasselmo), vol. 8, pp. 486–492. Cambridge, MA: MIT Press.
Savage, L. J. 1972 The Foundations of Statistics. New York: Dover.
Schafer, J. B., Konstan, J. A.
and Riedl, J. 2001 E-commerce recommendation applications. J. Data Mining Knowl. Discovery 5, 115–153.
Schapire, R. E. and Freund, Y. 2000 BoosTexter: a boosting-based system for text categorization. Machine Learning 39, 135–168.
Schoelkopf, B. and Smola, A. 2002 Learning with Kernels. Cambridge, MA: MIT Press.
Sebastiani, F. 2002 Machine learning in automated text categorization. ACM Comput. Surv. 34, 1–47.
Sen, R. and Hansen, M. H. 2003 Predicting a Web user's next request based on log data. J. Computat. Graph. Stat. (In the press.)
Seneta, E. 1981 Nonnegative Matrices and Markov Chains. Springer.
Shachter, R. D. 1988 Probabilistic inference and influence diagrams. Oper. Res. 36, 589–604.
Shachter, R. D., Anderson, S. K. and Szolovits, P. 1994 Global conditioning for probabilistic inference in belief networks. Proc. Conf. on Uncertainty in AI, pp. 514–522. San Francisco, CA: Morgan Kaufmann.
Shahabi, C., Banaei-Kashani, F. and Faruque, J. 2001 A framework for efficient and anonymous Web usage mining based on client-side tracking. In Proc. WEBKDD 2001. Lecture Notes in Artificial Intelligence, vol. 2356, pp. 113–144. Springer.
Shannon, C. E. 1948a A mathematical theory of communication. Bell Syst. Tech. J. 27, 379–423.
Shannon, C. E. 1948b A mathematical theory of communication. Bell Syst. Tech. J. 27, 623–656.
Shardanand, U. and Maes, P. 1995 Social information filtering: algorithms for automating 'word of mouth'. Proc. Conf. on Human Factors in Computing Systems, pp. 210–217.
Shore, J. E. and Johnson, R. W. 1980 Axiomatic derivation of the principle of maximum entropy and the principle of minimum cross-entropy. IEEE Trans. Inform. Theory 26, 26–37.
Silverstein, C., Henzinger, M., Marais, H. and Moricz, M. 1998 Analysis of a very large AltaVista query log. Technical Note 1998-14, Digital Systems Research Center, Palo Alto, CA.
Slonim, N. and Tishby, N.
2000 Document clustering using word clusters via the information bottleneck method. Proc. 23rd Int. Conf. on Research and Development in Information Retrieval, pp. 208–215. New York: ACM Press.
Slonim, N., Friedman, N. and Tishby, N. 2002 Unsupervised document classification using sequential information maximization. Proc. 25th Int. Conf. on Research and Development in Information Retrieval, pp. 208–215. New York: ACM Press.
Small, H. 1973 Co-citation in the scientific literature: a new measure of the relationship between two documents. J. Am. Soc. Inf. Sci. 24, 265–269.
Smyth, P., Heckerman, D. and Jordan, M. I. 1997 Probabilistic independence networks for hidden Markov probability models. Neural Comp. 9, 227–267.
Soderland, S. 1999 Learning information extraction rules for semi-structured and free text. Machine Learning 34, 233–272.
Sperberg-McQueen, C. and Burnard, L. (eds) 2002 TEI P4: Guidelines for Electronic Text Encoding and Interchange. Text Encoding Initiative Consortium. (Available from http://www.tei-c.org/.)
Spink, A., Jansen, B. J., Wolfram, D. and Saracevic, T. 2002 From e-sex to e-commerce: Web search changes. IEEE Computer 35, 107–109.
Sutton, R. S. and Barto, A. G. 1998 Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press.
Tan, P. and Kumar, V. 2002 Discovery of Web robot sessions based on their navigational patterns. Data Mining Knowl. Discov. 6, 9–35.
Tantrum, J., Murua, A. and Stuetzle, W. 2002 Hierarchical model-based clustering of large datasets through fractionation and refractionation. Proc. 8th ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining. New York: ACM Press.
Taskar, B., Abbeel, P. and Koller, D. 2002 Discriminative probabilistic models for relational data. Proc. 18th Conf. on Uncertainty in Artificial Intelligence. San Francisco, CA: Morgan Kaufmann.
Tauscher, L. and Greenberg, S.
1997 Revisitation patterns in World Wide Web navigation. Proc. Conf. on Human Factors in Computing Systems CHI 97, pp. 97–137. New York: ACM Press.
Tedeschi, B. 2000 Easier to use sites would help e-tailers close more sales. New York Times, 12 June 2000.
Tishby, N., Pereira, F. and Bialek, W. 1999 The information bottleneck method. In Proc. 37th Ann. Allerton Conf. on Communication, Control, and Computing (ed. B. Hajek and R. S. Sreenivas), pp. 368–377.
Titterington, D. M., Smith, A. F. M. and Makov, U. E. 1985 Statistical Analysis of Finite Mixture Distributions. John Wiley & Sons.
Travers, J. and Milgram, S. 1969 An experimental study of the small world problem. Sociometry 32, 425.
Ungar, L. H. and Foster, D. P. 1998 Clustering methods for collaborative filtering. In Proc. Workshop on Recommendation Systems at the 15th National Conf. on Artificial Intelligence. Menlo Park, CA: AAAI Press.
Vapnik, V. N. 1982 Estimation of Dependences Based on Empirical Data. Springer.
Vapnik, V. N. 1995 The Nature of Statistical Learning Theory. Springer.
Vapnik, V. N. 1998 Statistical Learning Theory. John Wiley & Sons.
Viterbi, A. J. 1967 Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Trans. Inform. Theory 13, 260–269.
Walker, J. 2002 Links and power: the political economy of linking on the Web. Proc. 13th Conf. on Hypertext and Hypermedia, pp. 72–73. New York: ACM Press.
Wall, L., Christiansen, T. and Schwartz, R. L. 1996 Programming Perl, 2nd edn. Cambridge, MA: O'Reilly & Associates.
Wasserman, S. and Faust, K. 1994 Social Network Analysis. Cambridge University Press.
Watts, D. J. and Strogatz, S. H. 1998 Collective dynamics of 'small-world' networks. Nature 393, 440–442.
Watts, D. J., Dodds, P. S. and Newman, M. E. J. 2002 Identity and search in social networks. Science 296, 1302–1305.
Weiss, S. M., Apte, C., Damerau, F. J., Johnson, D. E., Oles, F.
J., Goetz, T. and Hampp, T. 1999 Maximizing text-mining performance. IEEE Intell. Syst. 14, 63 69. [.pdf] Weiss, Y. 2000 Correctness of local probability propagation in graphical models with loops. Neural Comp. 12, 1 41. [.pdf] White, H. 1970 Search parameters for the small world problem. Social Forces 49, 259. Whittaker, J. 1990 Graphical Models in Applied Multivariate Statistics. John Wiley & Sons, Ltd/Inc. Wiener, E. D., Pedersen, J. O. and Weigend, A. S. 1995 A neural network approach to topic spotting. Proc. SDAIR-95, 4th Ann. Symp. on Document Analysis and Information Retrieval, Las Vegas, NV, pp. 317 332. [CS] Witten, I. H., Moffat, A. and Bell, T. C. 1999 Managing Gigabytes: Compressing and Indexing Documents and Images, 2nd edn. San Francisco, CA: Morgan Kaufmann. Witten, I. H., Neville-Manning, C. and Cunningham, S. J. 1996 Building a digital library for computer science research: technical issues. Proc. Australasian Computer Science Conf., Melbourne, Wolf, J., Squillante, M.,Yu, P., Sethuraman, J. and Ozsen, L. 2002 Optimal crawling strategies for Web search engines. Proc. 11th Int. World Wide Web Conf., pp. 136 147. [pub] Xie, Y. and O Hallaron, D. 2002 Locality in search engine queries and its implications for caching. Proc. IEEE Infocom 2002, pp. 1238 1247. Piscataway, NJ: IEEE Press. [.pdf] [CS] Yang, Y. 1999 An evaluation of statistical approaches to text categorization. Information Retrieval 1, 69 90. [.ps.gz] [CS] Yang,Y. and Liu, X. 1999 A re-examination of text categorization methods In Proc. SIGIR-99, 22nd ACM Int. Conf. on Research and Development in Information Retrieval (ed. M. A. Hearst, F. Gey and R. Tong), pp. 42 49. New York: ACM Press. [.ps.gz] [CS] Yedidia, J., Freeman,W. T. and Weiss,Y. 2000 Generalized belief propagation. Neural Comp. 12, 1 41. [.pdf] York, J. 1992 Use of the Gibbs sampler in expert systems. Artif. Intell. 56, 115 130. Zamir, O. and Etzioni, O. 1998 Web document clustering: a feasibility demonstration. Proc 21st Int. 
Conf. on Research and Development in Information Retrieval (SIGIR), pp. 46 54. New York: ACM Press. [.pdf] [CS] Zelikovitz, S. and Hirsh, H. 2001 Using LSI for text classification in the presence of background text. Proc. 10th Int. ASM Conf. on Information and Knowledge Management, pp. 113 118. New York: ACM Press. [.pdf] [CS] Zhang, T. and Iyengar, V. S. 2002 Recommender systems using linear classifiers. J. Machine Learn. Res. 2, 313 334. [Pub] Zhang, T. and Oles, F. J. 2000 A probability analysis on the value of unlabeled data for classification problems. Proc. 17th Int. Conf. on Machine Learning, Stanford, CA, pp. 1191 1198. [.ps] Zhu, X., Yu, J. and Doyle, J. 2001 Heavy tails, generalized coding, and optimal Web layout. Proc. 2001 IEEE INFOCOM Conf., vol. 3, pp. 1617 1626. Piscataway, NJ: IEEE Press. [.ps] [CS] Zukerman, I., Albrecht, D. W. and Nicholson, A. E. 1999 Predicting users' requests on the WWW. Proc. UM99: 7th Int. Conf. on User Modeling, pp. 275 284. Springer. [.pdf]
{"url":"http://ibook.ics.uci.edu/references.html","timestamp":"2014-04-17T18:45:30Z","content_type":null,"content_length":"111124","record_id":"<urn:uuid:cb46673a-adff-4461-aa18-6efd4d5703a3>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00363-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: August 2008

Re: fractional derivative (order t) of (Log[x])^n and Log[Log[x]]

• To: mathgroup at smc.vnet.net
• Subject: [mg91279] Re: fractional derivative (order t) of (Log[x])^n and Log[Log[x]]
• From: Jens-Peer Kuska <kuska at informatik.uni-leipzig.de>
• Date: Wed, 13 Aug 2008 04:39:53 -0400 (EDT)
• References: <g7rim5$ice$1@smc.vnet.net>

click on any formula at and you will find the Mathematica input.
For the first formula it is

D[Log[1 + z], {z, \[Alpha]}] ==
  z^(1 - \[Alpha]) Hypergeometric2F1Regularized[1, 1, 2 - \[Alpha], -z]

hanrahan398 at yahoo.co.uk wrote:
> I'd be grateful if someone could tell me a nicely computable formula
> for the fractional derivative w.r.t. x (order t) of (Log[x])^n, where
> n is a positive integer.
> (Ideally I would like a formula where t can be any real number, but
> one for t>=0 would be most helpful!)
> The second thing I am seeking is a formula for the fractional
> derivative w.r.t. x (order t) of Log[Log[x]], Log[Log[Log[x]]], etc.,
> and more generally, of Log[...[Log[Log[x]]...], where there are n
> nested log functions, where n is of course a positive integer.
> (I have visited:
> <http://functions.wolfram.com/ElementaryFunctions/Log/20/03/>, and
> there is a formula there for the t-th (fractional) derivative of
> Log[x]^n, but I do not understand how to input it!!
> Basically I need formulae for the order-t fractional derivatives of
> (Log[x])^n and of Log[Log[x]], Log[Log[Log[x]]] (and generally with n
> nested logs), which I can use for variable x and given values of n and
> t, and can also evaluate at given values of x.
> Many thanks in advance.
> Michael
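As an aside (not part of the original thread): if one wants to sanity-check such closed forms numerically outside Mathematica, the Grünwald-Letnikov finite sum is a common approximation of a fractional derivative. The sketch below is my own illustration; the step size h and the lower terminal 0 are assumptions, and functions singular at 0 (such as Log[x]) would need a different lower terminal.

```python
def gl_frac_deriv(f, x, alpha, h=1e-3):
    """Grunwald-Letnikov approximation of the order-alpha derivative of f at x,
    with lower terminal 0. Uses the recurrence c_k = c_{k-1} * (k - 1 - alpha) / k
    for the signed generalized binomial coefficients (-1)^k * C(alpha, k)."""
    n = int(x / h)
    coeff = 1.0          # c_0 = 1
    total = f(x)
    for k in range(1, n + 1):
        coeff *= (k - 1 - alpha) / k
        total += coeff * f(x - k * h)
    return total / h ** alpha
```

For example, with f(x) = x^2 and alpha = 0.5, the exact fractional derivative at x = 1 is Gamma(3)/Gamma(2.5) ≈ 1.5045, which the sum approximates to within O(h); for alpha = 1 it collapses to a plain backward difference.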
mRMR FAQ

1. Basic questions
Q1.1 What is mRMR?
Q1.2 What are "MID" and "MIQ"?
Q1.3 How to use mRMR?
Q1.4 Where to download the mRMR software and source codes?
Q1.5 What is the correct format of my input data?
Q1.6 How should I understand the results of mRMR? For example, I ran it on a small 3-variable data set & it gave me an output like 2 1 3. Does that mean 2 is the least statistical dependent & 3 is most dependent? That is, 2 is the most relevant & 3 is least relevant?
Q1.7 How to cite/acknowledge mRMR?
Q1.8 What are the copyright/license conditions?
Q1.9 Damage & other risks & disclaimer?

2. How to use the online version
Q2.1 Where is the online version of mRMR?
Q2.2 Is it true that the online version only considers the mutual information based mRMR?
Q2.3 What are the input/parameters of the online program?
Q2.4 What should be the input file and its format for the online version?
Q2.5 What is the meaning of the output of the online program?

3. How to use the C/C++ version
Q3.1 Where is the C/C++ version of mRMR?
Q3.2 Can I use the C/C++ version for Linux, Mac, Unix, or other *nix machines?
Q3.3 Can I use the C/C++ version on Windows machines?
Q3.4 Is it true that the C/C++ version only considers the mutual information based mRMR?
Q3.5 What are the input/parameters of the C/C++ program?
Q3.6 What should be the input file and its format for the C/C++ version?
Q3.7 What is the meaning of the output of the C/C++ program?
Q3.8 Are the C/C++ program and the online version the same?
Q3.9 Any help information available when I try to run the C/C++ version?
Q3.10 The C/C++ binary program hangs. Why?

4. How to use the Matlab version
Q4.1 Where is the Matlab version of mRMR?
Q4.2 Can I use the Matlab versions for Linux, Mac, Unix, Windows, or other machines?
Q4.3 Is it true that the released Matlab version only considers the mutual information based mRMR?
Q4.4 What are the input/parameters of the Matlab version?
Q4.5 What is the output of the Matlab version?
Q4.6 What are mrmr_mid_d, mrmr_miq_d, and mrmr_mibase_d?
Q4.7 Do the Matlab version and C/C++ version produce the same results?

5. Handle discrete/categorical and continuous variables
Q5.1 How does mRMR handle continuous variables?
Q5.2 My variables are continuous or have as many as 500/1000 discrete/categorical states. Can I use mRMR?
Q5.3 What is the best way to discretize the data?
Q5.4 Why does your README file mention "Results with the continuous variables are not as good as with Discrete variables"?

6. Other questions
Q6.1 A typo in Eq. (8) in your JBCB05 paper (and also the CSB03 paper)?
Q6.2 Can I ask questions and what is your contact?

1. Basic questions

Q1.1 What is mRMR?
A. It means minimum-Redundancy-Maximum-Relevance feature/variable/attribute selection. The goal is to select a feature subset that best characterizes the statistical property of a target classification variable, subject to the constraint that these features are mutually as dissimilar to each other as possible, but marginally as similar to the classification variable as possible. We showed several different forms of mRMR, where "relevance" and "redundancy" were defined using mutual information, correlation, t-test/F-test, distances, etc. Importantly, for mutual information, we showed that the method to detect mRMR features also searches for a feature set whose features jointly have the maximal statistical "dependency" on the classification variable. This "dependency" term is defined using a new form of the high-dimensional mutual information. The mRMR method was first developed as a fast and powerful feature "filter". We then also showed a method to combine mRMR and "wrapper" selection methods. These methods have produced promising results on a range of datasets in many different areas.

Q1.2 What are "MID" and "MIQ"?
A. MID and MIQ represent the Mutual Information Difference and Quotient schemes, respectively, to combine the relevance and redundancy that are defined using Mutual Information (MI). They are the two most used mRMR schemes.

Q1.3 How to use mRMR?
A.
There are three ways.
1) Prepare your data and run our online program at the web site http://research.janelia.org/peng/proj/mRMR .
2) Download the precompiled C/C++ version (binary) and run it on your own machine.
3) Download the Matlab versions (binary plus source codes) and run them on your own machine.
You can find the software download links at our web site, too.

Q1.4 Where to download the mRMR software and source codes?
A. You can download different versions at this web site, or follow the links to download the Matlab versions. The Matlab version contains all key source codes, including the mRMR algorithm (in Matlab) and the mutual information computation toolbox (in C/C++, which can be compiled as Matlab mex functions).

Q1.5 What is the correct format of my input data?
A. See the answers for the respective mRMR versions below.

Q1.6 How should I understand the results of mRMR? For example, I ran it on a small 3-variable data set & it gave me an output like 2 1 3. Does that mean 2 is the least statistical dependent & 3 is most dependent? That is, 2 is the most relevant & 3 is least relevant?
A. That means the first selected feature is 2, and the last is 3. That also means the combination of 2 and 1 is better than the combination of 2 and 3. That also means 2 is the best feature if you only want one feature, and "2 and 1" is the best combination if you want two features. However, this does NOT mean 3 is the least relevant or most dependent. If you select features without considering the relationships among all the features, but only between individual features and the target class variable, you may find that 3 is more relevant than 1. Nevertheless, the combination of 2 and 1 is better than that of 2 and 3.

Q1.7 How to cite/acknowledge mRMR?
A.
We will appreciate it if you appropriately cite the following papers:

[TPAMI05] Hanchuan Peng, Fuhui Long, and Chris Ding, "Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, No. 8, pp. 1226-1238, 2005. [PDF]
This paper presents a theory of mutual information based feature selection. It demonstrates the relationship of four selection schemes: maximum dependency, mRMR, maximum relevance, and minimal redundancy. It also gives the combination scheme of "mRMR + wrapper" selection and mutual information estimation for continuous/hybrid variables.

[JBCB05] Chris Ding and Hanchuan Peng, "Minimum redundancy feature selection from microarray gene expression data," Journal of Bioinformatics and Computational Biology, Vol. 3, No. 2, pp. 185-205, 2005. [PDF]
This paper presents a comprehensive suite of experimental results of mRMR for microarray gene selection under many different conditions. It is an extended version of the CSB03 paper.

[CSB03] Chris Ding and Hanchuan Peng, "Minimum redundancy feature selection from microarray gene expression data," Proc. 2nd IEEE Computational Systems Bioinformatics Conference (CSB 2003), pp. 523-528, Stanford, CA, Aug 2003. [PDF]
This paper presents the first set of mRMR results and different definitions of the relevance/redundancy terms.

[IS05] Hanchuan Peng, Chris Ding, and Fuhui Long, "Minimum redundancy maximum relevance feature selection," IEEE Intelligent Systems, Vol. 20, No. 6, pp. 70-71, November/December 2005. [PDF]
A short invited essay that introduces mRMR and demonstrates the importance of reducing redundancy in feature selection.

[Bioinfo07] Jie Zhou and Hanchuan Peng, "Automatic recognition and annotation of gene expression patterns of fly embryos," Bioinformatics, Vol. 23, No. 5, pp. 589-596, 2007. [PDF]
One application of mRMR in selecting good wavelet image features.

Q1.8 What are the copyright/license conditions?
A.
The mRMR software packages can be downloaded and used, subject to the following conditions:

Software and source code Copyright (C) 2000-2007. Written by Hanchuan Peng. These software packages are copyrighted under the following conditions:

Permission to use, copy, and modify the software and their documentation is hereby granted to all academic and not-for-profit institutions without fee, provided that the above copyright notice and this permission notice appear in all copies of the software and related documentation and our publications (TPAMI05, JBCB05, CSB03, etc.) are appropriately cited.

Permission to distribute the software or modified or extended versions thereof on a not-for-profit basis is explicitly granted, under the above conditions. However, the right to use this software by companies or other for-profit organizations, or in conjunction with for-profit activities, and the right to distribute the software or modified or extended versions thereof for profit are NOT granted except by prior arrangement and written consent of the copyright holders.

For these purposes, downloads of the source code constitute "use", and downloading of this source code by for-profit organizations and/or distribution to for-profit institutions is explicitly prohibited without the prior consent of the copyright holders.

Use of this source code constitutes an agreement not to criticize, in any way, the code-writing style of the author, including any statements regarding the extent of documentation and comments present.

The software is provided "AS-IS" and without warranty of any kind, expressed, implied or otherwise, including without limitation, any warranty of merchantability or fitness for a particular purpose.
In no event shall the authors be liable for any special, incidental, indirect or consequential damages of any kind, or any damages whatsoever resulting from loss of use, data or profits, whether or not advised of the possibility of damage, and on any theory of liability, arising out of or in connection with the use or performance of these software packages.

Q1.9 Damage & other risks & disclaimer?
A. See the detailed disclaimer and conditions in the answer to Q1.8. In short, we will NOT be liable for any damage of any kind, or loss of data, because you use the released software. It is all at your own risk.

2. How to use the online version

Q2.1 Where is the online version of mRMR?
A. The web site http://research.janelia.org/peng/proj/mRMR .

Q2.2 Is it true that the online version only considers the mutual information based mRMR?
A. Yes. Since the mutual information based mRMR produces the most promising results and is most used, the online program only uses mutual information to define the relevance and redundancy of features.

Q2.3 What are the input/parameters of the online program?
A. You need an input file, of course (some people just clicked "Submit job" without specifying anything…). You also need to choose how the relevancy and redundancy terms should be combined (i.e. MID or MIQ), how many features you want to select, the property of your variables (categorical or continuous), and how you want to discretize your data in case you have continuous data.

Q2.4 What should be the input file and its format for the online version?
A. It should be a CSV (comma-separated values) file, where each row is a sample and each column is a variable/attribute/feature. Make sure your data is separated by commas, not by blank spaces or other characters! The first row must be the feature names, and the first column must be the classes for the samples.
You may download a testing example data set here, which is the microarray data of lung cancer (7 classes). In this sample data set, each variable/feature/column has been discretized into 3 states, encoded in the digits "-2", "0" and "2". You may use other integers (such as -1, 0, 1) for the categorical/discrete states defined by yourself, but never use letters or combinations of digits and letters (such as "10v"). Try not to use strange states such as 1001 or 10000, as the program will use these values to guess what would be a reasonable amount of memory to allocate. For example, if each variable has only 5 states, then try to use -2,-1,0,1,2, or 0,1,2,3,4, but NEVER use something like "-10000, -1000, 0, 1000, 10000"! (Note: The released version was only designed for the obviously meaningful inputs.) More examples can be found at the mRMR web site, too.

Your data can contain continuous values, except the first column (which is the class variable) and the first row (which is the header). In this case, you can ask mRMR to do the discretization for you. See FAQ part 5. If you have variables that are continuous or have many categorical states (e.g. several hundred), you may want to read more about how mRMR handles continuous variables. See FAQ part 5.

Q2.5 What is the meaning of the output of the online program?
A. The meanings of most parts of the output are intuitive: the program automatically compares the features selected using the conventional maximum relevance (MaxRel) method and mRMR. Suppose you ask the program to select 50 features; then you can also truncate the results and use only the first 20 or 30 features. You can also test the classification accuracy using the first K features, where K=1,…,50 in this case. In this way, you can actually see with what number of features you will get satisfactory cross-validation classification accuracy. This method was used in our papers.

The first column is the order of the features selected.
The second column is the actual indexes of the features. The third column includes the respective feature names extracted from your input file. The last column is just the best score in the process of selecting the *current* best feature. Indeed, it is the value of "relevancy - (or /) redundancy" for MID (or MIQ) for the currently selected feature. Because for classification all selected features will be used together, this score does not indicate anything for classification, thus it is NOT important for you to use.

3. How to use the C/C++ version

Q3.1 Where is the C/C++ version of mRMR?
A. Follow the download links at the web site http://research.janelia.org/peng/proj/mRMR .

Q3.2 Can I use the C/C++ version for Linux, Mac, Unix, or other *nix machines?
A. Yes. There are precompiled versions. If you cannot find one for your machine, send me email and I will try to compile one for you if possible. You can also compile from the source codes directly by running "make -f mrmr.makefile".

Q3.3 Can I use the C/C++ version on Windows machines?
A. You should be able to compile and run it using MinGW and MSYS. Probably also yes when you use Cygwin, but this is untested. Or you may want to simply consider using the Matlab version or the online version.

Q3.4 Is it true that the C/C++ version only considers the mutual information based mRMR?
A. Yes. Since the mutual information based mRMR produces the most promising results and is most used, the released C/C++ program only uses mutual information to define the relevance and redundancy of features.

Q3.5 What are the input/parameters of the C/C++ program?
A. You need an input file with the same format explained in Q2.4. You also need to choose how the relevancy and redundancy terms should be combined (i.e. MID or MIQ), how many features you want to select, the property of your variables (categorical or continuous), and how you want to discretize your data in case you have continuous data.
Just type "mrmr" or the appropriate program name for the help information.

Q3.6 What should be the input file and its format for the C/C++ version?
A. The same as for the online version. See Q2.4.

Q3.7 What is the meaning of the output of the C/C++ program?
A. The same as for the online version. See Q2.5.

Q3.8 Are the C/C++ program and the online version the same?
A. Yes. The online program just provides you a convenient way to run mRMR on relatively small datasets. If you have big datasets, try to use the C/C++ version or the Matlab version.

Q3.9 Any help information available when I try to run the C/C++ version?
A. Just run "mrmr" (or another appropriate program name if you get a special one from me) and the default help information on how to run it will be shown. You can also see some examples if you use the web-based online program, which will display the command used at the top of the result page.

Q3.10 The C/C++ binary program hangs. Why?
A. This program does NOT hang unless you feed it an inappropriate input file. For example, some people use variables with hundreds of categorical states or continuous values, which cannot yield a meaningful mutual information estimation in many cases. The released mRMR binary versions use mutual information for discrete variables (if you are interested in mutual information estimation for continuous variables, you can find the formula in the TPAMI05 paper). If you have continuous data with a big dynamic range (say, from 1 to 1000), the binary mRMR program treats each variable as one with 1000 categories, and thus the computation of mutual information takes a long time to run; that is why you see the program "hang". I suggest you pre-threshold (discretize) your data using one of your own favorite ways, in case you don't like setting the threshold at mean +/- std.

4. How to use the Matlab version

Q4.1 Where is the Matlab version of mRMR?
A.
Follow the download links at the web site http://research.janelia.org/peng/proj/mRMR and you will find the Matlab versions released on MatlabCentral. For the most commonly used machines, i.e. Linux, Mac, and Windows, the simplest way is to download the mRMR Matlab version with the precompiled mutual information toolbox. Or you can download the mRMR codes and the mutual information toolbox separately. The mutual information toolbox was also written in C/C++ and can be recompiled as mex functions for Matlab running on other platforms. These codes are essentially similar to the mutual information computation in the C/C++ version of mRMR. Of course, you will find the mutual information toolbox useful for purposes other than mRMR. For the mRMR case, you will need to set the Matlab path to this toolbox.

Q4.2 Can I use the Matlab versions for Linux, Mac, Unix, Windows, or other machines?
A. Yes.

Q4.3 Is it true that the released Matlab version only considers the mutual information based mRMR?
A. Yes. If you want to use other variants of mRMR, such as correlation, distance, or t-test/F-test based ones, you can simply replace the mutual information computing function, e.g. using the corrcoef(.) function in Matlab (taking correlation as an example). All these variants have been described in the JBCB 2005/CSB 2003 papers.

Q4.4 What are the input/parameters of the Matlab version?
A. Three arrays, D, F, and K. D is an m*n array of your candidate features; each row is a sample, and each column is a feature/attribute/variable. F is an m*1 vector specifying the classification variable (i.e. the class information for each of the m samples). K is the maximal number of features you want to select. D must be pre-discretized as a categorical array, i.e. containing only integers. F must be categorical, indicating the class information.

Q4.5 What is the output of the Matlab version?
A. The indexes of the first K features selected. This corresponds to the second column produced in the C/C++ version.
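For readers who want to see the mechanics outside Matlab/C++, here is a rough Python sketch of the MID scheme (greedy selection maximizing relevance minus mean redundancy, both measured by discrete mutual information). This is my own illustration of the idea described in Q1.1-Q1.2, not the released code, and all names in it are made up:

```python
import math
from collections import Counter

def mutual_info(x, y):
    """Mutual information (in nats) between two discrete sequences of equal length."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum((c / n) * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def mrmr_mid(features, target, k):
    """Greedy MID selection: at each step pick the feature maximizing
    relevance(f; target) - mean redundancy(f; already-selected features).
    `features` is a list of discrete columns, `target` the class labels."""
    relevance = [mutual_info(f, target) for f in features]
    selected = [max(range(len(features)), key=lambda i: relevance[i])]
    while len(selected) < k:
        def score(i):
            red = sum(mutual_info(features[i], features[j]) for j in selected)
            return relevance[i] - red / len(selected)
        rest = [i for i in range(len(features)) if i not in selected]
        selected.append(max(rest, key=score))
    return selected
```

On a toy set where one column duplicates another, the duplicate is passed over in favor of a weakly relevant but non-redundant column, while plain maximum relevance would rank the duplicate second; that is exactly the point the FAQ makes in Q1.6.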
Q4.6 What are mrmr_mid_d, mrmr_miq_d, and mrmr_mibase_d?
A. The first, mrmr_mid_d, is MID. The second, mrmr_miq_d, is MIQ. The third, mrmr_mibase_d, is maximum relevance selection (for comparison). See Q1.2 for more explanation. You can also read our papers for a comparison of MID and MIQ.

Q4.7 Do the Matlab version and C/C++ version produce the same results?
A. Theoretically yes. In practice they may show slight differences in some cases, as they use different floating-point precisions (i.e. one uses double and the other uses float). However, we have not observed any dramatic differences between these two versions.

5. Handle discrete/categorical and continuous variables

Q5.1 How does mRMR handle continuous variables?
A. mRMR is a framework in which the relevance and redundancy terms are combined. Mutual information, which is used most of the time, is a useful method to define these two terms, but other options exist. There are three ways for mRMR to handle continuous variables.

(1) Use the t-test / F-test (bi-class/multiclass) as the relevance measure and the correlation among variables as the redundancy measure. Other scores such as distances can also be considered. See the CSB03 & JBCB05 papers for details.

(2) Use mutual information of discrete variables; this requires first discretizing the variables/features. We have chosen to discretize them using mean +/- alpha*std (alpha = 1 or 0 or 2 or 0.5). The choice of alpha will have some influence on the actual features selected (more correctly, on the ordering of the features you select; but if you select several more, you may find a lot of them are the same, although maybe in a different order). This is actually a very robust way to select features. See the TPAMI05, CSB03, JBCB05 papers for details.

(3) Use mutual information for continuous variables. The mutual information can be estimated using Parzen windows. See the TPAMI05 paper for the formulas.
The computation of Parzen windows is more expensive than the discrete version of the mutual information computation. See Q1.7 for more information on these papers.

Q5.2 My variables are continuous or have as many as 500/1000 discrete/categorical states. Can I use mRMR?
A. You can either use mRMR for continuous variables (based on correlation, Parzen-window mutual information estimation, etc.) or first discretize and use the mRMR for discrete variables.

Q5.3 What is the best way to discretize the data?
A. In our experiments we have discretized data based on their mean values and standard deviations. We thresholded at mean ± alpha*std, where the alpha value usually ranges from 0.5 to 2. Typically 2, 3 or no more than 5 states for each variable will produce satisfactory results that are quite robust too. But you need to decide what would be the most meaningful way to discretize your data.

Q5.4 Why does your README file mention "Results with the continuous variables are not as good as with Discrete variables"?
A. We showed some comparison results in our papers (see Q1.7) on both discrete and continuous cases using the same datasets; typically the discretized results are better. There are several reasons, e.g. discretization will often lead to a more robust classification.

6. Other questions

Q6.1 A typo in Eq. (8) in your JBCB05 paper (and also the CSB03 paper)?
A. Yes, there is a typo in the JBCB05 paper (and also the CSB03 paper): in Eq. (8) on page 189, the term (gk-g) should come with a square, i.e. (gk-g)^2.

Q6.2 Can I ask questions and what is your contact?
A. Yes, you can ask questions if you cannot find the answers above. My contact is Hanchuan (dot) Peng (at) gmail (dot) com.
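Returning to Q5.3, the mean ± alpha*std thresholding can be sketched column-wise in a few lines (my own illustration, not the released code; the three states encoded as -1/0/1 and the use of population standard deviation are assumptions):

```python
import statistics

def discretize(column, alpha=1.0):
    """Three-state discretization around the mean (cf. Q5.3):
    below mean - alpha*std -> -1, above mean + alpha*std -> 1, otherwise 0."""
    mu = statistics.mean(column)
    sigma = statistics.pstdev(column)   # population std; sample std is another valid choice
    lo, hi = mu - alpha * sigma, mu + alpha * sigma
    return [-1 if v < lo else 1 if v > hi else 0 for v in column]
```

Shrinking alpha widens the outer bands, so more values land in the -1 and 1 states; as the FAQ notes, the exact choice mostly perturbs the ordering of the selected features.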
Andreu Sancho Homepage

Genetic Algorithms (GAs) are a search and optimization method inspired by the way nature works with living entities, using evolutionary operators. These operators exchange genetic information through different generations until an ending condition, typically the desired solution, is found. In this entry, the formalism of why GAs work is described as proposed by Holland in the mid-seventies and later by Goldberg. To do so, we first need to introduce some key concepts, assuming the classical ternary representation {0, 1, *}, where * is the don't-care symbol.

A fundamental concept in GA theory is that of a schema. A schema is a particular subset of the set of all possible binary strings, described by a template composed of symbols from the ternary alphabet {0, 1, *}. For instance, the schema 01**1 corresponds to the set of strings of length five with a 0 in the first position, a 1 in the second position and a 1 in the last position. Every string that fits a template is called an instance of a schema. In our particular example, the strings 01001, 01011 and 01111, among others, are instances of the schema 01**1. It is very common to use the term schema to denote the subset as well as the template defining the subset itself.

Two more concepts are needed in order to understand the intrinsics of GAs: (1) the order of a schema and (2) the length of a schema. (1) The order of a schema is defined as the number of bits that are non-don't-care symbols. For example, the order of the schema 01**1 is 3. (2) The length of a schema is defined as the distance between the first and the last non-don't-care symbols. For example, the length of the schema 01**1 is 4.

With these definitions in mind we can model how different schemata evolve along a GA run using the schema theorem.
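The definitions above are easy to state in code. A small Python sketch (my own illustration, not taken from the cited texts; what is called "length" here is usually called the defining length in the literature):

```python
def order(schema):
    """Order: number of fixed (non-don't-care) symbols in a {0,1,*} template."""
    return sum(c != '*' for c in schema)

def defining_length(schema):
    """Length: distance between the first and the last fixed positions."""
    fixed = [i for i, c in enumerate(schema) if c != '*']
    return fixed[-1] - fixed[0] if fixed else 0

def is_instance(schema, string):
    """True if the binary string fits the template."""
    return len(schema) == len(string) and all(
        s == '*' or s == c for s, c in zip(schema, string))
```

Running these on the running example 01**1 reproduces the numbers given in the text: order 3, length 4, with 01011 an instance and 11011 not.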
This theorem states how the frequency of schema instances changes under the effects of the evolutionary operators of selection, crossover and mutation. Going further, the schema theorem states that the frequency of schemata with above-average fitness, short length and low order increases in the following generation. One can find the mathematical foundations elsewhere in the bibliography.

Still, the schema theorem is not enough to comprehend why GAs work; we need another conceptual leap. That is the role of the building block hypothesis, which states that the combination of low-order, highly fit schemata (the building blocks) generates higher-order schemata. These new schemata will form high-performance solutions. In Goldberg's own words, the primary idea of GAs is that they work through a mechanism of decomposition and reassembly.

O. Cordón, F. Herrera, F. Hoffmann, and L. Magdalena. Genetic fuzzy systems: Evolutionary tuning and learning of fuzzy knowledge bases, volume 19 of Advances in Fuzzy Systems - Applications and Theory. World Scientific, 2001.
D. E. Goldberg. The design of innovation: Lessons from and for competent genetic algorithms. Kluwer Academic Publishers, 1st edition, 2002.
D. E. Goldberg. Genetic algorithms in search, optimization & machine learning. Addison-Wesley, 1st edition, 1989.
J. H. Holland. Adaptation in natural and artificial systems. The University of Michigan Press, 1975.
A. Orriols-Puig. New Challenges in Learning Classifier Systems: Mining Rarities and Evolving Fuzzy Models. PhD thesis, Arquitectura i Enginyeria La Salle, Universitat Ramon Llull, Passeig de la Bonanova 8, 08022 - Barcelona, Nov 2008.

What do the customers buy? Which products are bought together? With these two short questions the field of association rule (AR) mining makes its appearance. In this field of ML, the original aim was to find associations and correlations between the different items that customers place in their shopping baskets.
More generally, the goal of AR is to find frequent and interesting patterns, associations, correlations, or causal structures among sets of items or elements in large databases and to express these relationships as association rules. AR is an important part of the unsupervised learning paradigm, so the algorithm does not have an expert to teach it during the training stage. Why is AR mining so important? Many commercial applications generate huge amounts of unlabeled data (just think of Facebook for a moment), so our favorite classifier system will not work in this environment. With AR we can exploit such databases and extract useful information, and rule-based systems are well known to us... Note that there are two main types of ARs: (1) binary or qualitative rules, and (2) quantitative rules. For this article I will use qualitative rules due to their simplicity.

Association Rule Mining with Apriori

Apriori is the most famous AR miner out there. With this algorithm we can find the frequent itemsets, that is, those itemsets which appear at least a minimum number of times (the minimum support) in the database. Its simplicity can be summarized by the following pseudocode:

k := 1
generate frequent itemsets (support count >= min support count) of length 1
repeat until no frequent itemsets are found:
    k := k + 1
    generate candidate itemsets of size k from the frequent itemsets of size k - 1
    compute the support of each candidate by scanning through the database

But this article is focused on rule generation. How do we obtain the association rules from the previously found frequent itemsets? That is definitely tricky. To obtain the desired rules we have to generate all the possible combinations (without repetition) from every frequent itemset and check whether every rule formed has a confidence greater than or equal to a minimum threshold, the minimum confidence.
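Before moving on to rule generation, the frequent-itemset loop just described can be sketched in Python (a minimal illustration of the idea with a hypothetical toy transaction database; it omits Apriori's subset-pruning step):

```python
def apriori(transactions, min_support):
    """Return {itemset: support count} for all frequent itemsets
    (support count >= min_support). Candidates of size k are built
    as unions of frequent (k-1)-itemsets."""
    items = {i for t in transactions for i in t}
    frequent = {}
    k, level = 1, [frozenset([i]) for i in sorted(items)]
    while level:
        # scan the database to count the support of each candidate
        counts = {c: sum(1 for t in transactions if c <= t) for c in level}
        survivors = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(survivors)
        k += 1
        keys = list(survivors)
        level = list({a | b for a in keys for b in keys if len(a | b) == k})
    return frequent

# Hypothetical toy transaction database:
transactions = [frozenset(t) for t in ('BCD', 'BC', 'CD', 'CDE', 'BCD')]
freq = apriori(transactions, min_support=2)
print(freq[frozenset('BCD')])  # {B,C,D} appears in 2 transactions
```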
Let's see this with the following example: suppose we have the frequent itemset {B,C,D}, where B, C, and D are items obtained from the transaction database. How many rules will this itemset generate? The answer is a maximum of six rules: because we work with qualitative rules, there are (2^k)-2 of them, where k is the size of the set; in our case k = 3. The rules are: BC => D, BD => C, CD => B, B => CD, C => BD, and D => BC. The idea is the following:

for itemsInAntecedent := 1 to k
    first := getFirstItemFromItemset
    element := first
    setConsequentItemActive( first )
    for j := 1 to itemsInAntecedent
        element := getNextElementFromItemset( element )
        setConsequentItemActive( element )
    if minConf <= ruleConfidence then store rule
    while first <> lastItemFromItemset and itemsInAntecedent > 1
        first := getNextElementFromItemset( first )
        element := first
while first <> lastItemFromItemset
    if itemsInAntecedent = 1 and numElements > 2 then
        setConsequentItemActive( lastElement )
        if minConf <= ruleConfidence then store rule

As can be appreciated, the algorithm is tricky enough to have a wannabe PhD wondering for a couple of days (from the very scratch) how to obtain the desired rules.

R. Agrawal and R. Srikant. Fast Algorithms for Mining Association Rules. Technical report, IBM Almaden Research Center, 650 Harry Road, San Jose, CA 95120, Sep 1994.
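For comparison with the manual iteration scheme above (my own sketch, not the author's code), the same enumeration can be done directly with itertools: every non-empty proper subset of the itemset becomes an antecedent, and a rule is kept when its confidence clears the threshold. The support counts here are hypothetical toy values:

```python
from itertools import combinations

def rules_from_itemset(itemset, support, min_conf):
    """Generate rules A => (itemset - A) whose confidence,
    support(itemset) / support(A), is at least min_conf."""
    items = frozenset(itemset)
    rules = []
    for r in range(1, len(items)):  # antecedent sizes 1 .. k-1
        for antecedent in combinations(sorted(items), r):
            antecedent = frozenset(antecedent)
            conf = support[items] / support[antecedent]
            if conf >= min_conf:
                rules.append((antecedent, items - antecedent, conf))
    return rules

# Hypothetical support counts for {B,C,D} and its subsets:
support = {
    frozenset('BCD'): 2, frozenset('BC'): 3, frozenset('BD'): 2,
    frozenset('CD'): 4, frozenset('B'): 5, frozenset('C'): 6,
    frozenset('D'): 4,
}
rules = rules_from_itemset('BCD', support, min_conf=0.5)
for ante, cons, conf in rules:
    print(''.join(sorted(ante)), '=>', ''.join(sorted(cons)), round(conf, 2))
```

Note that the (2^k)-2 candidate rules are all examined, but only those meeting the minimum confidence are stored.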
{"url":"http://andreusancho.blogspot.com/2011_03_01_archive.html","timestamp":"2014-04-20T13:24:37Z","content_type":null,"content_length":"110497","record_id":"<urn:uuid:f6bf0f56-6646-4444-9246-a348207690bc>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00106-ip-10-147-4-33.ec2.internal.warc.gz"}
Secant line

A secant line of a curve is a line that (locally) intersects two points on the curve. The word comes from the Latin secare, to cut.

A secant can be used to approximate the tangent line to a curve at some point P. If the secant to a curve is defined by two points, P and Q, with P fixed and Q variable, then as Q moves along the curve toward P, the direction of the secant approaches that of the tangent at P (assuming that the first derivative of the curve is continuous at P, so that there is only one tangent). As a consequence, one could say that the limit of the secant's slope, or direction, is that of the tangent. In calculus, this idea is the basis of the geometric definition of the derivative.

A chord is the portion of a secant that lies within the curve.

See also
• Chord
• Radius
• Diameter
• Tangent
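A small numerical illustration of this limiting process (my own sketch, not part of the original article): for f(x) = x^2, the secant slope through P = (1, 1) and Q = (1 + h, f(1 + h)) is 2 + h, which approaches the tangent slope f'(1) = 2 as h shrinks.

```python
def secant_slope(f, p, q):
    """Slope of the secant line through (p, f(p)) and (q, f(q))."""
    return (f(q) - f(p)) / (q - p)

f = lambda x: x ** 2
for h in (1.0, 0.1, 0.01, 0.001):
    print(h, secant_slope(f, 1.0, 1.0 + h))  # tends to 2, the tangent slope
```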
{"url":"http://www.absoluteastronomy.com/topics/Secant_line","timestamp":"2014-04-21T02:22:31Z","content_type":null,"content_length":"17940","record_id":"<urn:uuid:75c6f9bf-c574-4e86-bf3b-6ad22f5f22e8>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00425-ip-10-147-4-33.ec2.internal.warc.gz"}
Tuesday, March 25, 2008

Mathematical physicist wins 2008 Templeton Prize

The 2008 Templeton Prize has been awarded to Polish mathematical physicist Michael Heller. Heller has worked for more than 40 years in theology, philosophy, mathematics and cosmology, and intends to use the £820,000 prize to set up a cross-university and inter-disciplinary institute to investigate questions in science, theology and philosophy.

16th century depiction of Genesis (Michelangelo, Sistine Chapel): God creates Adam. Like Galileo, Heller thinks that mathematics is the "language of God."

The Templeton Prize was founded in 1972 by philanthropist Sir John Templeton, and is awarded annually to a living person for "progress toward research or discoveries about spiritual realities". It is the world's largest annual monetary prize of any kind given to an individual (£820,000). Plus reported on John Barrow's success in 2006.

Heller has been rewarded for "developing sharply focused and strikingly original concepts on the origin and cause of the Universe, often under intense (communist Poland) governmental repression." Heller's work these days is largely in non-commutative geometry, which he uses to attempt to remove the problem of a cosmological singularity at the origin of the Universe. "If on the fundamental level of physics there is no space and no time, as many physicists think," says Heller, "non-commutative geometry could be a suitable tool to deal with such a situation." You can read more on non-commutative geometry in the Plus article Quantum Geometry.

posted by westius @ 2:00 PM
{"url":"http://plus.maths.org/content/comment/reply/4174","timestamp":"2014-04-17T07:07:01Z","content_type":null,"content_length":"23260","record_id":"<urn:uuid:0f60f84f-d879-46ca-9927-f0a47dde456c>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Graphs vs. Ord
Andrew Brooke-Taylor andrewbt at gmail.com
Thu Mar 28 02:27:41 EDT 2013

Dear Jan,

[A] is Vopenka's Principle (or equivalent to it, depending on your definitions) and [B] is Weak Vopenka's Principle. Vopenka's Principle directly implies Weak Vopenka's Principle: adding the "not equals" relation to structures turns a counter-example for WVP into a class of structures with no non-identity homomorphisms, and then taking unions gives a counter-example to VP (one does have this sort of liberty with the language thanks to results you can find in Pultr & Trnkova's book "Combinatorial, algebraic, and topological representations of groups, semigroups and categories". These results also imply for example that it doesn't matter whether you mean "symmetric" or "directed" when you say "graph"). It remains open whether WVP implies VP.

VP is at the upper end of the large cardinal hierarchy: it implies the existence of a proper class of extendibles, and hence a proper class of supercompact cardinals, but its consistency is implied by the existence of an almost huge cardinal. Note however that an inaccessible kappa such that V_kappa satisfies VP need not even be weakly compact itself. WVP remains more mysterious: it implies the existence of a proper class of measurable cardinals, but as far as we currently know could lie anywhere between there and VP in consistency strength.

The best reference for all of this that I know of is Adamek & Rosicky's book "Locally Presentable and Accessible Categories" (London Maths. Soc. Lecture Notes #189), Chapter 6 and the Appendix.

Best wishes,
Andrew Brooke-Taylor

On 27 March 2013 04:34, <pax0 at seznam.cz> wrote:
> Hi All,
> what is the logical relationship between these two hypotheses:
> [A] Ordinals (considered as a thin category) cannot be fully embedded into
> the category of Graphs.
> (unordered, without loops, with morphisms graph homomorphisms)
> [B] The dual of Ordinals is such.
> And yet, what is the approximate set theoretic strength of these two
> claims?
> Thank you, Jan Pax

Dr Andrew Brooke-Taylor
JSPS Postdoctoral Research Fellow
Kobe University
{"url":"http://www.cs.nyu.edu/pipermail/fom/2013-March/017175.html","timestamp":"2014-04-20T18:45:39Z","content_type":null,"content_length":"5310","record_id":"<urn:uuid:031b0f86-a9f5-4969-b01e-8218318081b4>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
A manufacturer has been selling 1850 television sets a week at $510 each. A market survey indicates that for each $11 rebate offered to a buyer, the number of sets sold will increase by 110 per week.

a) Find the demand function p(x), where x is the number of television sets sold per week.

b) How large a rebate should the company offer to a buyer in order to maximize revenue?

c) If the weekly cost function is 157250 + 170x, how should it set the size of the rebate to maximize its profit?
{"url":"http://www.enotes.com/homework-help/manufacture-has-been-selling-1850-television-sets-418975","timestamp":"2014-04-19T00:10:29Z","content_type":null,"content_length":"23078","record_id":"<urn:uuid:d941cffa-ffa2-4e53-9823-bbea0181cc68>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
Maclaurin Approximation

February 26th 2006, 08:41 PM

How would you determine the degree of the Maclaurin polynomial required for the error in the approximation to be less than 0.0001 for f(x)=cos(pi*x^2) approximating f(0.6)? Thanks!

February 27th 2006, 12:18 AM

Originally Posted by Filthypeasant
How would you determine the degree of the Maclaurin polynomial required for the error in the approximation to be less than 0.0001 for f(x)=cos(pi*x^2) approximating f(0.6)? Thanks!

Look at the form of the remainder of the Maclaurin polynomial. Set it to be less than 0.0001, then solve to find the minimum degree that satisfies the inequality. This may need to be done by trial and error. And it looks to me as though the required degree is 16. Look at the post on Taylor polynomials just below this one for more details.
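As a numerical cross-check (my own sketch, not from the thread): since cos(u) is the alternating series of (-1)^n * u^(2n) / (2n)! with u = pi*x^2, only degrees that are multiples of 4 contribute, and one can tabulate the actual error of each Maclaurin polynomial at x = 0.6. The remainder bound is a worst-case guarantee, so it can demand a higher degree than the observed error alone would suggest, but degree 16 is comfortably within the 0.0001 tolerance:

```python
import math

def maclaurin_cos_pix2(x, degree):
    """Maclaurin polynomial of cos(pi*x^2), truncated at the given degree.
    The n-th series term has degree 4n in x."""
    u = math.pi * x * x
    return sum((-1) ** n * u ** (2 * n) / math.factorial(2 * n)
               for n in range(degree // 4 + 1))

x = 0.6
exact = math.cos(math.pi * x ** 2)
for degree in (4, 8, 12, 16):
    print(degree, abs(maclaurin_cos_pix2(x, degree) - exact))
```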
{"url":"http://mathhelpforum.com/calculus/2032-maclaurin-approximation-print.html","timestamp":"2014-04-17T20:30:57Z","content_type":null,"content_length":"4225","record_id":"<urn:uuid:ad2ecfa3-23b0-463d-95dd-6d250da93efd>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00169-ip-10-147-4-33.ec2.internal.warc.gz"}
Bernoulli’s Model of Risky Decisions

In 1738, Daniel Bernoulli devised a simple model of risk aversion (English translation here). Nobel Prize winner and author of Thinking Fast and Slow Daniel Kahneman criticizes Bernoulli’s theory extensively, describing it as “Bernoulli’s Error.” I disagree with Kahneman. I think Kahneman misunderstands Bernoulli’s claims.

Bernoulli’s theory of decision-making is best described with some examples. He claims that doubling your net worth is as positive as dividing it by 2 is negative. So, if your net worth is $200,000, receiving another $200,000 is as positive as losing $100,000 is negative. Bernoulli applies the same type of rule to smaller changes as well. Going from $200,000 to $250,000 is multiplying by 1.25. Going from $200,000 to $160,000 is dividing by 1.25. So, winning $50,000 is as good for you as losing $40,000 is bad for you. (For the more mathematically-inclined, the utility of your net worth is proportional to its logarithm.)

Kahneman’s extensive research has shown that people don’t think this way. They tend to be more risk-averse than Bernoulli’s model indicates. When trying to avoid losses, people tend to be more risk-seeking than Bernoulli’s model. Across several pages in Thinking Fast and Slow that Kahneman devotes to criticizing Bernoulli’s theory, the argument seems to boil down to the fact that people don’t make decisions consistent with Bernoulli’s model. I agree with this. However, after reading Bernoulli’s paper, I see no evidence that Bernoulli was trying to model human behaviour. He was trying to model rational behaviour. In section 7, Bernoulli looks at how much people “should be willing to venture,” not how much they are willing to venture.
In section 14, he says that anyone who accepts a certain type of gamble “acts irrationally.” In section 15, he says that offering certain types of insurance is “foolish” and “unwise.” It seems clear that Bernoulli was not trying to model actual decision-making; he was modeling rational decision-making. Some may think that we should make decisions about gambles based on simple mathematical expectation. So, if you’re offered a chance to either win $10,000 or lose $9999.99 on the flip of a coin, it is rational to accept. This is incorrect. We can see this if we take it to the extreme. Imagine you’re given a chance to gamble for everything you own: double (plus a dollar) or nothing. This is a bad bet. The misery you’d face in your future if you lost far exceeds to benefit you’d get if you won. A certain amount of risk-aversion is perfectly rational, and Bernoulli sought a rule to decide which risks are rational to accept. It’s quite true that Bernoulli’s model has its challenges. It will fail in some narrow circumstances such as desperately needing money for a life-saving operation. Another challenge is deciding what counts in your net worth. How do you model future income (human capital)? How do bankruptcy laws factor in? Despite these challenges, I’ve found Bernoulli’s theory to be an excellent model of rational behaviour. I’ve learned from Kahneman’s research that Prospect Theory is an excellent model of the way people actually make decisions. Faced with a chance to make $300 or lose $200 on the flip of a coin, Prospect Theory explains why people turn down this gamble, and Bernoulli explains why it is rational to accept the gamble. Any tension between the two theories is easily explained by the fact that people are sometimes irrational. 12 comments: 1. People are irrational; me included. Money is a part of life equation, accompanied by family, job content and amount, hobby, ego, etc. 
I think the goal should be, mathematically speaking, to find a cumulative max for all life factors. I'd call it life balance.

1. @AnatoliN: Seeking life balance is important, but I don't think we do it well. The trouble with money is that it affects everything else. If you overspend by $50,000 on your car and $200,000 on your house, there is a good chance that you'll have to stick with a job you hate because it pays well, and your hobbies and family relationships will suffer as well.

2. Returns Reaper June 25, 2013 at 9:22 AM @Michael: The "prospect theory" link in the last paragraph is a link to Bernoulli's paper. Was this what you had intended? Or did you intend to link to something else?

1. @Returns Reaper: Thanks for pointing out the error. It is fixed now.

3. Using Bernoulli's theory of decision making, I believe it would only be rational to take the chance to make $300 or lose $200 on the flip of a coin if your net worth is above a certain level (somewhere between $800 and $900 I believe).

1. @Blitzer68: The threshold is a net worth of $600. However, anyone capable of even the most modest work can expect to have $600 sometime in their future.

2. Interesting if you added some zeros though. I.e. would it only be rational to take the chance to make $300,000 or lose $200,000 on the flip of a coin if your net worth is above $600,000? Seems pretty scary to me.

3. @Blitzer68: Yep. We're wired to find such things scary. According to Kahneman's research, even people with a net worth of $600,000 tend to turn down a chance to flip for +$30 or -$20.

4. Suppose someone with a net worth of $600,000 borrowed $600,000 (interest free say for simplicity) and invested the whole $1.2 million in the stock market expecting to either earn 25% (300,000) or lose 10% (120,000) yearly. This is based on a historical 7% expected yearly return for the stock market with an 18% standard deviation. Using Bernoulli's theory of decision making, this would be a rational bet I believe.
Seems kinda scary though. 1. @Blitzer68: If the stock market actually worked that way it might well be a good bet (although the compounded yearly return is only 6.1%). However, as Mandelbrot showed (in his book The Misbehavior of Markets), the distribution of stock market returns has fat tails. This means that big swings are far more likely to happen than a normal (Gaussian) distribution allows. Your hypothetical investor faces the possibility of complete ruin. For example, if he had been unfortunate enough to have begun his leveraged investing in the S&P 500 on 2007 Oct. 12, he would have lost 57% of his investment (including dividends) by 2009 Mar. 9. If the bank had demanded repayment on that day, he'd have been totally wiped out and left with some debt. 5. wrt the flip for +300 or -200, it would make most sense for a series of say 10 flips. 1. @Anonymous: Interestingly, Kahneman gives a detailed explanation of why people would accept 10 such bets, but not one. This is irrational, but consistent with how our brains are wired.
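The $600 threshold for the +$300/-$200 flip follows directly from logarithmic utility: accept the 50/50 gamble exactly when expected log-wealth does not fall. A quick sketch (my own illustration, not from the post or its comments):

```python
import math

def accepts_flip(wealth, gain, loss):
    """Bernoulli's rule: take the 50/50 gamble iff expected log-wealth
    is at least the log of current wealth."""
    if wealth <= loss:
        return False  # gamble risks ruin; log utility forbids it
    expected = 0.5 * math.log(wealth + gain) + 0.5 * math.log(wealth - loss)
    return expected >= math.log(wealth)

# (W + 300)(W - 200) >= W^2 simplifies to W >= 600:
print(accepts_flip(599, 300, 200))  # False
print(accepts_flip(601, 300, 200))  # True
```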
{"url":"http://www.michaeljamesonmoney.com/2013/06/bernoullis-model-of-risky-decisions.html","timestamp":"2014-04-18T21:38:03Z","content_type":null,"content_length":"140145","record_id":"<urn:uuid:6f6e5a0a-4485-4236-9897-99a7586627d8>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with displacement

X - X0 = V0*t + 0.5*a*t^2
X - X0 = 12.2(6.5) + 0.5(1.87)(6.5)^2 = 118

Am I doing this right? Please help, I've been trying to figure this problem out for 3 hours now.

Here you are calculating the displacement in the first segment. But you are using the wrong value for V0. V0 is the initial velocity, which for this segment is 0 (since it starts from rest). (12.2 is the final velocity.) To use this method, which is perfectly fine, you'll need the initial velocities for each segment of the motion. (Use what you've already figured out in post #1.)
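A quick check of the arithmetic (my own sketch): plugging the final velocity 12.2 into x - x0 = v0*t + 0.5*a*t^2 reproduces the poster's figure of about 118, while the correct v0 = 0 for a segment that starts from rest gives a much smaller displacement.

```python
def displacement(v0, a, t):
    """Constant-acceleration kinematics: x - x0 = v0*t + 0.5*a*t^2."""
    return v0 * t + 0.5 * a * t ** 2

print(displacement(12.2, 1.87, 6.5))  # mistaken v0: about 118.8
print(displacement(0.0, 1.87, 6.5))   # starting from rest: about 39.5
```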
{"url":"http://www.physicsforums.com/showthread.php?t=131210","timestamp":"2014-04-20T01:05:28Z","content_type":null,"content_length":"58855","record_id":"<urn:uuid:215c67c7-5079-4152-bd2d-4dc0813f296e>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00366-ip-10-147-4-33.ec2.internal.warc.gz"}
What is a Factor?

As we begin our study of fractions, we need to have a clear understanding of factors. So, what is a factor? A factor is a number that divides into another number without a remainder. So, for example, 5 is a factor of 20 because 20/5 = 4. There is no remainder. You can also think of factors as the numbers that you multiply together in order to obtain a product. For example, 4 and 5 are factors of 20 because 4(5) = 20. Many numbers, like 20, have a lot of different factors. You may need to think of all of the different factors for a number when working with fractions. Let's take a look at Example 1, where we will list all of the factors for a particular number.

Example 1 - Identifying Factors

As you can see from Example 1, when asked to find factors, simply think of the whole numbers that you multiply together in order to obtain the product. Yes, this is where knowing those multiplication facts really comes in handy! As you begin to compare two numbers, you may be asked to find common factors. When thinking of the word common, how would you define common factors? Yes, common factors are the factors that are the same for a set of numbers. First you must find all of the factors and then identify the "common" factors. Take a look at Example 2.

Example 2 - Common Factors

Common factors are simply the factors that are the same for two or more numbers. Not too hard, right? As we continue our study of factors, we will take a look at the greatest common factor and using factor trees to help identify the greatest common factor. However, we will first study prime and composite numbers in the next lesson.
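The two definitions above translate directly into code (my own sketch, not part of the lesson): a factor of n is any whole number that divides n with no remainder, and common factors are just the overlap of two factor lists.

```python
def factors(n):
    """All whole-number factors of n (divisors with no remainder)."""
    return [d for d in range(1, n + 1) if n % d == 0]

def common_factors(a, b):
    """Factors shared by a and b."""
    return sorted(set(factors(a)) & set(factors(b)))

print(factors(20))             # [1, 2, 4, 5, 10, 20]
print(common_factors(20, 12))  # [1, 2, 4]
```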
{"url":"http://www.algebra-class.com/what-is-a-factor.html","timestamp":"2014-04-20T08:46:59Z","content_type":null,"content_length":"31281","record_id":"<urn:uuid:8da456dd-db2c-4663-929d-6c9829214649>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
Adventures in Mathematical Knitting

Rendering mathematical surfaces and objects in tactile form requires both time and creativity

The Design Process

So, what exactly does the design process for a mathematical object entail? Here is how I proceed. After deciding on an object to model, I articulate my mathematical goals (in practice, I often do this unconsciously). The chosen goals impose knitting constraints. This gives me a frame in which to create the overall knitting construction for the large-scale structure of the object. Then I must consider the object’s fine structure. Are there particular aspects of the mathematics that I can emphasize with color or surface design? Are particular textures needed? While solving the resulting discretization problem, I usually produce a pattern I can follow—my memory is terrible and I would otherwise lose the work.

A recent mathematical creation can serve as a case study. A diagram in Allen Hatcher’s Algebraic Topology had caught my eye, and I thought it would look fantastic knitted. The object shown is an equilateral Y extruded to be a three-finned thing with one end rotated by 1/3 and glued to the other end. Although, unlike a Möbius band, the object is not a manifold, it is a generalization of a Möbius band. So I thought I could use a similar construction—if only I could devise a way to knit outward from the central circle (the center of the Y). I wanted the knitted object to be created from a single strand of yarn, because the mathematical object has a single edge. Thus, I had to create a way to use a single strand of yarn to produce three interlocking sets of free stitches. (Ordinary knitting has only two sets of stitches, upper and lower, per strand of yarn.) Once I had solved that problem—and it took me a while—I decided to use a texture that would look the same from all viewpoints, so that the central circle would be less visible.
For my first attempt at the object, I decided to keep things simple and add no more requirements. The result is shown in Figure 12. After my first attempt was done, I took one look at it and realized that it resembled a cowl. I resized the next version to produce a garment. A wearable mathematical object is a rare, and welcome, practical result. Although I have worked on various knitting projects, I’m still not finished fiddling with designs for the Klein bottle—and it’s been about 20 years since I began. I have been asked to adapt my construction into a wearable hat. It’s one among many mathematical knitting challenges I look forward to completing. • belcastro, s.-m. The Home of Mathematical Knitting. http://www.toroidalsnark.net/mathknit.html. • belcastro, s.-m. 2009. Every topological surface can be knit: A proof. Journal of Mathematics and the Arts 3:2, 67–83. • belcastro, s.-m., and C. Yackel, eds. 2007. Making Mathematics with Needlework: Ten Papers and Ten Projects. Natick, MA: AK Peters. • belcastro, s.-m., and C. Yackel, eds. 2011. Crafting by Concepts: Fiber Arts and Mathematics. Natick, MA: AK Peters. • Dayne, B. 2003. Geek chic. Interweave Knits, Fall, 68–71 and 118. • Doyle, W. P. 2011. Past Professors: Alexander Crum Brown (1838–1922). University of Edinburgh School of Chemistry website. http://www.chem.ed.ac.uk/about/professors/crum-brown.html. • Hatcher, A. 2002. Algebraic Topology. Cambridge: Cambridge University Press. Also available at http://www.math.cornell.edu/~hatcher/AT/ATpage.html. • Reid, M. O. 1971. The knitting of surfaces. Eureka–The Journal of the Archimedeans 34:21–26. • Walker, J. 1923. Obituary notices: Alexander Crum Brown. Journal of the Chemical Society, Transactions 123:3422–3431. • Yuksel, C., J. M. Kaldor, D. L. James and S. Marschner. 2012. Stitch meshes for modeling knitted clothing with yarn-level detail. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2012) 31:3. 
Available with video at http://www.cemyuksel.com/research/stitchmeshes/. • Zimmerman, E. 1989. Knitting Around. Pittsville, WI: Schoolhouse Press.
{"url":"http://www.americanscientist.org/issues/pub/adventures-in-mathematical-knitting/5","timestamp":"2014-04-17T15:30:37Z","content_type":null,"content_length":"134337","record_id":"<urn:uuid:f3da84dd-a55e-4fe8-8dac-d4e970b99c20>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00507-ip-10-147-4-33.ec2.internal.warc.gz"}