content
stringlengths
86
994k
meta
stringlengths
288
619
Augmented Matricies Problem Can someone help me with this augmented matries question please ? Solve the equation (using augmented matrix) 2x-3y+z=9 4x-6y+kz=m thanks youll need to use elementary row operations first id suggest switching rows 1 and 2, then multiplying the first row by the appropriate scalar as to cancle everything possible in row 2 (in this case -2) and then add row 1 to row 2, resulting in a row looking like |0 0 x | y| so in this case x =y, this will be different for yours
{"url":"http://mathhelpforum.com/advanced-algebra/106190-augmented-matricies-problem.html","timestamp":"2014-04-16T07:22:31Z","content_type":null,"content_length":"35532","record_id":"<urn:uuid:27e0e0d9-1614-43b4-9440-2f553e9a683a>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00048-ip-10-147-4-33.ec2.internal.warc.gz"}
Power Analysis for mixed-effect models in R September 18, 2009 By toddjobe The power of a statistical test is the probability that a null hypothesis will be rejected when the alternative hypothesis is true. In lay terms, power is your ability to refine or "prove" your expectations from the data you collect. The most frequent motivation for estimating the power of a study is to figure out what sample size will be needed to observe a treatment effect. Given a set of pilot data or some other estimate of the variation in a sample, we can use power analysis to inform how much additional data we should collect. I recently did a power analysis on a set of pilot data for a long-term monitoring study of the US National Park Service. I thought I would share some of the things I learned and a bit of R code for others that might need to do something like this. If you aren't into power analysis, the code below may still be useful as examples of how to use the error handling functions in R (withCallingHandlers, withRestarts), parallel programming using the snow package, and linear mixed effect regression using nlme. If you have any suggestions for improvement or if I got something wrong on the analysis, I'd love to hear from you. 1 The Study The study system was cobblebars along the Cumberland river in Big South Fork National Park (Kentucky and Tennessee, United States). Cobblebars are typically dominated by grassy vegetation that include disjunct tall-grass prairie species. It is hypothesized that woody species will encroach onto cobblebars if they are not seasonally scoured by floods. The purpose of the NPS sampling was to observe changes in woody cover through time. The study design consisted of two-stages of clustering: the first being cobblebars, and the second being transects within cobblebars. The response variable was the percentage of the transect that was woody vegetation. Because of the clustered design, the inferential model for this study design has mixed-effects. I used a simple varying intercept model: where y is the percent of each transect in woody vegetation sampled n times within J cobblebars, each with K transects. The parameter of inference for the purpose of monitoring change in woody vegetation through time is β, the rate at which cover changes as a function of time. α, γ, σ^2[γ], and σ^2[y] are hyper-parameters that describe the hierarchical variance structure inherent in the clustered sampling design. Below is the function code used I used to regress the pilot data. It should be noted that with this function you can log or logit transform the response variable (percentage of transect that is woody). I put this in because the responses are proportions (0,1) and errors should technically follow a beta-distribution. Log and logit transforms with Gaussian errors could approximate this. I ran all the models with transformed and untransformed response, and the results did not vary at all. So, I stuck with untransformed responses: Model <- function(x = cobblebars, type = c("normal","log","logit")){ ## Transforms if (type[1] == "log") x$prop.woody <- log(x$prop.woody) else if (type[1] == "logit") x$prop.woody <- log(x$prop.woody / (1 - x$prop.woody)) mod <- lme(prop.woody ~ year, data = x, random = ~ 1 | cobblebar/transect, na.action = na.omit, control = lmeControl(opt = "optim", maxIter = 800, msMaxIter = 800) mod$type <- type[1] Here are the results from this regression of the pilot data: Linear mixed-effects model fit by REML Data: x AIC BIC logLik -134.4319 -124.1297 72.21595 Random effects: Formula: ~1 | cobblebar StdDev: 0.03668416 Formula: ~1 | transect %in% cobblebar (Intercept) Residual StdDev: 0.02625062 0.05663784 Fixed effects: prop.woody ~ year Value Std.Error DF t-value p-value (Intercept) 0.12966667 0.01881983 29 6.889896 0.0000 year -0.00704598 0.01462383 29 -0.481815 0.6336 year -0.389 Number of Observations: 60 Number of Groups: cobblebar transect %in% cobblebar 2 We don't learn about power analysis and complex models When I decided upon the inferential model the first thing that occurred to me was that I never learned in any statistics course I had taken how to do such a power analysis on a multi-level model. I've taken more statistics courses than I'd like to count and taught my own statistics courses for undergrads and graduate students, and the only exposure to power analysis that I had was in the context of simple t-tests or ANOVA. You learn about it in your first 2 statistics courses, then it rarely if ever comes up again until you actually need it. I was, however, able to find a great resource on power analysis from a Bayesian perspective in the excellent book "Data Analysis Using Regression and Multilevel/Hierarchical Models" by Andrew Gelman and Jennifer Hill. Andrew Gelman has thought and debated about power analysis and you can get more from his blog. The approach in the book is a simulation-based one and I have adopted it for this 3 Analysis Procedure For the current analysis we needed to know three things: effect size, sample size, and estimates of population variance. We set effect size beforehand. In this context, the parameter of interest is the rate of change in woody cover through time β, and effect size is simply how large or small a value of β you want to distinguish with a regression. Sample size is also set a priori. In the analysis we want to vary sample size by varying the number of cobblebars, the number of transects per cobblebar or the number of years the study is conducted. The population variance cannot be known precisely, and this is where the pilot data come in. By regressing the pilot data using the model we can obtain estimates of all the different components of the variance (cobblebars, transects within cobblebars, and the residual variance). Below is the R function that will return all the hyperparameters (and β) from the regression: ## Get the hyperparameters from the mixed effect model fe <- fixef(x) b<-fe[2] # use the data effect size if not supplied mu.a <- fe[1] vc <- VarCorr(x) sigma.y <- as.numeric(vc[5, 2]) # Residual StdDev sigma.a <- as.numeric(vc[2, 2]) # Cobblebar StdDev sigma.g <- as.numeric(vc[4, 2]) # Cobblebar:transect StdDev hp<-c(b, mu.a, sigma.y, sigma.a, sigma.g) names(hp)<-c("b", "mu.a", "sigma.y", "sigma.a", "sigma.g") To calculate power we to regress the simulated data in the same way we did the pilot data, and check for a significant β. Since optimization is done using numeric methods there is always the chance that the optimization will not work. So, we make sure the regression on the fake data catches and recovers from all errors. The solution for error recovery is to simply try the regression on a new set of fake data. This function is a pretty good example of using the R error handling function withCallingHandlers and withRestarts. fakeModWithRestarts <- function(m.o, n = 100, ...){ ## A Fake Model i <- 0 mod <- NULL while (i < n & is.null(mod)){ mod <- withRestarts({ f <- fake(m.orig = m.o, transform = F, ...) return(update(m.o, data = f)) rs = function(){ i <<- i + 1 error = function(e){ warning = function(w){ if(w$message == "ExceededIterations") cat("\n", w$message, "\n") To calculate the power of a particular design we run fakeModWithRestarts 1000 times and look at the proportion of significant β values: dt.power <- function (m, n.sims = 1000, alpha=0.05, ...){ ## Calculate power for a particular sampling design signif<-rep(NA, n.sims) for(i in 1:n.sims){ lme.power <- fakeModWithRestarts(m.o = m, ...) signif[i] <- summary(lme.power)$tTable[2, 5] < alpha power <- mean(signif, na.rm = T) Finally, we want to perform this analysis on many different sampling designs. In my case I did all combinations of set of effect sizes, cobblebars, transects, and years. So, I generated the appropriate designs: factoredDesign <- function(Elevs = 0.2/c(1,5,10,20), Nlevs = seq(2, 10, by = 2), Jlevs = seq(4, 10, by = 2), Klevs = c(3, 5, 7), ...){ ## Generates factored series of sampling designs for simulation ## of data that follow a particular model. ## Inputs: ## Elevs - vector of effect sizes for the slope parameter. ## Nlevs - vector of number of years to sample. ## Jlevs - vector of number of cobblebars to sample. ## Klevs - vector of number of transects to sample. ## Results: ## Data frame with where columns are the factors and ## rows are the designs. # Level lengths lE <- length(Elevs) lN <- length(Nlevs) lJ <- length(Jlevs) lK <- length(Klevs) # Generate repeated vectors for each factor E <- rep(Elevs, each = lN*lJ*lK) N <- rep(rep(Nlevs, each = lJ*lK), times = lE) J <- rep(rep(Jlevs, each = lK), times = lE*lN) K <- rep(Klevs, times = lE*lN*lJ) return(data.frame(E, N, J, K)) Once we know our effect sizes, the different sample sizes we want, and the estimates of population variance we can generate simulated dataset that are similar to the pilot data. To calculate power we simply simulate a large number of dataset and calculate the proportion of slopes, β that are significantly different from zero (p-value < 0.05). This procedure is repeated for all the effect sizes and sample sizes of interest. Here is the code for generating a simulated dataset. It also does the work of doing the inverse transform of the response variables if necessary. fake <- function(N = 2, J = 6, K = 5, b = NULL, m.orig = mod, transform = TRUE, ...){ ## Simulated Data for power analysis ## N = Number of years ## J = Number of cobblebars ## K = Number of transects within cobblebars year <- rep(0:(N-1), each = J*K) cobblebar <- factor(rep(rep(1:J, each = K), times = N)) transect <- factor(rep(1:K, times = N*J)) ## Simulated parameters b <- hp['b'] g <- rnorm(J*K, 0, hp['sigma.g']) a <- rnorm(J*K, hp['mu.a'] + g, hp['sigma.a']) ## Simulated responses eta <- rnorm(J*K*N, a + b * year, hp['sigma.y']) if (transform){ if (m.orig$type == "normal"){ y <- eta y[y > 1] <- 1 # Fix any boundary problems. y[y < 0] <- 0 else if (m.orig$type == "log"){ y <- exp(eta) y[y > 1] <- 1 else if (m.orig$type == "logit") y <- exp(eta) / (1 + exp(eta)) y <- eta return(data.frame(prop.woody = y, year, transect, cobblebar)) Then I performed the power calculations on each of these designs. This could take a long time, so I set this procedure to use parallel processing if needed. Note that I had to re-~source~ the file with all the necessary functions for each processor. powerAnalysis <- function(parallel = T, ...){ ## Full Power Analysis ## Parallel cl <- makeCluster(7, type = "SOCK") clusterEvalQ(cl, source("cobblebars2.r")) ## The simulations dat <- factoredDesign(...) if (parallel){ dat$power <- parRapply(cl, dat, function(x,...){ dt.power(N = x[2], J = x[3], K = x[4], b = x[1], ...) }, ...) } else { dat$power <- apply(dat, 1, function(x, ...){ dt.power(N = x[2], J = x[3], K = x[4], b = x[1], ...) }, ...) The output of the powerAnalysis function is a data frame with columns for the power and all the sample design settings. So, I wrote a custom plotting function for this data frame: plotPower <- function(dt){ xyplot(power~N|J*K, data = dt, groups = E, panel = function(...){panel.xyplot(...) panel.abline(h = 0.8, lty = 2)}, type = c("p", "l"), xlab = "sampling years", ylab = "power", strip = strip.custom(var.name = c("C", "T"), strip.levels = c(T, T)), auto.key = T Below is the figure for the cobblebar power analysis. I won't go into detail on what the results mean since I am concerned here with illustrating the technique and the R code. Obviously, as the number of cobblebars and transects per year increase, so does power. And, as the effect size increases, observing it with a test is easier. Date: 2009-09-18 Fri HTML generated by org-mode 6.30trans in emacs 22 for the author, please follow the link and comment on his blog: Computational Ecology daily e-mail updates news and on topics such as: visualization ( ), programming ( Web Scraping ) statistics ( time series ) and more... If you got this far, why not subscribe for updates from the site? Choose your flavor: , or
{"url":"http://www.r-bloggers.com/power-analysis-for-mixed-effect-models-in-r-2/","timestamp":"2014-04-21T02:14:07Z","content_type":null,"content_length":"57886","record_id":"<urn:uuid:7ea54581-c5f7-49b9-be7b-7a87e3ca95a7>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00050-ip-10-147-4-33.ec2.internal.warc.gz"}
Lokal Präsentierbare Kategorien Results 11 - 20 of 52 , 1996 "... We take another look at the Chu construction and show how to simplify it by looking at ..." - Diff. Catég "... Abstract. An intrinsic, combinatorial homotopy theory has been developed in [G3] for simplicial complexes; these form the cartesian closed subcategory of simple presheaves in!Smp, the topos of symmetric simplicial sets, or presheaves on the category!å of finite, positive cardinals. We show here how ..." Cited by 11 (8 self) Add to MetaCart Abstract. An intrinsic, combinatorial homotopy theory has been developed in [G3] for simplicial complexes; these form the cartesian closed subcategory of simple presheaves in!Smp, the topos of symmetric simplicial sets, or presheaves on the category!å of finite, positive cardinals. We show here how this homotopy theory can be extended to the topos itself,!Smp. As a crucial advantage, the fundamental groupoid Π1:!Smp = Gpd is left adjoint to a natural functor M1: Gpd =!Smp, the symmetric nerve of a groupoid, and preserves all colimits – a strong van Kampen property. Similar results hold in all higher dimensions. Analogously, a notion of (non-reversible) directed homotopy can be developed in the ordinary simplicial topos Smp, with applications to image analysis as in [G3]. We have now a homotopy n-category functor ↑Πn: Smp = n-Cat, left adjoint to a nerve Nn = n-Cat(↑Πn(∆[n]), –). This construction can be applied to various presheaf categories; the basic requirements seem to be: finite products of representables are finitely presentable and there is a representable 'standard interval'. - Max Kelly volume, J. Pure Appl. Alg , 1996 "... For a suitable collection D of small categories, we define the D-accessible categories, generalizing the #-accessible categories of Lair, Makkai, and Pare; here the ..." Cited by 11 (0 self) Add to MetaCart For a suitable collection D of small categories, we define the D-accessible categories, generalizing the #-accessible categories of Lair, Makkai, and Pare; here the - DPP06] [DPP10] [DS97] [DS03] [EM06 "... ..." , 1999 "... This paper presents an (abstract) model theoretic semantics for ECLP, without directly addressing the computational aspect. This is a rather novel approach on the area of constraints where almost all efforts have been devoted to computational and operational issues; it is important the reader unders ..." Cited by 7 (3 self) Add to MetaCart This paper presents an (abstract) model theoretic semantics for ECLP, without directly addressing the computational aspect. This is a rather novel approach on the area of constraints where almost all efforts have been devoted to computational and operational issues; it is important the reader understands the model-theoretic and foundational orientation of this paper. However, we plan to gradually develop the computational side based on these foundations as further research (Section 7.2 sketches some of the directions of such further research). Some computational aspects of this theory can already be found in (Diaconescu, 1996c). This semantics is , 2002 "... For deterministic systems, expressed as coalgebras over polynomial functors, every tree t (an element of the final coalgebra) turns out to represent a new coalgebra A t . The universal property of these coalgebras, resembling freeness, is that for every state s of every system S there exists a uniqu ..." Cited by 7 (1 self) Add to MetaCart For deterministic systems, expressed as coalgebras over polynomial functors, every tree t (an element of the final coalgebra) turns out to represent a new coalgebra A t . The universal property of these coalgebras, resembling freeness, is that for every state s of every system S there exists a unique coalgebra homomorphism from a unique A t which takes the root of t to s. Moreover, the tree coalgebras are finitely presentable and form a strong generator. Thus, these categories of coalgebras are locally finitely presentable; in particular every system is a filtered colimit of finitely presentable systems. , 1997 "... . Each full reflective subcategory X of a finitely-complete category C gives rise to a factorization system (E; M) on C, where E consists of the morphisms of C inverted by the reflexion I : C ! X . Under a simplifying assumption which is satisfied in many practical examples, a morphism f : A ! B lie ..." Cited by 7 (5 self) Add to MetaCart . Each full reflective subcategory X of a finitely-complete category C gives rise to a factorization system (E; M) on C, where E consists of the morphisms of C inverted by the reflexion I : C ! X . Under a simplifying assumption which is satisfied in many practical examples, a morphism f : A ! B lies in M precisely when it is the pullback along the unit jB : B ! IB of its reflexion If : IA ! IB; whereupon f is said to be a trivial covering of B. Finally, the morphism f : A ! B is said to be a covering of B if, for some effective descent morphism p : E ! B, the pullback p f of f along p is a trivial covering of E. This is the absolute notion of covering; there is also a more general relative one, where some class \Theta of morphisms of C is given, and the class Cov(B) of coverings of B is a subclass -- or rather a subcategory -- of the category C #B ae C=B whose objects are those f : A ! B with f 2 \Theta. Many questions in mathematics can be reduced to asking whether Cov(B) is re... , 1996 "... For the specification of abstract data types, quite a number of logical systems have been developed. In this work, we will try to give an overview over this variety. As a prerequisite, we first study notions of {\em representation} and embedding between logical systems, which are formalized as {\em ..." Cited by 5 (4 self) Add to MetaCart For the specification of abstract data types, quite a number of logical systems have been developed. In this work, we will try to give an overview over this variety. As a prerequisite, we first study notions of {\em representation} and embedding between logical systems, which are formalized as {\em institutions} here. Different kinds of representations will lead to a looser or tighter connection of the institutions, with more or less good possibilities of faithfully embedding the semantics and of re-using proof support. In the second part, we then perform a detailed ``empirical'' study of the relations among various well-known institutions of total, order-sorted and partial algebras and first-order structures (all with Horn style, i.e.\ universally quantified conditional, axioms). We thus obtain a {\em graph} of institutions, with different kinds of edges according to the different kinds of representations between institutions studied in the first part. We also prove some separation results, leading to a {\em hierarchy} of institutions, which in turn naturally leads to five subgraphs of the above graph of institutions. They correspond to five different levels of expressiveness in the hierarchy, which can be characterized by different kinds of conditional generation principles. We introduce a systematic notation for institutions of total, order-sorted and partial algebras and first-order structures. The notation closely follows the combination of features that are present in the respective institution. This raises the question whether these combinations of features can be made mathematically precise in some way. In the third part, we therefore study the combination of institutions with the help of so-called parchments (which are certain algebraic presentations of institutions) and parchment morphisms. The present book is a revised version of the author's thesis, where a number of mathematical problems (pointed out by Andrzej Tarlecki) and a number of misuses of the English language (pointed out by Bernd Krieg-Br\"uckner) have been corrected. Also, the syntax of specifications has been adopted to that of the recently developed Common Algebraic Specification Language {\sc Casl} \cite{CASL/Summary,Mosses97TAPSOFT}. , 2005 "... Abstract. We define support varieties in an axiomatic setting using the prime spectrum of a lattice of ideals. A key observation is the functoriality of the spectrum and that this functor admits an adjoint. We assign to each ideal its support and can classify ideals in terms of their support. Applic ..." Cited by 5 (0 self) Add to MetaCart Abstract. We define support varieties in an axiomatic setting using the prime spectrum of a lattice of ideals. A key observation is the functoriality of the spectrum and that this functor admits an adjoint. We assign to each ideal its support and can classify ideals in terms of their support. Applications arise from studying abelian or triangulated tensor categories. Specific examples from algebraic geometry and modular representation theory are discussed, illustrating the power of this approach which is inspired by recent work of Balmer. Contents - Journal of Algebra , 1998 "... Analogously to the fact that Lawvere’s algebraic theories of (finitary) varieties are precisely the small categories with finite products, we prove that (i) algebraic theories of many–sorted quasivarieties are precisely the small, left exact categories with enough regular injectives and (ii) algebra ..." Cited by 5 (1 self) Add to MetaCart Analogously to the fact that Lawvere’s algebraic theories of (finitary) varieties are precisely the small categories with finite products, we prove that (i) algebraic theories of many–sorted quasivarieties are precisely the small, left exact categories with enough regular injectives and (ii) algebraic theories of many–sorted Horn classes are precisely the small left exact categories with enough M–injectives, where M is a class of monomorphisms closed under finite products and containing all regular monomorphisms. We also present a Gabriel–Ulmer–type duality theory for quasivarieties and Horn classes. 1 Quasivarieties and Horn Classes The aim of the present paper is to describe, via algebraic theories, classes of finitary algebras, or finitary structures, which are presentable by implications. We work with finitary many–sorted algebras and structures, but we also mention the restricted version to the one–sorted case on the one hand, and the generalization to infinitary structures on the other hand. Recall that Lawvere’s thesis [11] states that Lawvere–theories of varieties, i.e., classes of algebras presented by equations, are precisely the small categories with finite products, (in the one sorted case moreover product–generated by a single object; for many–sorted varieties the analogous statement can be found in [4, 3.16, 3.17]). More in detail: If we denote, for small categories A, by P rodωA the full subcategory of Set A formed by all functors preserving finite products, we obtain the following: (i) If K is a variety, then its Lawvere–theory L (K), which is the full subcategory of K op of all finitely generated free K–algebras, is essentially small, and has finite products. The variety K is equivalent to P rodωL(K).
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=127386&sort=cite&start=10","timestamp":"2014-04-18T05:58:03Z","content_type":null,"content_length":"37200","record_id":"<urn:uuid:733d3874-ca30-4db7-af98-67aeac4e9c53>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
Jessup, MD Geometry Tutor Find a Jessup, MD Geometry Tutor ...After college, I took on two students in physics and calculus. While earning my graduate degree in Math Education, I tutored student athletes at the University of Pittsburgh in subjects ranging from Algebra 1 to business calculus. I'm currently employed at Quaker Valley high school teaching Geometry. 19 Subjects: including geometry, English, reading, physics ...I have years of experience tutoring all subject areas for the GED, including high school math, English grammar, and writing. SAT Reading is a little bit of a misleading title, as there are no points awarded for actually reading the material! Instead, test-takers earn points for properly analyzing the text they are given. 37 Subjects: including geometry, chemistry, English, reading ...As a parent, I understand the demands that students are under and I can relate well to the students. I am highly skilled in biology and math for all grade levels. I can also guide your child in becoming more organized and making their study time more efficient. 18 Subjects: including geometry, reading, calculus, writing ...I have a lot of experience playing volleyball! I started in high school where I played 4 years (2 on junior varsity and the remaining two on girls varsity). I also did co-ed in high school for 2 seasons. During my high school career I was captain and I received the senior athletic award. 13 Subjects: including geometry, chemistry, calculus, biology ...I currently help children with their math, science, writing, language arts and other general homework subjects on a daily basis. As a tutor, I am very patient and very positive. I try to bring some fun into the mix to encourage the students and improve their focus on the work at hand. 16 Subjects: including geometry, reading, algebra 1, English Related Jessup, MD Tutors Jessup, MD Accounting Tutors Jessup, MD ACT Tutors Jessup, MD Algebra Tutors Jessup, MD Algebra 2 Tutors Jessup, MD Calculus Tutors Jessup, MD Geometry Tutors Jessup, MD Math Tutors Jessup, MD Prealgebra Tutors Jessup, MD Precalculus Tutors Jessup, MD SAT Tutors Jessup, MD SAT Math Tutors Jessup, MD Science Tutors Jessup, MD Statistics Tutors Jessup, MD Trigonometry Tutors
{"url":"http://www.purplemath.com/Jessup_MD_geometry_tutors.php","timestamp":"2014-04-18T13:40:09Z","content_type":null,"content_length":"23972","record_id":"<urn:uuid:acf5f4fd-b7db-4f69-8700-b0aab0123ca7>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
FIltered colimits of truncated objects in $\infty$-topoi up vote 6 down vote favorite The bare question: Let $\mathcal{C}$ be an $\infty$-topos, and let $\tau_{\leq 0}\mathcal{C}$ be the subcategory of 0-truncated objects (which is the nerve of an ordinary Grothendieck topos: see HTT 6.4.1.3). Does the inclusion $\tau_{\leq 0}\mathcal{C} \hookrightarrow \mathcal{C}$ preserve filtered (or at least directed) ($\infty$-)colimits? Let $Aff_\mathbb{C}$ the Grothendieck site of complex affine schemes. We can then consider the topos of sheaves, $Shv(Aff_\mathbb{C})$, and the $\infty$-topos of $\infty$-stacks, $Shv_\infty(Aff_\ mathbb{C})$. The nerve of the first is equivalent to the subcategory of 0-truncated objects in the second. Given a scheme $X$ and a closed subscheme $Y$ in it defined by a sheaf of ideals $\mathcal{I}$, we can construct the so-called formal completion of $X$ along $Y$ as the directed colimit $X_Y^{\mbox {^}} = \mathrm{colim}\:V(\mathcal{I}^n)$. Typically this is done in $Shv(Aff_\mathbb{C})$. Since the homotopy theory in the latter is trivial, it is also the homotopy colimit of the same diagram. But how does this play with the inclusion $Shv(Aff_\mathbb{C}) \hookrightarrow Shv_\infty(Aff_\mathbb{C})$? Is $X_Y^{\mbox{^}}$ still the homotopy colimit of the same diagram in $Shv_\infty(Aff_\mathbb An affirmative answer to my last question would be enough for me, but I suppose it is a natural question to ask whether this holds for general formal schemes —i.e., sheaves that are locally formal completions as above. ag.algebraic-geometry at.algebraic-topology infinity-topos-theory add comment 1 Answer active oldest votes I believe the answer is YES and, more generally, that $\tau_{\leq n}\mathcal{C}\subset\mathcal{C}$ preserves filtered colimits for any $\infty$-topos $\mathcal{C}$. For the $\ infty$-topos of $\infty$-groupoids this is well-known. This implies the result in any presheaf $\infty$-topos since colimits and truncations are computed objectwise. Finally, if the up vote 9 result is true in $\mathcal{C}$ and $\mathcal{D}\subset\mathcal{C}$ is a left exact localization, then the result is true in $\mathcal{D}$ as well because left exact functors preserve down vote $n$-truncated objects (HTT, Prop. 5.5.6.16). Hi Marc, how does the fact that left exact functors preserve n-truncated objects imply the result for $D$. Maybe I am being slow. – David Carchedi Mar 7 '13 at 3:10 4 @David #1: You need to apply this fact to both the inclusion $i:\mathcal{D}\subset\mathcal{C}$ and its left exact left adjoint $a:\mathcal{C}\to\mathcal{D}$. Let $(X_k)$ be a filtered diagram in $\mathcal{D}$. If each $X_k$ is $n$-truncated, so is each $iX_k$, whence also $colim (iX_k)$ by the assumption on $\mathcal{C}$, whence $a(colim (iX_k))=colim (X_k)$. – Marc Hoyois Mar 7 '13 at 3:26 David: We can compute the colimit by computing in $\mathcal{C}$ and then localizing back down to $\mathcal{D}$. So... truncating a filtered colimit of stuff in $\mathcal{D}$ is the 1 same as truncating the localization of a filtered colimit of stuff in $\mathcal{C}$, which is the same as localizing the truncation of a filtered colimit of stuff in $\mathcal{C}$, which is the same as localizing the filtered colimit of truncations, which is the same taking the filtered colimit of truncations. (Is this what it felt like when people used to write out equations in words?) – Dylan Wilson Mar 7 '13 at 3:29 @David #2: You're right that for sifted colimits you get different things, but sifted colimits are much more general than filtered colimits. Sifted colimits, unlike filtered ones, do 2 not preserve truncated objects: any $\infty$-groupoid whatsoever is a sifted colimit of discrete ones (e.g. a simplicial set representing it). Another difference with the filtered case is that, for a $1$-category, being sifted as an $\infty$-category is strictly stronger than being sifted in the usual sense (e.g. a reflexive coequalizer is $1$-sifted but not $\ infty$-sifted). – Marc Hoyois Mar 7 '13 at 3:39 @Alberto: I can't think of a reference but here's why it's true. Any filtered colimit of $\infty$-groupoids can be computed as the $1$-colimit of a filtered diagram of simplicial sets 3 (this uses HTT 5.3.1.16 and the fact that filtered colimits of simplicial sets are homotopy colimits). Then just use that the functors $\pi_i(-,x)$ preserve filtered colimits. – Marc Hoyois Mar 7 '13 at 15:11 show 6 more comments Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry at.algebraic-topology infinity-topos-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/123826/filtered-colimits-of-truncated-objects-in-infty-topoi/123831","timestamp":"2014-04-18T19:02:53Z","content_type":null,"content_length":"59641","record_id":"<urn:uuid:d52dd973-66cd-4364-a1f9-6f9f9140b81e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00174-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculation for Primary Combustion Characteristics of Boron-Based Fuel-Rich Propellant Based on BP Neural Network Journal of Combustion Volume 2012 (2012), Article ID 635190, 6 pages Research Article Calculation for Primary Combustion Characteristics of Boron-Based Fuel-Rich Propellant Based on BP Neural Network The Fifth Department, Xi'an Research Institute of Hi-Tech, Xi'an, Shaanxi 710025, China Received 29 July 2011; Revised 21 October 2011; Accepted 21 October 2011 Academic Editor: Constantine D. Rakopoulos Copyright © 2012 Wu Wan'e and Zhu Zuoming. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. A practical scheme for selecting characterization parameters of boron-based fuel-rich propellant formulation was put forward; a calculation model for primary combustion characteristics of boron-based fuel-rich propellant based on backpropagation neural network was established, validated, and then was used to predict primary combustion characteristics of boron-based fuel-rich propellant. The results show that the calculation error of burning rate is less than %; in the formulation range (hydroxyl-terminated polybutadiene 28%–32%, ammonium perchlorate 30%–35%, magnalium alloy 4%–8%, catocene 0%–5%, and boron 30%), the variation of the calculation data is consistent with the experimental results. 1. Introduction Boron-based fuel-rich propellant belongs to composite solid propellants and is used for solid rocket ramjet engine. The basic requirements of the propellant for the engine are high burning rate and appropriate pressure index at low pressure. In its combustion, there are multiphase physical and chemical reactions. The former low-pressure combustion model can only be used for qualitative analysis but not for simulation because many of the parameters cannot be measured by experiments, and thus primary combustion property research and formulation design are excessively dependent on experimental study [1–3]. Therefore, applying neural network to simulation of propellant combustion characteristics has become an important research direction, and, in recent years, the neural network method has been applied to HTPB composite solid propellant, NEPE propellant, and so forth [4–9]. But no public reports on the application of the method to calculation for primary combustion characteristics (burning rate and pressure index) of boron-based fuel-rich propellant can be found at home and abroad. BP neural network model can achieve a very close approximation to a complex nonlinear function and is suitable to deal with those problems in which causal relationship is not clear, therefore, in this paper, the concrete combustion process is not taken into account, and calculation for primary combustion characteristics is realized by training BP neural network with formulations and corresponding burning rate data directly. 2. Preferences of Propellant Formulation BP neural network is applied to calculation for primary combustion characteristics with inputs of pressure and characterization parameters of boron-based fuel-rich propellant formulation and output of corresponding burning rate; through training BP neural network, the complex function between input and output can be simulated; eventually, with the given pressure and propellant formulation, corresponding burning rate can be obtained. Therefore, selecting parameters which can reflect the characteristics of boron-based fuel-rich propellant formulation essentially is a top priority. Based on the in-depth study of primary combustion characteristics of boron-based fuel-rich propellant [1–3], the main factors which can affect primary combustion characteristics are analyzed, and a practical scheme for selecting characterization parameters of boron-based fuel-rich propellant formulation was put forward. The characterization parameters are as follows.(1)HTPB content. HTPB (hydroxyl-terminated polybutadiene) is the flexible matrix of the propellant as well as an organic fuel, accounting for about 30% of the total mass of propellant.(2)AP content. AP (ammonium perchlorate) is the only oxidant in the propellant, accounting for about 30% of the total mass of propellant.(3)AP weight-average particle size. Weight-average particle size can reflect particle size gradation of AP. Selecting weight-average particle size as a characterization parameter can avoid setting out the different particle sizes and their corresponding content in the characterization parameters one by one, thereby reducing the number of the characterization parameters.(4)B content. B (boron) is one of the main fuels in the propellant. Due to low primary combustion efficiency and propellant manufacturing problem brought by pure boron powder, coated boron, and reunion boron are usually adopted in current boron-based fuel-rich propellant. In the propellant formulation characterization, pure boron, coated boron and reunion boron must be separate. In addition, the type and amount of coating material in coated boron, the adhesive in reunion boron, and the particle size of reunion boron can also affect primary combustion characteristics of the propellant significantly; they should be characterized as parameters if necessary.(5)Flammable metal content. In the propellant formulation characterization, Mg (magnesium), Al (aluminum), and MA (magnalium alloy) must be separate. In addition, the particle size of flammable metal can also affect primary combustion characteristics of the propellant; it should be characterized as a parameter if necessary.(6)Burning rate catalyst content. The function of burning rate catalyst is to affect combustion characteristics of the propellant. Besides, when other substances and factors in the propellant formulations can also affect primary combustion characteristics of the propellant, they should also be characterized as parameters if necessary. It should be highlighted that, in the simulation, characterization parameters of the propellant formulations should be selected according to actual needs and there is no need to use all the above parameters to characterize the propellant formulations. Selecting appropriate characterization parameters can greatly reduce the computing amount of neural network. 3. Establishment and Validation of BP Neural Network Model Backpropagation is the generalization of the Widrow-Hoff learning rule to multiple-layer networks and nonlinear differentiable transfer functions. Input vectors and the corresponding target vectors are used to train a network until it can approximate a function or associate input vectors with specific output vectors. Networks with biases, a sigmoid layer, and a linear output layer are capable of approximating any function with a finite number of discontinuities. Standard backpropagation is a gradient descent algorithm, as is the Widrow-Hoff learning rule, in which the network weights are moved along the negative of the gradient of the performance function. The term backpropagation refers to the manner in which the gradient is computed for nonlinear multilayer networks. Properly trained backpropagation networks tend to give reasonable answers when presented with inputs that they have never seen. Typically, a new input leads to an output, which is similar to that, the input vectors used in training lead to the correct output. This generalization property makes it possible to train a network on a representative set of input/target pairs and get good results without training the network on all possible input/output pairs [10]. Figure 1 shows the computing process of standard backpropagation neural network. Kosmogorov’s theorem shows that, with appropriate structure and weights-three-layer feedforward neural network can approximate any continuous function, so a three-layer structure is adopted in BP neural network model in this paper. In order to overcome the shortcomings of BP neural network (easy falling into local optimum and slow convergence speed), gradient descent with momentum and adaptive learning rate backpropagation is adopted as learning algorithm of BP neural network. Momentum allows the network to ignore small features in the error surface. Without momentum, a network can get stuck in a shallow local minimum. With momentum, a network can slide through such a minimum. An adaptive learning rate attempts to keep the learning step size as large as possible while keeping learning stable. The learning rate is made responsive to the complexity of the local error surface. In this paper, the experimental burning rate data of boron-based fuel-rich propellant in Chapter III of [3] are adopted for simulation. The propellant formulation is as follows: HTPB (hydroxyl-terminated polybutadiene) 28%–32%, AP (ammonium perchlorate) 30%–35%, MA (magnalium alloy, Mg-Al ratio of 1:1) 4%–8%, GFP (catocene) 0%–5%, B (boron) 30%. The detailed composition of 15 propellant formulations adopted for simulation is shown in Table 1. Accordingly, the following six parameters are selected as the training sample input:(1)Pressure (MPa);(2)HTPB content (%);(3)MA content (%);(4)AP content (%);(5)AP particle size (mm);(6)GFP content (%). The corresponding output of the input sample composed of these six parameters is burning rate (mm/s). Based on the previous design of BP neural network, a calculation model for primary combustion characteristics of boron-based fuel-rich propellant can be established. The setting of the basic parameters is shown in Table 2. In 45 sets of burning rate data (15 propellant formulations at 3 pressures) adopted in this paper, 36 sets were selected as training samples and 9 sets as validation samples. First, use training samples to train BP neural network 10 times, respectively, and save corresponding network one by one; then use validation samples to validate these neural networks. The network which has the minimum mean square error of validation samples was saved as the calculation model for primary combustion characteristics and used to calculate burning rate of 15 experimental formulations under three different pressures. The comparison between calculated data and tested data showed that in 45 sets of data, 35 sets’ relative deviation were within ±3%, accounting for 77.8%; 8 sets’ relative deviation were more than ±3% and less than ±5%, accounting for 17.8%; only 2 sets’ relative deviation were more than ±5%, accounting for 4.4%; all the data’s relative deviation were within ±7.3%; 9 sets of validation sample data’s relative deviation were totally within ±5%. This shows that BP neural network model has rather high calculation accuracy and it can meet the need of calculation for primary combustion characteristics. To validate further accuracy and effectiveness of BP neural network model, multiple linear regression (MLR), radial basis network (RBF), and generalized regression neural network (GRNN) are adopted to calculation based on the same samples and compared with BP neural network model. Table 3 shows mean square error comparison among the four calculation models. As can be seen in Table 3, compared with the other three calculation models, BP neural network model has the highest accuracy. This is the advantage of BP neural network model. Besides, it should be highlighted that, in the simulation, if more data at more pressure levels can be used to train the network model, the calculation accuracy usually can be further improved. 4. Prediction of Primary Combustion Characteristics of Boron-Based Fuel-Rich Propellant In this paper, based on the established BP neural network model, in the pressure range (0.5–1.5MPa) and in the formulation range (HTPB 28%–32%, AP 30%–35%, MA 4%–8%, GFP 0%–5%, and B 30%), primary combustion characteristics of boron-based fuel-rich propellant were predicted, and the variation was preliminarily summed up. The following burning rate data were obtained by BP neural network model, and the pressure index data were obtained by fitting of the burning rate data under different pressures with the empirical equation listed as follows: In (1), refers to burning rate at some pressure; P refers to pressure; n refers to pressure index. When the total amount of HTPB/MA is fixed, effect of HTPB content (wt%) on burning rate and pressure index can be shown in Figures 2(a) and 2(b), respectively. In Figures 2–5, C represents calculation data and E represents experimental data. It can be seen that when increasing HTPB content with a corresponding reduction in MA content, burning rate (under a certain pressure) decreases; pressure index first increases, then decreases and finally rises slightly. When the total amount of AP/MA and AP particle size are fixed, effect of AP content (wt%) on burning rate and pressure index can be shown in Figures 3(a) and 3(b), respectively. It can be seen that when increasing AP content with a corresponding reduction in MA content, both burning rate (under a certain pressure) and pressure index increase. When AP content is fixed, effect of AP particle size on burning rate and pressure index can be shown in Figures 4(a) and 4(b), respectively. It can be seen that when increasing AP particle size, burning rate (under a certain pressure) decreases; pressure index first increases and then decreases. When the total amount of GFP/HTPB is fixed, effect of GFP content (wt%) on burning rate and pressure index can be shown in Figures 5(a), and 5(b), respectively. It can be seen that when increasing GFP content with a corresponding reduction in HTPB content, both burning rate (under a certain pressure) and pressure index increase. In addition, it can be seen from Figures 2–5 that the variation of the calculation data tallies well with experimental results, which validates further accuracy and effectiveness of BP neural network model. Furthermore, BP neural network model can simulate burning rate and pressure index of propellant formulations not tested by experiments, which makes important sense in combustion property researches and formulation design of boron-based fuel-rich propellant. 5. Conclusions (1)In this paper, a calculation model for primary combustion characteristics of boron-based fuel-rich propellant based on BP neural network was established. The simulation results showed that, BP neural network model is superior to multiple linear regression, radial basis network, and generalized regression neural network; the relative deviation of 95.6% of the calculation data obtained by using BP neural network model was less than ±5%, and all the calculation data’s relative deviation was within ±7.3%.(2)The established BP neural network model was used to predict primary combustion characteristics of boron-based fuel-rich propellant. HTPB content, AP content, AP particle size, and GFP content in the propellant formulation were changed, and burning rate and pressure index of corresponding formulations were calculated. The results showed that the variation was consistent with the experimental results.(3)In the adjustable formulation range, the calculation results of different formulations under different pressures can be directly used to optimize the design of propellant formulations, which can reduce the experimental workload, shorten the research cycle, and improve reproducibility of the research. 1. H. Songqi, Study on primary combustion of boron-based fuel-rich propellant, Ph.D. thesis, Northwestern Polytechnical University, Xi'an, China, 2004. 2. W. Yinghong, Researches on combustion of boron-based fuel-rich solid propellants in lower pressure, Ph.D. thesis, Northwestern Polytechnical University, Xi'an, China, 2004. 3. W. Wan'e, Researches on mechanism of pressure index increase of boron-based fuel-rich propellant, Ph.D. thesis, Northwestern Polytechnical University, Xi'an, China, 2009. 4. P. Deng, D. Tian, and F. Zhuang, “A neural network for modeling calculations for composite propellants,” Journal of Propulsion Technology, vol. 17, no. 4, pp. 69–76, 1996 (Chinese). 5. G. Baohua, P. Baolin, and W. Guangtian, “Application of artificial neural network in predicting mechanical property of HTPB propellant,” Journal of Solid Rocket Technology, vol. 20, no. 1, pp. 51–56, 1997 (Chinese). 6. P.-J. Liu, X. Lu, and G.-Q. He, “Burning rate relativity investigation using artificial neural network,” Journal of Propulsion Technology, vol. 25, no. 2, pp. 156–158, 2004 (Chinese). 7. L. Jianmin, T. Shaochun, X. Fuming, et al., “Prediction of burning rate of HTPB propellant by artificial neural network model,” Chinese Journal of Explosives and Propellants, vol. 29, no. 3, pp. 13–16, 2006 (Chinese). 8. H. Yongjun and L. Jinxian, “Burning rate relativity investigation using BP neural network based on genetic algorithm,” Journal of Projectiles Rockets Missiles and Guidance, vol. 26, no. 1, pp. 144–146, 2006 (Chinese). 9. X.-P. Zhang and Z.-L. Dai, “Calculation for high-pressure combustion properties of high-energy solid propellant based on GA-BP neural network,” Journal of Solid Rocket Technology, vol. 30, no. 3, pp. 229–232, 2007. 10. M. T. Hagan, H. B Demuth, and M. Beale, Neural Network Design, PWS Publishing Company, 1996.
{"url":"http://www.hindawi.com/journals/jc/2012/635190/","timestamp":"2014-04-17T02:07:39Z","content_type":null,"content_length":"39945","record_id":"<urn:uuid:adba529a-fb7c-4d5e-8fc3-e5fde602b37b>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00330-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum - Ask Dr. Math Archives: High School Definitions This page: Dr. Math See also the Internet Library: About Math basic algebra linear algebra linear equations Complex Numbers Discrete Math Fibonacci Sequence/ Golden Ratio conic sections/ coordinate plane practical geometry Negative Numbers Number Theory Square/Cube Roots Browse High School Definitions Stars indicate particularly interesting answers or good places to begin browsing. I don't understand the difference between cos(x)^2, cos^2(x) and (cos (x))^2. Are they all same? What is a quadrant? How do you find it? I had to identify 9 ordered pairs on a graph. Now I need to also name the quadrant each one is in. Finding the inverse of a function and graphing yields a graph that has been reflected in the line y = x relative to the function. Inverse proportionality, however, yields a reciprocal relation graphically. Why do these two things have similar names yet mean different things? A circle is not a polygon. Can equilateral triangles also be classified as isosceles? In other words, does an isosceles triangle have exactly two equal sides or at least two equal sides? Is a sphere a two- or a three-dimensional object? Is kite the true math name for this shape, or is there another? Can you give me an explanation and a nice example of isomorphism? In the complex plane, zero (0 + 0i) is on both the real and pure imaginary axes. Is 0 therefore a pure imaginary number as well as a real number? I've been taught that infinity is just a concept, not an actual number. I'm wondering if the same thing can be said of zero? Every other person at a table is eliminated until there is only one person left. Who is the survivor? Do you have information on Konigsberg's bridge? I have a problem with Lagrange Multipliers - can you help? If the percussion section has 6 people, does the expression "the brass section is three times larger than the percussion section" mean that the brass section is 3*6 = 18 people or 4*6 = 24 What are the etymologies of the six trigonometric functions: sine, cosine, tangent, cosecant, secant, and cotangent? The Distributive Law, Associative Law, Identity, and Commutative Equations. I need to know where the name "leg" of a triangle comes from, or what its origin is. How far is a light year in miles or kilometers? I assumed from the graph that the function had a limit at x=0 of 0, but since it involves sin(1/0) I can not prove this using the basic trigonometric limits (sin x/x and (1-cos x)/x), L'Hopital's rule, or by rearranging the equation. Can you help? Show that the transition matrix P from one orthonormal basis to another is unitary, that is, P*P = I. Can you explain the terms linear foot and board foot as they are used in the lumber industry? There is a fence I want to buy, but the ad says '4 foot tall, 50 linear feet. What is a linear foot compared to a regular foot? Are a best-fit line, a line of best fit, and a fitted line synonymous? Which is longer, a ray or a line? What is a locus? Why are mass and weight both measured with the same unit? If weight is different on the moon and mass is not, what is the difference? Do mass and weight refer to the same thing on earth? Is mass equal to weight? Can you give me a definition of mass and how it differs from weight? What is mathematical induction? Can you give an example of the ideas of math induction? What is the meaning of the word 'undefined' in mathematical terms? What is a mathematical model, and how would it be used? How is math related to logic and intuition? I am puzzled by one symbol of typing math. What does the upper case letter C mean? As in (2C1) (3C1) / (47C2) = 6/1081. What do the common math symbols (backward E, upside-down A, etc.) mean? I am doing a project in Algebra 2 and must research matrix multiplication. What is the locus of points equidistant from two parallel lines 8 meters apart? We are drawing pictures of dominoes, triominoes, tetrominoes, and pentominoes. What is the meaning of the root "ominoe"? If a logic statement says, 'James is taking fencing or algebra,' does that mean he is taking one class or the other, or could he be taking both of them? What do the symbols R_{+} and R_{++} mean? Is the term 'Space' a mathematical object? I have seen it in "Vector Space," "Banach Space," and "Hilbert Space," but are they the same thing? Page: [<prev] 1 2 3 4 5 6 7 8 9 [next>]
{"url":"http://mathforum.org/library/drmath/sets/high_defs.html?s_keyid=39756850&f_keyid=39756852&start_at=161&num_to_see=40","timestamp":"2014-04-19T12:37:41Z","content_type":null,"content_length":"22662","record_id":"<urn:uuid:8e986d71-e5ad-43c9-8f45-2e17d33904a9>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-dev] Abstract vectors in optimization Charles R Harris charlesr.harris@gmail.... Thu Jan 8 13:41:24 CST 2009 On Thu, Jan 8, 2009 at 7:59 AM, Ben FrantzDale <benfrantzdale@gmail.com>wrote: > On Tue, Jan 6, 2009 at 10:56 PM, Robert Kern <robert.kern@gmail.com>wrote: >> 2009/1/6 Ben FrantzDale <benfrantzdale@gmail.com>: >> > David, Robert, et al. >> > >> > I think you understand what I am talking about. This is just about >> > optimization (although in general I would argue that functions should >> take >> > as general an interface as possible for the same reasons). >> I'm not sure I would push it that far. Premature generalization can be >> just as bad as premature optimization. I find that a use case-based >> approach is a better principle. Write the code for the problem in >> front of you. Generalize when you have a use case for doing so. > I completely agree; I was just speculating :-) >> Now, we do have a handful of use cases here, but I think we should >> have some care before modifying the functions in place. Generalization >> could cost us performance in the typical case. >> > Attached is all the code I changed to make fmin_cg work with a >> > SimpleHilbertVector class. >> Hmm, can you show us a .diff of the changes that you made? I'd like to >> see more clearly exactly what you changed. > See attached. > Run this: > $ patch Hilbert_cg_example_original.py Hilbert_cg_example.diff -o > Hilbert_cg_example.py > to get Hilbert_example.py. (I can't fit all the files in one email to this > list.) > I cleaned it up a bit and included all the functions I changed, all in one > file to make a diff make sense. The diff lines are almost all just > numpy.dot(u,v) -> inner_product(u,v) and vecnorm(v,norm) -> v.norm(norm). >> > If people are receptive to this approach, I have a few next questions: >> > 1. What is the right interface to provide? Is the interface I described >> the >> > right (Pythonic) way to do it, or is there a better interface? (If it >> were >> > Templated C++, I'd do it with overloaded free functions, but I think it >> > needs to/should be done with class methods.) >> I still don't think this interface supports the manifold-optimization >> use case. The inner product and norm implementations need more >> information than just the vectors. > Good point. I've been focused on optimizing within a vector space not on a > manifold. For nonlinear CG, I suppose the gradient needs to be a vector in > the tangent space, which itself must be a Hilbert space, but the state > "vector" can be in an affine space or on a manifold. What is the > mathematically-correct operation for moving such a "vector" on its manifold? Moving vectors on manifolds is a can of worms, leading to covariant differentiation via connections, parallel transport, Christoffel symbols, etc., etc.. The general case also probably needs a set of coordinate maps for computation. I'm not sure we want to go there. It's easier to embed the manifold in a higher dimensional space and use vectors and inner products, moving the vectors in the embedding space and projecting onto the manifold. Control systems do quaternions ( 3 sphere) that way, reducing the problem to constrained optimization. I suppose much depends much on the problem at -------------- next part -------------- An HTML attachment was scrubbed... URL: http://projects.scipy.org/pipermail/scipy-dev/attachments/20090108/a87fbbff/attachment.html More information about the Scipy-dev mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-dev/2009-January/010787.html","timestamp":"2014-04-17T01:51:49Z","content_type":null,"content_length":"7049","record_id":"<urn:uuid:1eb36c1f-72e4-4339-b03d-00e5ce3f91b6>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00357-ip-10-147-4-33.ec2.internal.warc.gz"}
1999 U.S. Championship Puzzles 1. Balancing Act Assign the values 1 to 10 to the weights in the diagram so that everything balances as shown. Each number will be used exactly once. 2. Rule of 72 Place the numbers from 1 to 9 in the blanks to make the equation correct. Each digit will be used exactly once. Operations are to be performed strictly from left to right. All intermediate results are positive whole numbers. __ - __ / __ + __ / __ + __ / __ x __ - __ = 72 3. Hex Aspiration Find a looped path through the diagram subject to the following constraints. The path proceeds from one cell to an adjacent cell, passes through no cell more than once, does not go through any numbered cells, and never makes a sharp-angled turn (i.e., a turn at a 60^o angle). Each number indicates how many of the adjacent cells are part of the path. Click here for the answers. Click here to go back to the Puzzle Palace.
{"url":"http://www2.stetson.edu/~efriedma/champ/US1999/","timestamp":"2014-04-20T01:29:39Z","content_type":null,"content_length":"1684","record_id":"<urn:uuid:ae80c2e6-7381-4138-bd8f-0083ff36ba7d>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00268-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/515c50c9e4b07077e0c1ceb0","timestamp":"2014-04-19T19:42:39Z","content_type":null,"content_length":"53421","record_id":"<urn:uuid:fd3ff14c-def2-4f71-b0a9-2f89c2b66c44>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00613-ip-10-147-4-33.ec2.internal.warc.gz"}
Cole Valley, San Francisco, CA San Francisco, CA 94132 Personable, effective English, Math and Science Tutor ...In addition, twenty years of marketing has honed my understanding and expertise in the art of writing. I enjoy working in a teaching mode with kids. I have been active with the Boy Scouts for 20 years, and counseled scouts in several merit badge categories,... Offering 10+ subjects including algebra 1
{"url":"http://www.wyzant.com/Cole_Valley_San_Francisco_CA_Algebra_1_tutors.aspx","timestamp":"2014-04-23T23:39:39Z","content_type":null,"content_length":"61768","record_id":"<urn:uuid:87500691-d7e1-4ee0-b26d-96f3b1614fa8>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
Is this the love child of Glantz and Gilmore? The other day I was thinking of running a worst-junk-science-of-the-year poll. Thank God I bided my time otherwise I would have missed the chance to nominate this beauty Isle of Man smoking ban 'cuts heart attacks' A ban on smoking in public places has reduced heart attack admissions, according to research commissioned by the Isle of Man's Department of Health. The department has compared admissions in the two years prior to introduction of the ban on 30 March 2008 and the two years since. It discovered that the number of men over 55 admitted for heart attacks had dropped since the ban. But if we take a look at the ' ' (unpublished and not peer-reviewed, not that that makes a lot of difference these days), a very different picture emerges: Do my eyes deceive me or does this graph show that there significantly more heart attacks after the smoking ban? They don't and there were. In the 23 months before the smoking ban, there were 109 heart attack admissions, or 4.7 per month. In the 23 months after the smoking ban, there were 153 heart attacks, or 6.65 per month.In what universe does this count as a drop in heart attacks? In the crazy world of tobacco control, that's where. Note the regression lines, designed to take your eye off what is actually happening. Note how the second half of the graph has a line that is driven down by the lowish figure for the last month shown (since the next month needed to make it a full two years has mysteriously gone missing). This is a method taken straight out of the Anna Gilmore's box of tricks , with a dash of Glantz's Helena magic thrown in for good measure (small community, inaccessible hospital records, data mining etc.). If there isn't a drop in heart attacks, you simply 'predict' how many would have occurred if the smoking come in and make sure your prediction is higher than the real number. And before you know it the BBC will be falling over itself to report that "a ban on smoking in public places has reduced heart attack admissions" and the New England Journal of Medicine will be beating a path to your door. And the feeble effort shown above is the best this researcher—a maths student at [S:Rutherford Polytechnic:S] the University of Northumbria—could conjure up. The graph that shows all heart attack admissions, (ie. the relevant, non-cherry-picked data set) is even less compelling. Notice that before the ban, there were usually fewer than ten heart attack admissions. Notice, too, that after the ban the rate was usually well ten. And, of course, there were more heart attacks in total after the ban than before it. And, as the flat black line shows, the monthly rate of admissions did not go down one bit in the nigh-on two years after the ban. But you're not supposed to look at any of that. Instead, you are invited to look at the upwards line in the pre-ban period and assume that the rate would have continued rising, even though that line only goes up because of a big jump (by Isle of Man standards) to 14 cases shortly before the ban. Nor are you supposed to notice that any responsible statistician would identify that unusual leap as a statistical artifact. The fact that more than two-thirds of the data points are below the regression shows that it's being contorted by an outlier. It's truly unbelievable that this sort of stuff gets taken seriously. Or it would be if it didn't happen every few months. This is a world where a flat line equals a decline, and a 50% increase in heart attacks equals a reduction in heart attacks. In a year that has seen fierce competition for the title, Ms Howda Jwad of Northumbria University—for it is she—may just have clinched the inaugural World's Worst Junk Science Award in the dying days of the year. Glantz, Pell, Gilmore, Winickoff—it's time to up your game. Thanks to Brian Bond for the tip 9 comments: Eh.. Was SG an abbreviation for Surgeon General or for Stanton Glantz? Doesn't matter, the result is the same... Evidently, blindness has increased since implementation of the smoking ban too. But our heart and soul can't be decieved and the stress caused by an overregulated society and economic woes will only increase the rate of heart attack. It looks too, as if heart attack increases around November and December, which are high-stress times when it would be nice to unwind at a pub with a little cigarette and a pint. I don't know if Christmas time is tax time in the U.K., but in the U.S. everyone gets their tax bills about the same time they're trying to figure out how to afford their Christmas shopping. It's quite stressful having a tax bill for several thousand dollars which leaves only a couple hundred for gifts for the family and food for the feasts -- while at the same time heating and other extra costs of winter are also in need of payment. With all of this stress it would actually lower heart attack if a person were allowed to get away for a couple hours and relax with friends and exhale some of that stress out with a cigarette. The best repair for a heart is freedom and time with friends who can help ameliorate the tension -- rather than sitting alone at home in front of the television with a cheap beer. Beer and tobacco aren't evil if used respectully and responsibly. They're enablers of communion and thoughtfulness with our fellows, but I guess, some people are jealous of this and want to cut off all access to the things that minister to broken and stressed hearts. It breaks my heart. Chris, here’s your critical mistake. When looking at the graph your head needs to be tilted 45° to the right, i.e., “statistical correction” (closing your left eye substantially will also help. In fact, closing both eyes should work perfectly). Howda was top of the one-student class - Mathematics for Profit: Facts are Overrated. With this gem that clearly demonstrates a mastery of Public Health® “ethics” and an entrepreneurial mindset, there’s no doubt that she’ll soon be invited to some TC conference to collect one of the many awards they bestow on each other. She might even meet the heroes of haughtiness, Glantz and Pell. As non-smoker, you opened up my eyes and changed my mind... Keep on keeping on... There are plenty of flaws in reasoning behind this method of fitting separate regression lines to different sections of the data, but what invalidates it completely is the lack of a control population. From memory I think even Gilmore acknowledged that in the "instant heart attack" paper to which you referred. Why it was then published, I don't know. JB "what invalidates it completely is the lack of a control population" No, what invalidates it completely is its total failure to demonstrate any mathematical and statistical competence. It is an exercise in deceit, pure and simple. From a statistical perspective, it doesn't even get as far as needing a control population - as they are measuring nothing that can sensibly be 'controlled' for. As far as comparing two regression lines goes, even if it were a valid technique to use, the conclusion is completely dishonest. This is what is said in the paper: "Before the smoke free legislation was introduced, myocardial infarctions episodes were increasing at a rate of 0.23 per month. Which denotes that roughly every 4 months, there would be on average one more patient admitted with myocardial infarction in the Isle of Man than the previous 4 months? In the 2 years after the legislation was put in place, the results show that there was no longer an average increase. This decrease in inclination, 22.5%, is significantly different (p-value 0.04)" Which of course will sound impressive to those who are fooled by the "P<0.05" mantra into thinking that it means anything at all. However... I repeated the "All patients" linear regression calculations, and the resulting R-squared values (strangely not reported in the paper) for the two lines were 0.15 (pre-ban) and 0.00 (post-ban). R-squared is a residual error statistic that indicates how good a fit the line is to the data points. It has a value range between 0 (no fit) and 1 (perfect fit). Some take the R value (square root) to mean the 'probability of fit', which isn't entirely true, but close. Now 0.15 shows a pretty pathetic fit, but 0.00 shows no fit at all! In other words, it is totally dishonest to claim that there was a statistically signifant difference between the two regression lines, since neither of them was a statistically significant measure of its respective trend in the first place. You can't subtract one wrong from another wrong and claim that the answer is a right! It all goes to show that Public Health practitioners are all too often ignorant in the use of statistical methods, but are entirely dependent on them for their existence - especially on the dreaded 'P<0.05' crutch! People will choose to believe them - they are 'Doctors' after all! As a footnote to the above regression calculations. I also carried out a regression calculation over the whole data range (4 years), after first calculating 12-point moving averages (to smooth out seasonal variability). This time the R-squared value was a whopping 0.8, which translates (roughly) to a 90% confidence that the line fits the points. In other words there is a reliable straight line trend over the whole 4 years, whilst there isn't in either of the two 2-year periods either side of the ban. In other words, it is quite obvious that nothing changed after March 2008, so the smoking ban's headline statistical achievement was actually... Go figure! BB: “You can't subtract one wrong from another wrong and claim that the answer is a right!” If I may clarify a major problem here. Most people are familiar with the standard or usual arithmetic functions, e.g., addition, division, etc. Most, however, are unaware that Public Health® Arithmetik includes a number of additional "functions". For example, breaking the data into two lots – as has been done - is called “subdivision”. Subtracting a wrong from a wrong is called “estrelation”. Performing estrelation to produce the right answer is called “strangulation”: “Strangulation” involves the function of suspending or canceling other rules of arithmetic inference. Therefore, having performed estrelation on the data, strangulation defines the result as “right”. Hope this helps :) In other words: "Torture the data until it confesses!" Yes that explains it!
{"url":"http://velvetgloveironfist.blogspot.com/2010/12/is-this-love-child-of-glantz-and.html?showComment=1292478631326","timestamp":"2014-04-16T07:20:45Z","content_type":null,"content_length":"170197","record_id":"<urn:uuid:83d4a43f-5d25-4d46-8c17-4da285d65f41>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00011-ip-10-147-4-33.ec2.internal.warc.gz"}
What do you think? Re: What do you think? "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: What do you think? I am getting a mean of about 3.023 In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: What do you think? According to the simulation, a guess would be 3ab / pi . But when I calculate the integral as done for the triangle, I get 6ab / pi . "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: What do you think? Instead of running a simulation. Plot the 4 vertices a couple of dozen times. I think that will convince you that the answer is very close to 3.025 In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: What do you think? Yes, I need to rethink the analytical way then. "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: What do you think? Yes, I need to rethink the analytical way then. That is probably true. Looking at each quadrilateral created they were all simple and all had vertices on the ellipse. I have a geogebra demonstration too. I have a chore to do so I will be offline for awhile. See you then. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: What do you think? Okay, see you later. "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: What do you think? Hi gAr; Any results yet? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: What do you think? Nothing other than the results I've told earlier. "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: What do you think? I am having a little bit of trouble with it too. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: What do you think? It's okay.. "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: What do you think? I am wondering why. Is it because there are many types of quadrilaterals? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: What do you think? I'm not sure why. Need to take a break, see you later. "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: What do you think? Okay, see you later. Thanks for coming in. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: What do you think? B is about to fly of on a plane. He's got his ticket, but, because of wasting time in the snack bar, he was the last passenger in the line for boarding the plane. He gets in line and, thinking about how long he'll have to wait, he remembers that the tickets for his plane, which holds a hundred passengers, have been sold out. He also notices that the first person in line is a stubborn, old lady from his neighborhood whose eyesight is not the best. He knows that she will enter the plane and take a random seat and also that each passenger after will take his own seat if it is free and a random seat if his own is taken. B is wandering what his chances are of sitting in his own seat. Same problem was asked in brilliant by a member there! "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: What do you think? Hi gAr; Probably got it while reading some book. It is a fairly standard problem. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: What do you think? That's right, just saying. Because I looked back here to enter the answer. "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: What do you think? Where did you find it? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: What do you think? Find what? "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: What do you think? The quote in post #2465. Where did you find it? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: What do you think? That was asked by anonimnystefy in this thread. Searched it using keywords: seat site:mathisfunforum.com "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: What do you think? I got it thanks. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=287091","timestamp":"2014-04-19T17:24:37Z","content_type":null,"content_length":"34187","record_id":"<urn:uuid:798b199a-8662-4246-8f1a-9c7f4bb6a481>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
guide to series and sequences... arithmetic and geometric I want to make this more clear for people who stumble on this post in the future. The following is meant to help one understand the entire topic that this falls under. Topic: Sequences and Series There are two types of sequences and two types of series. They are geometric sequences arithmetic sequences , and geometric series arithmetic series Geometric sequence vs arithmetic sequenceAn arithmetic sequence is a sequence of numbers where each new term after the first is formed by adding a fixed amount called the common difference to the previous term in the sequence. Set A={1,2,3,4,5,6,7,8,9,10} Set B={2,4,6,8,10,12,14} Set C={3,8,13,18,23,28} In 'set A', the common difference is the fixed amount of one. In 'set B' the common difference is the fixed amount of two, and in 'set C' the common difference is the fixed amount of five. As you most likely noticed already, the common difference is found by finding the difference between two consecutive terms within the sequence. For example, in 'set C', to find the common difference compute A geometric sequence on the other hand, is a sequence of numbers where each term after the first is found by multiplying the previous term by a fixed number called the common ratio Set D={2,4,8,16,32} Set E={3,9,27,81} Set F={5,10,20,40,80} You might notice that the difference between consecutive numbers in the above three sets are not a fixed amount. For instance, in 'set F', the first two terms (5 and 10) have a smaller difference than the last two terms (40 and 80). Therefore, the above sets are geometric sequences. The difference between two consecutive numbers is therefor the common ratio. To find the common ratio you simply take the ratio one consecutive number to the one before it . In 'set F' this would be (10/5=2). Therefore, n 'set F' the common ratio is two. In 'set E' the common ratio is (27/9=3). In 'set' D' the common ratio is two (32/16=2). The difference between the two types of sequences is that in arithmetic sequences the consecutive numbers in a set differ by a fixed amount known as the common difference whereas in a geometric sequence the consecutive numbers in a set differ by a fixed number known as the common ratio. Sequence vs Series This is quite simple. A sequence is a list of numbers. A series is created by adding terms in the sequence. There you go, now you know the difference. So if you take 'set A' and add the terms then you have an arithmetic series. If you take 'set D' and add the terms, then you have a geometric series. Sequence: {1, 3, 5, 7, 9, …} Series: {1+3+5+7+9+…} What the GMAT could ask us to do with sequences and series and how to do it! There is no limit to what the GMAT can ask you to find when dealing with series and sequences. Here are some examples of things you may be asked to find/do with them. (1) The sum of numbers in a series (which can be asked in many tricky ways such as the sum of all the numbers, sum of just the even numbers, sum of just the odd numbers, sum of only the numbers which are multiples of 7, sum of the first 10 numbers, and many more tricky ways!) (2) The nth term in a sequence (3) How many integers are there in a sequence Anyway, now that you get the point... lets give you the formulas that will allow you to answer any question regarding series and sequences. I will then show you how to use the formulas to answer some questions that might not be intuitive of non math geniuses. Formula for geometric sequence (when there is a common ratio) dark green means subscript Recursive (to find just the next term): = a * r Explicit (to find any nth term): = a = nth term = the first term r = common ratio In reality, you only need to know the explicit formula, because you can find any term with it. I only put the recursive formula for understanding. Formula for arithmetic sequence (when there is a common difference) dark green means subscript = a + d = a + (n-1)d = nth term = first term d = common difference Again, you only need to know the explicit formula, because you can find any term with it. I only put the recursive formula for understanding. Formula for geometric series (when there is a common ratio) dark green means subscript = a = Sum of first nth terms = first term r = common ratio n = nth term Formula for arithmetic series (when there is a common ratio) dark green means subscript + a (First term + Last term) The above two equations are the same (I put them in both ways because some prep programs teach "first + last" but it is important to see that in the first of the two, the last term is identified as a . Well what if you do not know the last term? Then you have to calculate it using the equation for the nth term (solving for a ) of an arithmetic sequence which is listed above... or you can substitute the formula for a into the first one of these two by replacing an with what is equals and simplifying. You get the following: + (n-1)d] Sn = sum of the series = the first term = the nth term n = the number of terms d = the common difference PART 2 Now that we know all this information, there are some important things that are understood as well to ensure that the formulas are used correctly. How to find the number of integers in a set (Last term - First term) + 1 *A mistake is that people will forget to add the 1. The number of terms between 3 and 10 is not 7, it is 8. A common mistake is that people will calculate (10-3=7)... but this is wrong. Remember, as Manhattan GMAT says, "Add one before you are done". *Notice how I used the word "term" and not number. This is important because sometimes you don't always just put the first and last number you are given. For example, If you are asked to find the number of even integers between 1 and 30, you don't use the "first number" in the set. The first number is "1", which is odd, and we are only speaking about even numbers. Therefore, the first term is "2", not "1", even though the set or question might have stated "from 1-30". Same goes with the last term. There is another step needed to answer this question though.Find number of odd integers (or even) in a set \frac{(Last term - First term)}{2} + 1 *If the question is to find the number of odd integers between 2 and 30, then your first term is 3, and your last term is 29. They must be odd to fit in the set you are asked to analyze. *If the question is find the number of even integers between 3 and 29, then your first term is 4, and your last term is 28. Find number of integers that are a multiple of a certain number in a set GMAT questions can get tricky, but luckily not too tricky. For example... What if you are asked to "find the number of multiples of 7 between 2 and 120"? \frac{(Last term - First term)}{increment} ) + 1 [\frac{(119 - 7)}{7}] + 1 All you have to do is instead of dividing our old formula by 2, you divide it by the increment. Also, notice how my first and last terms are the first term that is a multiple of 7 and the last term that is a multiple of seven within the set!Sum of odd numbers in a series This seems to be a popular topic on GMAT forums. Its quite simple. You already know everything you need to after reading this post. It is a two step problem. Here are the two steps: (1) Find the number of odd terms. This is you "n" value now. (2) Plug in the "n" value into the formula for an arithmetic series. There are some short cuts and concepts that you should know about this topic. (1) The mean and the medium of any arithmetic sequence is equal to the average of the first and last terms. (2) The sum of an arthritic sequence is equal to the mean (average) times the number of terms. (3) The product of n consecutive integers is always divisible by n! So, 4x5x6 (4*5*6=120) is divisible by 3! (4) If you have an odd number of terms in consecutive set, the sum of those numbers is divisible by the number of terms. (5) number four (above) does not hold true for consecutive sets with an even amount of terms. Additional Exercises Thank you
{"url":"http://gmatclub.com/forum/guide-to-series-and-sequences-arithmetic-and-geometric-85969.html","timestamp":"2014-04-18T11:27:30Z","content_type":null,"content_length":"212946","record_id":"<urn:uuid:9645778a-fad8-4e97-aa68-7f7881c1837e>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
First Order System's Time Constant [tex]\omega [/tex] has units of [tex] \frac{rad}{sec}[/tex] (s = jw+sigma) , Hz has units of [tex] \frac{1}{s}[/tex] so the connection you made between the derivative, 1/s and, Hz for the s domain is Thank u for the reply , but Hz [1/s] and omega's units [rad/s] are not the same, u should divide\multiply it by 2*pi. This is exactly my question, the units don't match (according to the theory I've learned). In theoretical problems it doesn't matter, but when i use actual numbers i need to decide how to use the data, and how to convert the units accordingly. Yanai Barr
{"url":"http://www.physicsforums.com/showthread.php?p=3577180","timestamp":"2014-04-21T14:40:14Z","content_type":null,"content_length":"48953","record_id":"<urn:uuid:110ce2db-9279-40cb-b0e5-7d7b00cd4ecd>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00599-ip-10-147-4-33.ec2.internal.warc.gz"}
Solving Quadratic Equations Using Square Roots - Problem 2 Here I have a quadratic equation with no b term so right away I’m thinking I’m going to want to take the square root of both sides of an equation but first I want to get x isolated. I’m going to have to get rid of this -81 piece by adding 81 to both sides, 100x² equals 81. Next thing I want to do is to get rid of that 100. I’m going to divide both sides by 100 so that I have x² equals 81 over 100. Okay I’m almost there in order to have regular old x and not x² I need to square root both sides. X is going to be equal to the positive and negative square root of 81 over 100. So depending on what your textbook says you might grab a calculator and do that on your calculator or you might be able to know in your head that the square root of 81 over 100 is 9 over 10. So x is going to be equal to positive and negative 9/10. The way I would check that is by substituting in one at a time positive 9/10 and then negative 9/10 here and making sure that’s equal to 0. So guys these problems aren’t too bad if there’s no b term just undo everything that’s been done to x including taking the square root of both sides at the end. square root
{"url":"https://www.brightstorm.com/math/algebra-2/quadratic-equations-and-inequalities/solving-quadratic-equations-using-square-roots-problem-2/","timestamp":"2014-04-20T05:51:00Z","content_type":null,"content_length":"68535","record_id":"<urn:uuid:58fa330e-1f58-4958-b85d-4393d92d4fa9>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
PCTS, Princeton Center for Theoretical Sciences, Princeton University, Princeton, New Jersey Igor Klebanov Associate Director Eugene Higgins Professor of Physics 402-A | 336 Jadwin Hall Much of my work since 1996 has focused on the exact relations between quantum field theories in four and three space-time dimensions, and higher dimensional theories which include gravity. My work on thermodynamics and absorption cross-sections by stacks of extended objects called D-branes paved the way to the formulation of the Anti-de Sitter/Conformal field Theory (AdS/CFT) correspondence. I made many contributions to the formulation of the “AdS/CFT dictionary,” which relates scaling dimensions and correlation functions in strongly interacting field theory to semi-classical dynamics in AdS space. My collaborators and I have also constructed a tractable gravitational description of a gauge theory which is nearly conformal at short distances but exhibits color confinement at long In flat space there are problems with interacting fields of spin greater than two, but Mikhail Vasiliev and his collaborators have succeeded in constructing consistent interacting higher-spin theories in Anti-de Sitter space. These theories necessarily include gravity (spin two particles). In a 2002 paper, Alexander Polyakov and I conjectured that the simplest of Vasiliev's theories in 4-dimensional AdS space is dual, in the sense of the AdS/CFT correspondence, to O(N) symmetric theories of scalar fields in three dimensions. These field theories are very well-known; they generalize the theories that describe second-order phase transitions observed in real world, like the water-vapor critical point. They are rather simple because the basic fields are N-dimensional vectors rather than N by N matrices. By now there is considerable evidence that the conjecture we made in 2002 is correct. Notably, this provides a purely bosonic example of exact AdS/CFT correspondence (unlike the many earlier examples, it does not rely on supersymmetry). My most recent papers with Simone Giombi and others describe how the higher spin AdS/CFT correspondence works at one loop. In 2011 I was involved in proposing a positive quantity that decreases along any renormalization group flow from one three-dimensional CFT to another: it is minus the logarithm of the path integral on the three-dimensional sphere. This quantity is related to quantum entanglement entropy which has been the subject of much recent work by me and many others. Selected Publications I.R. Klebanov and A.A. Tseytlin, "Entropy of near-extremal black p-branes," Nucl. Phys. B475 (1996) 164, hep-th/9604089. I.R. Klebanov, "World volume approach to absorption by nondilatonic branes," Nucl. Phys. B496 (1997) 231, hep-th/9702076. S.S. Gubser, I.R. Klebanov, and A.M. Polyakov, "Gauge theory correlators from noncritical string theory," Phys. Lett. B428 (1998) 105, hep-th/9802109 I.R. Klebanov and E. Witten, "Superconformal Field Theory on Threebranes at a Calabi-Yau Singularity," Nucl. Phys. B536 (1998) 199, hep-th/9807080 I.R. Klebanov and E. Witten, "AdS/CFT Correspondence and Symmetry Breaking," Nucl. Phys. B556 (1999) 89, hep-th/9905104 I.R. Klebanov and A.A. Tseytlin, "Gravity Duals of Supersymmetric SU(N)xSU(N+M) Gauge Theories," Nucl. Phys. B578 (2000) 123, hep-th/0002159 I.R. Klebanov and M. Strassler, "Supergravity and a Confining Gauge Theory: Duality Cascades and Chiral Symmetry Breaking Resolution of Naked Singularities," JHEP 0008 (2000) 052, hep-th/0007191 S.S. Gubser, I.R. Klebanov, and A.M. Polyakov, "A Semiclassical limit of the gauge/string correspondence," Nucl. Phys. B636 (2002) 99, hep-th/0204051 I.R. Klebanov and A.M. Polyakov, "AdS Dual of the Critical O(N) Vector Model," Phys. Lett. B550 (2002) 213, hep-th/0210114 I.R. Klebanov and J.M. Maldacena, "Solving Quantum Field Theories via Curved Spacetimes," Physics Today 62 (2009) 28 D. Jafferis, I.R. Klebanov, S. Pufu and B. Safdi, "Towards the F-Theorem: N=2 Theories on the Three-Sphere," JHEP 1106 (2011) 102, ArXiv:1103.1181 I.R. Klebanov, S. Pufu and B. Safdi, "Towards the F-Theorem: N=2 Theories on the Three-Sphere," JHEP 1110 (2011) 038, ArXiv:1105.4598 I.R. Klebanov, S. Pufu, S. Sachdev and B. Safdi, "Entanglement Entropy of 3-d Conformal Gauge Theories with Many Flavors," JHEP 1205 (2012) 036, ArXiv:1112.534 S. Giombi and I.R. Klebanov, "One Loop Tests of Higher Spin AdS/CFT," JHEP 1312 (2013) 068, ArXiv:1308.2337 S. Giombi, I.R. Klebanov and B. Safdi, "Higher Spin AdSd+1 /CFTd at One Loop," ArXiv: 1401.0825
{"url":"http://www.physics.princeton.edu/pcts/people/klebanov_igor.html","timestamp":"2014-04-20T17:14:28Z","content_type":null,"content_length":"11417","record_id":"<urn:uuid:78f6fe8c-ea7d-4a28-a2c3-3c095b080d02>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
January 26th 2011, 09:33 AM #1 Junior Member Oct 2009 Suppose $f_1(x) \sim g_1(x)$ and $f_2(x) \sim g_2(x)$ as $x \rightarrow \infty$ for some functions $f_1,f_2,g_1,g_2$. Then I know that $f_1(x) + f_2(x) \sim g_1(x) + g_2(x)$ as $x \rightarrow \ infty$, and that more generally we have $\sum_{i=1}^k f_i(x) \sim \sum_{i=1}^k g_i(x)$ as $x \rightarrow \infty$ for any fixed $k \in \mathbb{N}$ if each $f_i(x) \sim g_i(x)$. But what if $k$ is not fixed, and actually depends on $x$? For example, is it true that $\sum_{0 \leq k \leq x} f_i(x) \sim \sum_{0 \leq k \leq x} g_i(x)$? Follow Math Help Forum on Facebook and Google+
{"url":"http://mathhelpforum.com/calculus/169411-asymptotics.html","timestamp":"2014-04-21T02:25:26Z","content_type":null,"content_length":"28784","record_id":"<urn:uuid:f9294d5f-7c17-41a2-83c2-37731746b429>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00572-ip-10-147-4-33.ec2.internal.warc.gz"}
2000 kilometers equals how many miles You asked: 2000 kilometers equals how many miles Say hello to Evi Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we will be adding all of Evi's power to this site. Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire.
{"url":"http://www.evi.com/q/2000_kilometers_equals_how_many_miles","timestamp":"2014-04-17T18:56:37Z","content_type":null,"content_length":"51440","record_id":"<urn:uuid:ec1f3caa-5f12-4352-932c-8f57a394f134>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
differentiating a function... January 29th 2009, 09:00 AM differentiating a function... I think I have this, but want to make sure... Let g(x)= x^2 f(x) What does g'(x)=? Using the power rule I come up with (2x * f(x) * f'(x) )*(x^2 * f'(x)) Is that right? Any help would be greatly appreciated. January 29th 2009, 09:02 AM Nope, you have to use the product rule : $[uv]'=u'v+uv'$, where $u=x^2$ and $v=f$ January 29th 2009, 09:03 AM could you refresh me on how to work through that? January 29th 2009, 09:08 AM January 29th 2009, 09:11 AM yeah, that's what i had, right? January 29th 2009, 09:16 AM January 29th 2009, 09:33 AM I had a * not a + thanks a bunch MOO!
{"url":"http://mathhelpforum.com/calculus/70618-differentiating-function-print.html","timestamp":"2014-04-17T23:27:11Z","content_type":null,"content_length":"8129","record_id":"<urn:uuid:89e5cf3d-efe5-4f32-858b-bfb875c11607>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
Quail Heights, FL Math Tutor Find a Quail Heights, FL Math Tutor ...I am required to maintain a 3.0 average and have a 3.7 unweighted and a 6.06 weighted GPA. I also have over 500 community service hours which I obtained from volunteering at a daycare in Georgia over the course of 3 consecutive summers. I assisted teaching several of the classes for 10 hours a day, 5 days a week. 11 Subjects: including algebra 1, algebra 2, vocabulary, geometry ...I am a very approachable and positive person. Please feel free to contact me any time.I have taught and tutored algebra for about 5 years. With my patience and positive attitude, my success rate is high. 18 Subjects: including geometry, study skills, biochemistry, cooking ...Additionally, I worked as a discussion leader for both general and organic chemistry where I led students through problem sets and answered any questions they may have had. Finally, I worked as a chemistry laboratory teaching assistant (TA) for two years and was recognized for my work by receive... 14 Subjects: including precalculus, biochemistry, MCAT, organic chemistry ...I look forward to working with you in the future.I have tutored Algebra for 7 years at the Sylvan Learning Center in Miami Lakes. During my time at Sylvan, I was the "go to" advanced math tutor taking on all accelerated Algebra tutoring sessions. I also taught night school for three years at Hi... 18 Subjects: including SAT math, geometry, ACT Math, prealgebra ...I am a CPA and teaching assistant. I have both public accounting experience from a Regional CPA Firm as well as 20 years of industry experience serving as Controller and CFO. I can take you to the next level. 2 Subjects: including prealgebra, accounting Related Quail Heights, FL Tutors Quail Heights, FL Accounting Tutors Quail Heights, FL ACT Tutors Quail Heights, FL Algebra Tutors Quail Heights, FL Algebra 2 Tutors Quail Heights, FL Calculus Tutors Quail Heights, FL Geometry Tutors Quail Heights, FL Math Tutors Quail Heights, FL Prealgebra Tutors Quail Heights, FL Precalculus Tutors Quail Heights, FL SAT Tutors Quail Heights, FL SAT Math Tutors Quail Heights, FL Science Tutors Quail Heights, FL Statistics Tutors Quail Heights, FL Trigonometry Tutors Nearby Cities With Math Tutor Crossings, FL Math Tutors Gables By The Sea, FL Math Tutors Goulds, FL Math Tutors Kendall, FL Math Tutors Leisure City, FL Math Tutors Modello, FL Math Tutors Naranja, FL Math Tutors Perrine, FL Math Tutors Princeton, FL Math Tutors Redland, FL Math Tutors Richmond Heights, FL Math Tutors Snapper Creek, FL Math Tutors South Miami Heights, FL Math Tutors Village Of Palmetto Bay, FL Math Tutors Westchester, FL Math Tutors
{"url":"http://www.purplemath.com/Quail_Heights_FL_Math_tutors.php","timestamp":"2014-04-21T13:00:10Z","content_type":null,"content_length":"23925","record_id":"<urn:uuid:e6a9de2b-3561-4ec5-b1a0-e8c4cae7b7f8>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00255-ip-10-147-4-33.ec2.internal.warc.gz"}
LyX’s detailed Math manual by the LyX Team Version 2.0.x 2General Instructions Only inline formulas are allowed inside tables. Command Scheme Most of the LaTeX-commands for math constructs have the following scheme: Syntax Explanation • The symbol denotes a space character to be input. • An arrow like → denotes the usage of the corresponding arrow key on the keyboard. Available units 3Basic Functions 3.1Exponents↓ and Indices↓↓↓ 3.2Fractions ↓ The command for the example above is: 3.4Binomial Coefficients↓ 3.5Case Differentiations↓ 3.7Placeholders ↓↓↓ It is possible to place up to 6 lines above or below characters. 3.9Ellipses ↓ The number of columns specifies how many columns should be spanned. Distance is a factor for the distance between the dots. To use the commands for text, they have to be inserted in TeX-mode. 4Matrices ↓ Horizontal alignment: 5Brackets and Delimiters↓↓ 5.1Vertical Brackets and Delimiters↓ 5.1.1Manual Bracket Size ↓ These commands are used to emphasize levels of brackets: Here is an overview about all bracket sizes: In the following table is a comparison of the variants: 5.1.2Automatic Bracket Size ↓ 5.2Horizontal Brackets↓ 6.1Horizontal Arrows↓ 6.2Vertical and diagonal Arrows↓↓ 7Accents ↓ 7.1Accents for one Character\texorpdfstring ↓ In mathematical text, umlauts and other accented characters can directly be inserted. 7.2Accents for Operators↓ 7.3Accents for several Characters↓ 8Space ↓ 8.1Predefined Space ↓ The last size seem to produce no space. It is displayed red in LyX contrary to the other sizes, because it is a negative space. There are two more negative spaces: Negative spaces can lead to characters overlapping each other. Thus they can be used to enforce ligatures, what is e.g. useful for summation operators: Relations like for example equal signs, are always surrounded by space. To suppress this, the equal sign is placed into a TeX-brace. The following example demonstrates this: An example to visualize the difference: 8.2Variable Space\texorpdfstring ↓ 8.3Space besides inline Formulas↓ 9Boxes and Frames↓↓ 9.1Boxes with Frame ↓ The frame thickness can also be adjusted. To do this the following commands have to be inserted in TeX-mode before the formula were inserted in TeX-mode. The given values are used for all following boxes. To return to the standard frame size, the command is inserted in TeX-mode before the next formula. 9.2Boxes without Frame ↓ 9.3Colored Boxes ↓ One of the following predefined colors can be chosen: An example: 9.4Paragraph Boxes ↓ The following example shows a framed parbox in a line: 10.1Big Operators ↓↓↓ The operators are called big because they are bigger than the sometimes equal looking binary operators. All big operators can have limits as described in the next subsection. Advice for Integrals 10.2Operator Limits ↓ Limits are created by super- and subscripts: Limits of inline formulas are set right beside the operator. Limits in displayed formulas are set above or below the operator, except for integral limits. To avoid this, the following macro can be used in the LaTeX-preamble: where the limit can consist of several conditions. 10.3Binary Operators ↓ 10.4Self-defined Operators ↓ For example the LaTeX-preamble line 11.1Font Styles ↓ 11.2Bold Formulas ↓ 11.3Colored Formulas↓ The following example was colored completely dark green and partly red: 11.4Font Sizes ↓ For characters in formulas there are, analog to characters in text, the following size commands: Within a formula the size can be changed using the following size commands: After entering these commands, a blue box appears in which the formula parts are inserted. If a symbol cannot be displayed in different sizes, it will always be displayed in the default size. 12Greek Letters↓ 12.1Small Letters↓ 12.2Big Letters↓ Then all big Greek letters in a document will automatically be typeset italic. 12.3Bold Letters↓ 13.1Mathematical Symbols ↓ 13.2Miscellaneous Symbols ↓ 13.3The Euro-Symbol €↓↓ An overview about the different Euro symbols: 14Relations ↓↓ Relations are, in contrary to symbols, always surrounded by space. 15.1Predefined Functions ↓ The following functions are predefined: 15.2Self-defined Functions↓ To use a function that is not predefined, like for example the sign function sgn(x), there are two possibilities: • Define the function by inserting the following line to the LaTeX-preamble Now the new defined function can be called with the command \sgn. • Write the formula as usual, mark the formula name, in our example the letters sgn, and change it to mathematical text. At last a space is inserted between prefactor and function. The first method is more suitable when the self-defined function should be used several times. The modulo-function is special, because it exists in four variants. In an inline formula less space is set before the function names for all variants. 16Special Characters↓ 16.1Special Characters in Mathematical Text The following commands can only be used in mathematical text or in TeX-mode: 16.2Accents in Text ↓ With the following commands all letters can be accented. The commands must be inserted in TeX-mode. 16.3Minuscule Numbers↓ 16.4Miscellaneous special Characters ↓ The following characters can only be inserted to formulas by using commands: 17Formula Styles ↓ • There are two different alignment styles: Centeredis the predefined standard Indentedfor this the option fleqn must be inserted in the menu Document ▷ Settings under Document Class is used, the indentation can be adjusted with the length . Should the distance be 15mm, the following command line is inserted in the LaTeX-preamble When no length is specified, the predefined value of 30pt will be used. • And two different numbering styles: Rightis the predefined standard Leftfor this the option leqno must be inserted in the menu Document ▷ Settings under Document Class 18Multiline Formulas ↓ 18.1.1Line Separation ↓ 18.1.2Column Separation ↓ Multiline formulas form a matrix. A formula in the eqnarray environment is for example a matrix with three columns. By changing the column separation in this environment, the space beside the relation sign can be changed. Long formulas can be typeset using these methods: • When one side of the equation is much shorther than the line width, this one is chosen for the left side and the right side is typeset over two lines: (7) H = W[SB]+W[mv]+W[D]−(ℏ^2)/(2m[0])Δ−(ℏ^2)/(2m[1])Δ[1]−(ℏ^2)/(2m[2])Δ[2]−(e^2)/(4πε[0]|r−R[1]|) − (e^2)/(4πε[0]|r−R[2]|)+(e^2)/(4πε[0]|R[1]−R[2]|) The minus sign at the beginning of the second line does normally not appear as operator because it is the first character of the line. Thus it would not be surrounded by space and could not be distinguished from the fraction bar. To avoid this, 3pt space was inserted behind the minus sign with the command \hspace.↓ • When both sides of the equation are too long, the command \lefteqn↓ is used. It is inserted to the first column of the first line and effects that all further insertions overwrite the following (8) \lefteqn4x^2(B^2+x^2[0])+4x[0]x(D−B^2)+B^2(B^2−2r^2[g]+2x^2[0]−2r^2[k])+D^2 − B^2−2B√(r^2[g]−x^2+2x[0]x−x^2[0])+r^2[g]−x^2+2x[0]x−x^2[0] =B^2+2(r After the insertion of \lefteqn, the cursor is in a purple box that is a bit shifted to the left from the blue one. In this the formula is inserted. The content of the further lines is inserted to the second or another formula column. The greater the column number where it was inserted, the larger the indentation. Note the following when using \lefteqn: □ The formula doesn’t use the full page width. When e.g. the term −B^2 is added to the first line in the above example, it would have been outside the page margin. To better use the width, negative space can be inserted at the beginning of the first line. □ Due to a bug in LyX the cursor cannot be set with the mouse into the first line. One can only set the cursor at the beginning of the line and move it with the arrow keys. • Other methods to set long formulas are offered by the environments described in 18.5↓ and 18.6↓. 18.1.4Multiline Brackets ↓ The closing bracket is smaller than the opening bracket because brackets with variable size may not span multiple lines. 18.2Align Environments Align environments can be used for every kind of multiline formulas. They are specially useful to set several formulas side by side. Align environments consist of columns. The odd columns are right aligned, the even ones left aligned. Every line in an Align environment can be numbered. 18.2.1Standard align Environment ↓ 18.2.2Alignat Environment↓ 18.2.3Flalign Environment ↓ 18.3Eqnarray Environment ↓ 18.4Gather Environment↓ 18.5Multline Environment ↓ As example the above formula where the command 18.6Multiline Formula Parts ↓ 18.7Text in multiline Formulas ↓↓ 19Formula Numbering↓↓ The name of the label is displayed in LyX within two parentheses behind formula. A formula with a label is always numbered. Here are as examples cross-references to formulas of the following subsections: To create the example, the following is done: 1. first formula is inserted 2. \addtocounter{equation}{-1} \begin{subequations}↓ is inserted after the first formula 3. second formula is inserted 4. third formula is inserted 5. \end{subequations} is inserted after the third formula 19.4User-defined Numbering ↓ The formula number should start with every section with “1”. Counter denotes what kind of numbering is affected, sectioning denotes what number is before the dot. Thus in our case the following LaTeX-preamble or TeX-Code line is used: To go back to the standard numbering or to prevent this kind of numbering when it is defined by the document class, the following command is inserted as TeX-Code or to the LaTeX-preamble: 19.5Numbering with Roman Numbers and Letters↓↓ Formulas can also be numbered with Roman numbers and Latin letters. To number for example with small Roman numbers, the command To switch back to the default numbering, insert the command: 20Chemical Symbols and Equations ↓↓↓ An example text from chemistry: The SO^2−[4]-ion reacts with two Na^+-ions to sodium sulfate (Na[2]SO[4]). The chemical equation for this is: (26) 2 Na^++SO^2−[4]⟶Na[2]SO[4] 21.1Amscd Diagrams↓ To create the relations there are the following commands: • @<<< creates a left arrow, @>>> a right arrow and @= a long equal sign • @AAA creates an up arrow, @VVV an down arrow and @| a vertical equal sign • @. is a placeholder for non-existent relations All arrows can be labeled as follows: • Is text inserted between the first and second < or >, resp., it is placed above the arrow. When it is inserted between the second and third one, it appears under the arrow. • When text for vertical arrows is inserted between the first and second A or V, resp., it is placed left beside the arrow. When it is inserted between the second and third one, it appears right beside the arrow. If the text contains an A or V, these letters must be set into a TeX-brace. As example a diagram with all possible relations: 21.2Xymatrix Diagrams↓ 21.3Feynman Diagrams↓ 22User-defined Commands↓ 22.1The Command \newcommand ↓↓ Here are some examples: • To define the command \gr for \Longrightarrow, the LaTeX-preamble line is: • To define the command \us for \underline, the argument (that should be underlined) must be taken into account. For this the preamble line is: The character # acts as argument placeholder, the 1 behind it denotes that it is the placeholder for the first argument. • For \framebox one can e.g. define the command \fb: The two Dollar signs creates the extra formula needed for \framebox, see 9.1↑. • To create a new command for \fcolorbox where the color for the box needn’t to be specified, the argument for the color is defined optional: When the color is not specified when using \cb, the predefined color white will be used. A test of the new defined commands: When the cursor is in a macro definition box, you will see the macro toolbar in LyX: The macro toolbar contains from left to right the following buttons: 23Computer Algebra Systems • (37)/(3)*2−∑^3[i=1]i^i=−(22)/(3) • (37.0)/(3)=12.33333333333333 • ∫^2[1]sin(x)dx=cos1−cos2 • det⎡⎢⎢⎢⎣ 1 6 7 2 5 8 3 4 17 ⎤⎥⎥⎥⎦=−56 • lim[x→0]⎛⎝(sin(x))/(x)⎞⎠=1 23.2Keyboard shortcut 24.1Negative Numbers↓ Negative numbers often look ugly in formulas because the minus sign before the number is set with the same length as the minus operator sign. When writing the negative number in normal text, the minus sign appears correctly. Thus, the problem disappears when converting the minus sign to mathematical text. An example to visualize the problem: 24.2Comma as decimal Separator↓ In LaTeX a comma inside a formula is used, according to the English convention, as number group separator. So there will be space added behind all commas in formulas. 24.3Physical Vectors ↓ The following commands are defined: 24.4Self-defined Fractions ↓ The style is a number in the range of 0-3. When no fraction bar thickness is given, the predefined value of 0.4pt will be used. 24.5Canceled Formulas↓ There are four ways to cancel formulas: 24.6Formulas in Section Headings ↓ When formulas are used in section headings, the following has to be taken into account: Part is the part of the heading that shouldn’t appear in the PDF-bookmark. This can be characters, formulas, footnotes, but also cross-references. The alternative is used instead of the part for the 24.6.1Heading without formula in table of contents √(−1)=i 24.6.2Heading with formula in table of contents\texorpdfstring √(−1)=i 24.7Formulas in multi-column Text↓ Before the multi-column text the command is written in TeX-mode. The column number is a number in the range of 2-10. Before the formula the multi-column text is ended by inserting the command in TeX-mode. As example a multi-column text with a displayed formula: 24.8Formulas with Description of Variables↓ 24.9Upright small Greek Letters ↓ The upright letters are more bold and wider than the italic ones. They should therefore not be used for units like “µm”. 24.10Text Characters in Formulas ↓ ATypographic Advice↓ • Physical units are always set upright, no matter if they appear in italic text: 30km/h Between the value and the unit is the smallest space, see 8.1↑. This convention is automatically fulfilled when the command \unittwo is used. When it is entered to a formula, two boxes appear. In the first one the value is inserted, in the second one the unit, and one gets as above: 30km⁄h. Note that \unittwo is not a real LaTeX command but the command \unit[value]{unit}, therefore you cannot use it in TeX code. • Percent and perthousand signs are set like physical units: 1,2‰ alcohol in blood • The degree sign follows directly on the value: 15°, but not when it is used in units: 15°C • In numbers with more than four digits the smallest space is inserted before every third digit to group them: 18473588 • For dimensions like 120×90×40cm the multiplication sign “×” is used. It is available either via the command \times or via the menu Insert ▷ Special Character ▷ Symbols. • Functions with names consisting of several letters are set upright to avoid confusions, see 15.1↑. • Indices consisting of several letters, are set upright: E[kin] Components of matrices are set italic: Ĥ[kl] • The differentiation/integration operator ’d’, the Euler’s number ’e’ and the imaginary unit ’i’ should be set upright, to avoid mixing them up with other variables. • The character that denotes a Fourier transformation is inserted either by the command \mathscr F or via the menu Insert ▷ Special Character ▷ Symbols ▷ Letterlike Symbols: ℱ Some characters and symbols can be created with several commands. Here is a list of the synonym commands: [1] Mittelbach, F. ; Goossens, M.: The LaTeX Companion. Addison Wesley, 2004 [2] Description of LaTeX’s math abilities [3] Description of AmS-LaTeX [4] List of all symbols available with LaTeX-packages [5] Documentation of the LaTeX-package hyperref↓ [6] Documentation of the LaTeX-package mhchem↓ [7] Description of the command \mathclap, described in 10.2↑↓ [8] Duden Band 1. 22. Auflage, Duden 2001 [9] Check list for reviewing manuscripts Accents: ↑ Arrows: ↑ Binomial coefficients: ↑ Boxes: ↑ Brackets: ↑ Case differentiations: ↑ Chemical equations: ↑ Comma: ↑ Comparisons|seeRelations: ↑ Delimiters: ↑ Ellipses: ↑ Exponents: ↑ Fonts: ↑ Formula numbering: ↑ Fractions: ↑ Frames | seeBoxes: ↑ Greek letters: ↑ Indices: ↑ Integrals: ↑ Isotopes|seeChemical characters: ↑ L@LaTeX-preamble: ↑ Limits: ↑ Lines: ↑ Macros: ↑ Mathematical text: ↑ Matrices: ↑ Minuscule numbers: ↑ Negations: ↑ Operators: ↑ Placeholders: ↑ Relations: ↑ Roots: ↑ Special characters: ↑ Subscripts|seeIndices: ↑ Sums: ↑ Superscripts|seeExponents: ↑ Symbols: ↑ Synonyms: ↑ T@TeX-braces: ↑ T@TeX-mode: ↑ Tilde: ↑ Tips: ↑ Typographic advice: ↑ Umlauts: ↑ User-defined commands: ↑ Vectors: ↑ \@: ↑, ↑ \@°: ↑
{"url":"http://elyxer.nongnu.org/lyx/Math.html","timestamp":"2014-04-16T08:38:57Z","content_type":null,"content_length":"446719","record_id":"<urn:uuid:99249c19-ca81-47a0-95e8-5628319b4e66>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
130 degrees celsius to fahrenheit You asked: 130 degrees celsius to fahrenheit Say hello to Evi Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we will be adding all of Evi's power to this site. Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire.
{"url":"http://www.evi.com/q/130_degrees_celsius_to_fahrenheit","timestamp":"2014-04-17T11:04:30Z","content_type":null,"content_length":"54508","record_id":"<urn:uuid:a7ef2020-9ce9-4454-879d-c19182d2020c>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
A new Coq tactic for inversion With Pierre Boutillier, we have been working on a new Coq tactic lately, called invert. From my point of view, it started as a quest to build a replacement to the inversion tactic. inversion is a pain to use, as it generates sub-goals with many (dependent) equalities that must be substituted, which force the use of subst, which in turns also has its quirks, making the mantra inversion H; clear H; subst quite fragile. Furthermore, inversion has efficiency problems, being quite slow and generating big proof terms. From Pierre's point of view, this work was a good way to implement a better destruct tactic, based on what he did during an internship (report in French (PDF)). In a nutshell, the idea behind a destruction and an inversion is quite similar: it boils down to a case analysis over a given hypothesis. And there are quite a few tactics that follow this scheme: elim, case, destruct, inversion, dependent destruction, injection and discriminate (it is true that the last two tactics are quite specialized, but fit the bill nevertheless). Why on Earth would we need to add a new element to this list? Well, it turns out that building on ideas by Jean-Francois Monin to make so called "small inversions", one can unify the inner-working of most of the aforementioned list: it suffices to build the right return clause for the case analysis. Let's take an example. Variable A : Type. Inductive vector: nat -> Type := | nil : vector 0 | cons: forall n (h:A) (v: vector n), vector (S n). Inductive P : forall n, vector n -> Prop := | Pnil : P 0 nil | Pcons: forall n h v, P n v -> P (S n) (cons n h v). Lemma test n h v (H: P (S n) (cons n h v)) : P n v. At this point, doing inversion H generates 4 new hypotheses: H2 : P n v0 H0 : n0 = n H1 : h0 = h H3 : existT (fun n : nat => vector n) n v0 = existT (fun n : nat => vector n) n v P n v Yuck: first, H0 and H1 are just cruft. Then, the goal isn't very palatable, because the equality H3 between v and v0 is defined in terms of a dependent equality: in order to go further, one need to assume axioms about dependent equality^1, equivalent to Streicher's axiom K. (Just to keep tabs, note that running the Show Proof command in Coq outputs a partial proof term that is already 73 lines long at this point.) If we use dependent destruction H instead of inversion, we get the expected hypothesis H: P n v (which is far better from an usability point of view). Yet, there is no magic here: dependent destruction simply used a dependent equality axiom internally to get rid of the dependent equality, and generates a 64 lines long proof term that is not very pretty. At this point, one may wonder: what should the proof term look like? and, is it necessary to use the K axiom here? A black belt Coq user versed in dependent types could write the following one. let diag := fun n0 : nat => match n0 as n' return (forall v0 : vector n', P n' v0 -> Prop) with | 0 => fun (v0 : vector 0) (_ : P 0 v0) => True | S m => fun v0 : vector (S m) => match v0 as v1 in (vector m0) return (P m0 v1 -> Prop) with | nil => fun _ : P 0 nil => True | cons p x v1 => fun _ : P (S p) (cons p x v1) => P p v1 end in match H as H' in (P x y) return (diag x y H') with | Pnil => I | Pcons n0 h0 v0 Pv => Pv Wow, 15 lines long. Let's demystify it a bit. First, recall that the return type of a match is dictated by its return clause (the as ... in ... return ... part). This is basically a function that binds the arguments of the inductive (S n as x, cons n h v as y in our case), H' of type P x y, and which body is the return part. Usually, the return part is a constant (e.g., nat for the match in the List.length), but it is not mandatory. Here, the diag term packs some computations, such that diag (S n) (cons n h v) H reduces to P n v, the conclusion of the goal. (In general, this kind of return clauses make it possible to eliminate impossible branches in a match, as done here by marking them with the trivial return type True; we direct the interested readers to the online CPDT book by Adam Chlipala for more informations on this, especially this chapter.) Then, what is diag? Well, it is a function that follows the structure of the arguments of P to single out impossible cases, and to refine the context in the other ones using dependent pattern matching, in order to reduce to the right type (the type of the initial conclusion of the goal). The idea behind such "small-scale inversions" was described by Monin in 2010 and is out of the scope of this blog post. What is new here is that we have mechanized the construction of the diag functions as a Coq tactic, making this whole approach practical. All in all, using our new tactic, we can just use the following proof script: invert H; tauto. At this point, Show Proof. outputs the following complete proof term (where invert_subgoal is the type of the subgoal solved by tauto): let diag := fun n0 : nat => match n0 as n1 return (forall v0 : vector n1, P n1 v0 -> Prop) with | 0 => fun (v0 : vector 0) (_ : P 0 v0) => False -> True | S x => fun v0 : vector (S x) => v0 as v1 in (vector n1) (match n1 as n2 return (vector n2 -> Type) with | 0 => fun _ : vector 0 => False -> True | S x0 => fun v2 : vector (S x0) => P (S x0) v2 -> Prop end v1) | nil => fun H0 : False => False_rect True H0 | cons n1 h0 v1 => fun _ : P (S n1) (cons n1 h0 v1) => P n1 v1 end in invert_subgoal : forall (n0 : nat) (h0 : A) (v0 : vector n0) (H0 : P n0 v0), diag (S n0) (cons n0 h0 v0) (Pcons n0 h0 v0 H0) => match H as p in (P n0 v0) return (diag n0 v0 p) with | Pnil => fun H0 : False => False_rect True H0 | Pcons x x0 x1 x2 => invert_subgoal x x0 x1 x2 end) (fun (n0 : nat) (_ : A) (v0 : vector n0) (H0 : P n0 v0) => H0)) Some of the differences with the proof term above come from the fact that we generate it interactively, rather than writing it once at all. A legitimate question: how do we compare to destruct and inversion and dependent destruction? First, we aim at producing a "better" destruct: that is, we might resolve the situation in which destruct fails, in order to avoid producing ill-typed terms. Then, the situation with respect to inversion and dependent destruction is less clear. Right now, we would rather not assume the K axiom (the right thing to do if homotopy is the future). In that case, we would fail for inversion problems that require K, and inversion and dependent destruction would be more powerful than our tactic. For problems that do no require to use K, invert would be equivalent to dependent destruction with better looking proof terms^2. We are still working on our prototype, but we are quite confident that we got the main thing right: mechanizing the construction of the return clause. We will come back to this blog when we need 1. See the following FAQ question (Can I prove that the second components of equal dependent pairs are equal?). You may also be interested in this other question (What is Streicher's axiom K?). The EqdepFacts standard library module, that has the equivalence proofs between all those subtle notions. Finally, if you want to finish this proof using these axioms, you can use Require Import Eqdep. then the inj_pair2 lemma. Once you're done, Print Assumptions test. will let you check that you relied on an additional axiom -- or Print Assumptions inj_pair2.↩ 2. Moreover, since our proof terms are less cluttered, it seems less likely than recursive definitions made in "proof mode" with invert will fail to pass the termination check once Coq's guard condition deals properly with such commutative cuts, another part of Pierre's thesis work.↩
{"url":"http://gallium.inria.fr/blog/a-new-Coq-tactic-for-inversion/","timestamp":"2014-04-16T13:28:58Z","content_type":null,"content_length":"18446","record_id":"<urn:uuid:753f60b0-f501-48e5-861a-620353ee3689>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00169-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Re: psmatch2 question Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: st: Re: psmatch2 question From Austin Nichols <austinnichols@gmail.com> To statalist@hsphsun2.harvard.edu Subject Re: st: Re: psmatch2 question Date Wed, 25 Aug 2010 10:07:22 -0400 Steve, Anna, et al.-- The bootstrap is not a priori a good idea: http://www.nber.org/papers/t0325 But if you use nonparametric propensity scores, or equivalently a logit with only mutually exclusive exhaustive dummies as explanatory variables, and reweight instead of matching 1:1 or somesuch, you will be better off in many ways; see e.g. Hirano, K., G. Imbens, and G. Ridder. (2003). “Efficient Estimation of Average Treatment Effects Using the Estimated Propensity Score,” Econometrica, 71(4): 1161-1189. The t-stat produced by -psmatch2- is not particularly reliable, compared to one produced by a double-robust regression, say, where you regress the outcome on treatment and other explanatory variables using weights based on propensity scores. But the t-stat on the ATT is intended to guide you to reject or fail to reject the hypothesis that the effect of treatment on those who received treatment is zero. If you decide to bootstrap, save each estimated ATT and its SE and see how the matching estimator's SE compares to the observed standard deviation of estimates; then do the same with the nonparametric propensity score reweighting estimator and you will probably decide not to match but to reweight. A minor point: estimated propensity scores are never "more accurate" than the unknown true scores, but even if you knew the true propensity scores, you could get more efficient estimates in many cases by throwing that information away and estimating propensity scores. This is why computing a SE as if the propensity scores are fixed and known is reasonable. Instead of the presentation, you may want the papers: On Wed, Aug 25, 2010 at 5:51 AM, Steve Samuels <sjsamuels@gmail.com> wrote: > -- > Anna, > First, I must apologize. I showed a method for estimating the ATT > (weighting by propensity scores) which is different from the one that > -psmatch2- uses (matching on propensity scores). So my program does > not apply to your analysis. The -help- for -psmatch2- illustrates a > bootstrap approach to estimating the standard error, although it > states that it is "unclear whether the bootstrap is valid in this > context." > Also, there is literature (see page 26 of Austin Nichols's presentation > http://repec.org/dcon09/dc09_nichols.pdf) which suggests that > estimated propensity scores might be more accurate than the unknown > true scores; if so, then standard errors which consider the estimated > scores as known might be better than the bootstrapped standard errors! > So there is apparently no right answer. I think that a conservative > approach would be to use the -bootstrap- technique shown in the > -psmatch2- help, followed by "estat bootstrap, all" to get confidence > intervals for ATT. > ATT is one of several methods for describing the causal effect of a > treatment: To quote Austin's presentation (p 15): "For evaluating the > effect of a treatment/intervention/program, we may want to estimate the > ATE for participants (the average treatment effect on the treated, or > ATT) or for potential participants who are currently not treated (the > average treatment effect on controls, or ATC), or the ATE across the > whole population (or even for just the sample under study)." > Best wishes > Steve > On Tue, Aug 24, 2010 at 7:23 PM, anna bargagliotti <abargag@yahoo.com> wrote: >> Thank you for your insights about bootstrapping. I wiill try adjusting your >> code to my situation to reproduce the T-stat and compute the p-value. >> I am, however, still confused about two very simple things: >> 1. What is the T-stat for the ATT actually telling us? Is this the T-stat for >> the comparison of treatment vs control matched groups? >> 2. How do we determine if there is a treatment effect? * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2010-08/msg01268.html","timestamp":"2014-04-17T10:05:09Z","content_type":null,"content_length":"13855","record_id":"<urn:uuid:5846e99a-e249-4e97-9c18-a15f67814174>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
scipy.sparse.linalg.bicg(A, b, x0=None, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None)[source]¶ Use BIConjugate Gradient iteration to solve A x = b A : {sparse matrix, dense matrix, LinearOperator} The real or complex N-by-N matrix of the linear system It is required that the linear operator can produce Ax and A^T x. : b : {array, matrix} Right hand side of the linear system. Has shape (N,) or (N,1). x : {array, matrix} The converged solution. Returns : info : integer Provides convergence information: 0 : successful exit >0 : convergence to tolerance not achieved, number of iterations <0 : illegal input or breakdown Other Parameters: x0 : {array, matrix} Starting guess for the solution. tol : float Tolerance to achieve. The algorithm terminates when either the relative or the absolute residual is below tol. maxiter : integer Maximum number of iterations. Iteration will stop after maxiter steps even if the specified tolerance has not been achieved. M : {sparse matrix, dense matrix, LinearOperator} Preconditioner for A. The preconditioner should approximate the inverse of A. Effective preconditioning dramatically improves the rate of convergence, which implies that fewer iterations are needed to reach a given error tolerance. callback : function User-supplied function to call after each iteration. It is called as callback(xk), where xk is the current solution vector. xtype : {‘f’,’d’,’F’,’D’} This parameter is deprecated – avoid using it. The type of the result. If None, then it will be determined from A.dtype.char and b. If A does not have a typecode method then it will compute A.matvec(x0) to get a typecode. To save the extra computation when A does not have a typecode attribute use xtype=0 for the same type as b or use xtype=’f’,’d’,’F’,or ‘D’. This parameter has been superceeded by
{"url":"http://docs.scipy.org/doc/scipy/reference/generated/generated/scipy.sparse.linalg.bicg.html","timestamp":"2014-04-20T18:30:54Z","content_type":null,"content_length":"9853","record_id":"<urn:uuid:19413519-1912-468a-9fb0-44d36561d725>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00132-ip-10-147-4-33.ec2.internal.warc.gz"}
Prove that P(n) o P(m) = T (ab) February 18th 2010, 03:17 PM #1 Feb 2010 Prove that P(n) o P(m) = T (ab) I need help to prove the following proposition. Proposition: Let m and n be two parallel straight lines. Let AB be a line segment that first intersects m and then n, that is perpendicular to both m and n, and whose length is twice the distance between m and n. Then Pn o Pm = T (AB) Note: P is reflection and T is translation Thank you! Since the problem can be translated and rotated at will, choose m and n so that the question is most easily answered. I choose m as the x=0 line, and n as the x=a line. Let your test point A be (x,0). Then P(m) on A gives (-x,0), and P(n) on (-x,0) gives (x+2a,0). Clearly that's T(ab). I went to someone for this problem, and that person showed me a different way of proof. Your is good, but it is still unclear why it is T(ab) for the T(ab) is not defined. Thank you. February 19th 2010, 08:13 AM #2 Senior Member Nov 2009 February 21st 2010, 04:16 PM #3 Feb 2010
{"url":"http://mathhelpforum.com/geometry/129519-prove-p-n-o-p-m-t-ab.html","timestamp":"2014-04-20T02:15:58Z","content_type":null,"content_length":"33566","record_id":"<urn:uuid:b5f0d23b-479f-4f2b-8aa0-1aa05f740ba1>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
asymptotic or approximate formula for a combination expression up vote 0 down vote favorite Let 0<=p<=1, I want the value of q1 and q2 where $q1=\sum_{k=0}^n [C(n,k)p^k(1-p)^{n-k}*\sum_{i=0}^{k-1} C(m,i)p^i(1-p)^{m-i}]$ $q2=\sum_{k=0}^n [C(n,k)p^k(1-p)^{n-k}*\sum_{i=k}^{m} C(m,i)p^i(1-p)^{m-i}]$ where C(m,i) is the number of i-combination of a set of m elements. Obviously q1+q2=1. For special m,n, for example, n=8, m=4, p=0.5, q1 is about 0.75, q2 is about 0.25. I guess that q1 will be much greater than q2 when n>>m. So, I want an approximate estimation on q1 and q2. If we let p=0.5, then we have $q1=p^{n+m}\sum_{k=0}^n [C(n,k)*\sum_{i=0}^{k-1} C(m,i)]$ This transformation may make the problem easier. add comment 1 Answer active oldest votes If independent variables $X,Y$ are distributed Binom$(n,p)$, Binom$(m,p)$, respectively, then $q_1$ is the probability that $X>Y$. If $mp,np$ are large and the line $X=Y$ is not too far up vote 1 from the point $(np,mp)$, then the normal approximation of $X$ and $Y$ will give a reasonable answer since $X-Y$ has a 1-dimensional normal distribution. Namely, $X-Y\sim{}$N$(\mu,\ down vote sigma^2)$ where $\mu=(n-m)p$ and $\sigma^2=(n+m)p(1-p)$. If the normal approximation is not good, the same mean and variance are true so you can still get a fair idea. add comment Not the answer you're looking for? Browse other questions tagged co.combinatorics or ask your own question.
{"url":"http://mathoverflow.net/questions/120220/asymptotic-or-approximate-formula-for-a-combination-expression","timestamp":"2014-04-16T22:31:17Z","content_type":null,"content_length":"49956","record_id":"<urn:uuid:1cc5a12a-f386-48a6-81de-7912b0a46ebe>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00371-ip-10-147-4-33.ec2.internal.warc.gz"}
Darby, PA Trigonometry Tutor Find a Darby, PA Trigonometry Tutor ...I taught Algebra 2 with a national tutoring chain for five years. I have taught Algebra 2 as a private tutor since 2001. I completed math classes at the university level through advanced 12 Subjects: including trigonometry, calculus, writing, geometry What makes me most happy about tutoring is the emotional reward: seeing someone I help feel and perform better with their subject is what keeps me going and wanting to help more people. My name is Michael, and I am an experienced young professional conducting independent research at UPenn. I studi... 9 Subjects: including trigonometry, calculus, physics, geometry I have been a part time college instructor for over 10 years at a local university. While I have mostly taught all levels of calculus and statistics, I can also teach college algebra and pre-calculus as well as contemporary math. My background is in engineering and business, so I use an applied math approach to teaching. 13 Subjects: including trigonometry, calculus, algebra 1, geometry ...My experience includes classroom teaching, after-school homework help, and one to one tutoring. I frequently work with students far below grade level and close education gaps. I have also worked with accelerated groups in Camden with students that have gone on to receive scholarships and success at highly accredited local high schools. 8 Subjects: including trigonometry, geometry, algebra 1, SAT math ...At graduate school, I had the opportunity of being a teaching assistant. This provided me the opportunity to meet with students on a one-one basis regarding their home works. In the industry, I have passionately taught operators not only how to get things done but did explain from a scientific point of view why things are done the way they are done. 17 Subjects: including trigonometry, chemistry, physics, calculus Related Darby, PA Tutors Darby, PA Accounting Tutors Darby, PA ACT Tutors Darby, PA Algebra Tutors Darby, PA Algebra 2 Tutors Darby, PA Calculus Tutors Darby, PA Geometry Tutors Darby, PA Math Tutors Darby, PA Prealgebra Tutors Darby, PA Precalculus Tutors Darby, PA SAT Tutors Darby, PA SAT Math Tutors Darby, PA Science Tutors Darby, PA Statistics Tutors Darby, PA Trigonometry Tutors
{"url":"http://www.purplemath.com/Darby_PA_Trigonometry_tutors.php","timestamp":"2014-04-20T02:05:37Z","content_type":null,"content_length":"24239","record_id":"<urn:uuid:c32fe2a4-d2b4-4d6c-b671-f549c865c55f>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00644-ip-10-147-4-33.ec2.internal.warc.gz"}
Abstract Algebra help September 19th 2010, 10:13 PM #1 Junior Member Dec 2008 Abstract Algebra help I need help with the following problem: Let G be a finite group, H and N are subgroups of G such that N is normal, and |N| and [G:N] are relatively prime. Show that if H is contained in N, then |H| divides |N|. I would really help anyone's help on this. Thank you in advance This seems like an easy application of Lagrange's Theorem to me; the order of a subgroup of will always divide the order the the group. Here, if you forget the blurb about N normal and stuff and just look at the last sentence, then N is the group and H is the subgroup, so... Hello Swlabr, Thank you for your response. In regards to your comment, I was thinking about using the Lagrange's theorem too. So if we assume that H is contained in N, we would have to show that H is a subgroup of N. If this is true, then we can conclude that |H| divides |N|, right? But, I already tried to prove that H is a subgroup of N, and I am kinda stuck now. Hello Swlabr, Thank you for your response. In regards to your comment, I was thinking about using the Lagrange's theorem too. So if we assume that H is contained in N, we would have to show that H is a subgroup of N. If this is true, then we can conclude that |H| divides |N|, right? But, I already tried to prove that H is a subgroup of N, and I am kinda stuck now. H is a subgroup of N because it is also a subgroup of G and because it is a subset of N. You need to use both of these facts. What have you done to prove it so far? September 19th 2010, 11:16 PM #2 September 19th 2010, 11:35 PM #3 Junior Member Dec 2008 September 19th 2010, 11:46 PM #4 September 20th 2010, 12:11 AM #5 Junior Member Dec 2008
{"url":"http://mathhelpforum.com/advanced-algebra/156767-abstract-algebra-help.html","timestamp":"2014-04-20T19:27:09Z","content_type":null,"content_length":"43168","record_id":"<urn:uuid:89ce032d-55e7-490c-a142-b29256b231cd>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
Maple is a computer algebra system offering many possibilities for math problems. This book, based loosely on a previous work at http://alamanya.free.fr/, aims to give all tools needed to be autonomous with this software. The owner has given permission to translate it for this book. Table of Contents 1. Polynomials and rational fractions in Maple 2. Using Maple in Geometry 3. Module and Package Last modified on 17 November 2013, at 15:00
{"url":"https://en.m.wikibooks.org/wiki/Maple","timestamp":"2014-04-16T16:16:13Z","content_type":null,"content_length":"15653","record_id":"<urn:uuid:4480227e-c9e0-4b62-afcf-0e15c3e624e0>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
This Article Bibliographic References Add to: ASCII Text x K.L. Clark, "PARLOG and its Applications," IEEE Transactions on Software Engineering, vol. 14, no. 12, pp. 1792-1804, December, 1988. BibTex x @article{ 10.1109/32.9064, author = {K.L. Clark}, title = {PARLOG and its Applications}, journal ={IEEE Transactions on Software Engineering}, volume = {14}, number = {12}, issn = {0098-5589}, year = {1988}, pages = {1792-1804}, doi = {http://doi.ieeecomputersociety.org/10.1109/32.9064}, publisher = {IEEE Computer Society}, address = {Los Alamitos, CA, USA}, RefWorks Procite/RefMan/Endnote x TY - JOUR JO - IEEE Transactions on Software Engineering TI - PARLOG and its Applications IS - 12 SN - 0098-5589 EPD - 1792-1804 A1 - K.L. Clark, PY - 1988 KW - PARLOG; parallel logic programming language; PARLOG; Prolog; systems programming; object-oriented programming; high level languages; logic programming; object-oriented programming; parallel programming; PROLOG VL - 14 JA - IEEE Transactions on Software Engineering ER - The key concepts of the parallel logic programming language PARLOG are introduced by comparing the language with Prolog. Some familiarity with Prolog and with the concepts of logic programming is assumed. Two major application areas of PARLOG, systems programming and object-oriented programming, are illustrated. Other applications are briefly surveyed. [1] K. Broda and S. Gregory, "PARLOG for discrete event simulation," inProc. 2nd Int. Conf. Logic Programming, S.-A. Tarnlund, Ed., Uppsala, July 1984, pp. 301-312. [2] M. Buckle, "Process modelling in PARLOG," M.Sc. thesis, Dep. Artificial Intell., Univ. Edinburgh, 1987. [3] K. L. Clark, "PARLOG: The language and its applications," inProc. PARLE Conf., Eindoven, The Netherlands. Berlin: Springer-Verlag, 1987, pp. 212-242. [4] K. L. Clark, and I. T. Foster, "A declarative environment for concurrent logic programming," inProc. Tapsoft 87 Conf., Pisa, Italy. Berlin: Springer-Verlag, 1987. [5] K. L. Clark and S. Gregory, "A relational language for parallel programming," inProc. ACM Conf. Functional Languages and Computer Architecture, Portsmouth, NH, Arvind and J. Dennis, Eds., 1981, pp. 171-178. [6] K. L. Clark and S. Gregory, "PARLOG: Parallel programming in logic,"ACM Trans. Program. Lang., vol. 8, pp. 1-49, 1986. [7] K. L. Clark and S. Gregory, "Notes on systems programming in PARLOG," inProc. Int. Conf. Fifth Generation Computer Systems, Tokyo, H. Aiso, Ed. Amsterdam, The Netherlands: Elsevier/ North-Hollland, 1984, pp. 299-306. [8] K. L. Clark and S. Gregory, "Notes on the implementation of PARLOG,"J. Logic Programming, vol. 2, no. 1, pp. 17-42, 1985. [9] K. L. Clark and S. Gregory, "PARLOG and Prolog united," inProc. 4th Int. Logic Programming Conf., Melbourne. Cambridge, MA: MIT Press, 1987, pp. 927-961. [10] J. Crammond, "Implementation of committed choice languages on shared memory multiprocessors," PARLOG Group, Dep. Comput., Imperial College, London, Res. Rep. (in preparation), 1988. [11] M. G. Cutcher and M. J. Rigg, "PARAMEDICL: A computer aided medical diagnosis system for parallel architectures,"ICL Tech. J., vol. 5, no. 3, pp. 376-384, 1987. [12] A. Davison, "POLKA, a PARLOG object oriented language," PARLOG Res. Group, Dep. Comput., Imperial College, London, Res. Rep., 1987. [13] A. Davison, "Representing blackboards in PARLOG," PARLOG Res. Group, Dep. Comput., Imperial College, London, Res. Rep., 1987. [14] J. Darlington and M. Reeve, "Alice--A Multiprocessor Reduction Machine for the Parallel Evaluation of Applicative Languages,"Proc. 1981 ACM Conf. Functional Programming Languages and Computer Architecture, 1981, pp. 65-75. [15] E. W. Dijkstra,A Discipline of Programming. Englewood Cliffs, NJ: Prentice-Hall, 1976. [16] N. A. Elshiewy, "Logic programming of real time control of telecommunication systems," Comput. Sci. Lab., Ericsson Telecom, Sweden, Res. Rep., 1987. [17] I. T. Foster, "The PARLOG programming system: Reference manual," PARLOG Res. Group, Dep. Comput., Imperial College, London, 1986. [18] I. T. Foster, "Logic operating systems: Design issues," inProc. 4th Int. Logic Programming Conf., Melbourne. Cambridge, MA: MIT Press, 1987, pp. 910-926. [19] I. T. Foster, "Parallel implementation of PARLOG," PARLOG Res. Group, Dep. Comput., Imperial College, London, Res. Rep., 1987. [20] I. T. Foster, "Efficient metacontrol in parallel logic programming languages, "PARLOG Res. Group, Dep. Comput., Imperial College, London, Res. Rep., 1987. [21] I. T. Foster, S. Gregory, G. A. Ringwood, and K. Satoh, "A sequential implementation of PARLOG," inProc. 3rd Int. Logic Programming Conf., London. Berlin: Springer-Verlag, 1986, pp. 149- 156. [22] J. Garcia, M. Jourdan, and A. Rizk, "An implementation of PARLOG using high level tools," inProc. ESPRIT 87: Achievements and Impact. Amsterdam, The Netherlands: North-Holland, 1987, pp. [23] D. Gilbert, "Implementing LOTOS in PARLOG," M.Sc. thesis, Dep. Comput., Imperial College, London, 1987. [24] D. Gilbert, "Executable LOTOS: Using PARLOG to implement an FDT," inProc. Protocol Specification, Testing and Verification VII, Zurich, 1987. [25] S. Gregory,Parallel Logic Programming in Parlog--the Language and its Implementation(Int. Series in Logic Program.). London: Addison-Wesley, 1987. [26] S. Gregory, I. Foster, A. D. Burt, and G. A. Ringwood, "An abstract machine for the implementation of PARLOG on uniprocessors," PARLOG Res. Group, Dep. Comput., Imperial College, London, Res. Rep., 1987. [27] S. Gregory, R. Neely, and G. A. Ringwood, "PARLOG for specification, verification and simulation," inProc. 7th Int. Symp. Computer Hardware Description Languages and their Applications, Tokyo, C. J. Koomen and T. Moto-oka, Eds. Amsterdam, The Netherlands: Elsevier/North-Holland, 1985, pp. 139-148. [28] C. A. R. Hoare, "Communicating sequential processes,"Commun. ACM, vol. 21, pp. 666-677, 1978. [29] M. H. Huntbach, "Algorithmic PARLOG debugging," inProc. 1987 Symp. Logic Programming, San Francisco, CA, IEEE Comput. Soc. Press, 1987, pp. 288-297. [30] M. H. Huntbach, "The partial evaluation of PARLOG programs," PARLOG Res. Group, Dep. Comput., Imperial College, London, Res. Rep., 1987. [31] K. Kahn, E. D. Tribble, M. S. Miller, D. G. Bobrow, "Objects in concurrent logic languages," inProc. OOPSLA '86, Portland, OR, ACM, 1986. [32] M. Lam and G. Gregory, "PARLOG and ALICE: A marriage of convenience," inProc. 4th Int. Logic Programming Conf., Melbourne. Cambridge, MA: MIT Press, 1987, pp. 294-310. [33] M. Lam and G. Gregory, "Implementation of PARLOG on DACTL," PARLOG Res. Group, Dep. Comput., Imperial College, London, Draft Paper, 1987. [34] F. G. McCabe, K. L. Clark, and B. D. Steel,micro-PARLOG 3.1 Programmers Reference Manual, Logic Programming Associates Ltd., London, 1984. [35] Y. Matsumoto, "A parallel parsing system for natural language analysis," inProc. 3rd Int. Logic Programming Conf., London. Berlin: Springer-Verlag, 1986, pp. 396-409. [36] C. Mierkowsky, S. Taylor, E. Shapiro, J. Levy, and M. Safra, "The design and Implementation of Flat Concurrent Prolog," Dep. Appl. Math., Weizmann Inst., Tech. Rep. CS85-09, 1985. [37] G. A. Ringwood, "The dining logicians," M.Sc. thesis, Dep. Comput., Imperial College, London, 1984. [38] G. A. Ringwood, "PARLOG86 and the dining logicians," PARLOG Res. Group, Dep. Comput., Imperial College, London, 1987; to appear inCommun. ACM, 1988. [39] E. Y. Shapiro, "A subset of Concurrent Prolog and its interpreter," ICOT, Tech. Rep. TR-003, Tokyo, 1983. [40] E. Y. Shapiro and C. Mierowsky, "Fair, biased, and self balancing merge operators," inProc. IEEE Symp. Logic Programming, Atlantic City, NJ, IEEE Comput. Soc. Press, 1984, pp. 83-90. [41] E. Y. Shapiro and A. Takeuchi, "Object oriented programming in Concurrent Prolog,"New Generation Comput., vol. 1, pp. 25-48, 1983. [42] A. Takeuchi and K. Furakawa, "Bounded buffer communication in Concurrent Prolog,"New Generation Comput., vol. 3, no. 2, pp. 145-155, 1985. [43] A. Takeuchi and K. Furukawa,Parallel Logic Programming Languages(LNCS 225). New York: Springer-Verlag, 1986, pp. 242-254. [44] R. Trehan and P. Wilk, "A parallel shift-reduce parser for committed choice non-deterministic logic languages," Artificial Intell. Applicat. Inst., Edinburgh Univ., Teeh. Rep. A1A1-TR-26, 1987. [45] K. Ueda, "Guarded horn clauses," ICOT, Tokyo, Tech. Rep. TR- 103, 1985. [46] R. Yang and H. Aiso, in "P-Prolog: A parallel logic language based on exclusive relation," inProc. 3rd Int. Logic Programming Conf., London. Berlin: Springer-Verlag, 1986, pp. 259-269. Index Terms: PARLOG; parallel logic programming language; PARLOG; Prolog; systems programming; object-oriented programming; high level languages; logic programming; object-oriented programming; parallel programming; PROLOG K.L. Clark, "PARLOG and its Applications," IEEE Transactions on Software Engineering, vol. 14, no. 12, pp. 1792-1804, Dec. 1988, doi:10.1109/32.9064 Usage of this product signifies your acceptance of the Terms of Use
{"url":"http://www.computer.org/csdl/trans/ts/1988/12/e1792-abs.html","timestamp":"2014-04-17T07:05:39Z","content_type":null,"content_length":"56801","record_id":"<urn:uuid:86cfc3fb-f1ed-411e-b8f3-538477f058e0>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
A. The model Hamiltonian B. The distribution of final momenta C. Classical perturbation theory 1. The change in the final momenta 2. The angular distribution 3. Some interesting limits 4. The joint angle and energy distribution and the angle dependent final average energy III. CLASSICAL WIGNER THEORY FOR THE SCATTERING OF Ar FROM Ag(111)
{"url":"http://scitation.aip.org/content/aip/journal/jcp/130/19/10.1063/1.3131182","timestamp":"2014-04-19T00:35:58Z","content_type":null,"content_length":"72709","record_id":"<urn:uuid:d3f1e6e0-be7f-4e3a-b895-56f8ac44b86b>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
Programming Interview Questions 1: Array Pair Sum Once again it’s the college recruiting season of the year and tech companies started the interview process for full time and internship positions. I had many interviews last year these days for a summer internship. Eventually I was an intern at Microsoft Bing, and will be joining there full time next summer. I won’t have any interviews this year, but since most of my friends are actively preparing for them nowadays, I thought it would be useful to share some good quality interview questions and provide my solutions. I come across this particular question pretty often recently: Given an integer array, output all pairs that sum up to a specific value k. Let’s say the array is of size N. The naive way to solve the problem, for each element checking whether k-element is present in the array, which is O(N^2). This is of course far from optimal and you might not want to mention it during an interview as well. A more efficient solution would be to sort the array and having two pointers to scan the array from the beginning and the end at the same time. If the sum of the values in left and right pointers equals to k, we output the pair. If the sum is less than k then we advance the left pointer, else if the sum is greater than k we decrement the right pointer, until both pointers meet at some part of the array. The complexity of this solution is O(NlogN) due to sorting. Here is the Python code: def pairSum1(arr, k): if len(arr)<2: left, right = (0, len(arr)-1) while left<right: if currentSum==k: print arr[left], arr[right] left+=1 #or right-=1 elif currentSum<k: Most of the array based interview questions can be solved in O(NlogN) once we sort the input array. However, interviewers would generally be expecting linear time solutions. So let’s find a more optimal O(N) solution. But first we should clarify a detail with the interviewer, what if there is more than one copy of the same pair, do we output it twice? For example the array is [1, 1, 2, 3, 4] and the desired sum is 4. Should we output the pair (1, 3) twice or just once? Also do we output the reverse of a pair, meaning both (3, 1) and (1, 3)? Let’s keep the output as short as possible and print each pair only once. So, we will output only one copy of (1, 3). Also note that we shouldn’t output (2, 2) because it’s not a pair of two distinct elements. The O(N) algorithm uses the set data structure. We perform a linear pass from the beginning and for each element we check whether k-element is in the set of seen numbers. If it is, then we found a pair of sum k and add it to the output. If not, this element doesn’t belong to a pair yet, and we add it to the set of seen elements. The algorithm is really simple once we figure out using a set. The complexity is O(N) because we do a single linear scan of the array, and for each element we just check whether the corresponding number to form a pair is in the set or add the current element to the set. Insert and find operations of a set are both average O(1), so the algorithm is O(N) in total. Here is the code in full detail: def pairSum2(arr, k): if len(arr)<2: for num in arr: if target not in seen: output.add( (min(num, target), max(num, target)) ) print '\n'.join( map(str, list(output)) ) VN:F [1.9.22_1171] Programming Interview Questions 1: Array Pair Sum, I came up to htable solution in the first place. The complexity is O(n+m) where n is input size, m amount of htable keys. Which is linear. Keep your work going. Nice start. A great solution to a problem that’s seen on many interview routes! Well done! Again, I appreciate the way you present the least optimal solutions first and slowly lead towards the one that’s optimal. This is a great interview strategy too. Very nice! @George I think complexity is O(n * n/m) where m is # keys and n is # elements. Assume m=1, now you have n^2 right? @Arden, I think Python “set” is not always O(1) on find and insert as documented here. http://wiki.python.org/moin/TimeComplexity If you can somehow instantiate the set with specifying number of keys then you can choose m=n and achieve worst case O(1). However if you just instantiate it as set() and do not avoid duplicate pairs and assume (x,y)!=(y,x) then underlying Python “s”et implementation needs to do bucketing which can lead to O(n) for find in worst case, as documented. I think currently not possible to specify # of keys in hash table that is staying under set implementation. Python can be problematic, however Java also maintains hash table with “load factor”. We should definitely implement our own hash table… :) What do you think? You’re totally right Ahmet. As the load factor of the set increases, the worst case complexity of a single operation becomes linear. But I would assume that after a certain load factor python would resize the set by doubling its size. So, the average time for an operation would still be amortized O(1), but still for some elements it can be O(N) in the worst case as you said. However, during an interview I suppose it’s safe to assume O(1) for operations on sets and hashtables. Implementing our own hashtable is a great idea. In my web search course last semester, I remember searching for a hashtable implementation where you can give the size as hint during construction, so that it would perform less resize operations. Because I already knew that I’ll insert millions of elements while implementing a search engine. I think the default size of a dictionary in python is 8, and the load factor threshold for resizing is 2/3. The size is multiplied by 4 during resizing unless the hashtable already big (50,000), otherwise it doubles the size. If I am using C++ STL Set , 1. Insert takes logarithmic time , but it is amortized constant. 2. Find takes logarithmic time. So I think it would be O(nlogn) rather than O(n) The worst case complexity of find in set can be as bad as O(N) as Ahmet mentioned above. But I think it’s safe to use the average case constant complexity for sets and hashtables during an interview, by mentioning the worst case behavior. To be technically precise I should write Omega(N) since big-O is the worst case bound, but these articles are intended to focus more on common interview practices. But you’re right, the very worst case complexity using C++ STL set is O(NlogN). But I don’t think interviewers will object to O(N) as long as you mention the worst case, that’s my experience at least. Hi Arden, In the first solution, how do we decide to use left++ or right–. It might fail if there are duplicates in the array. if we use left++ and input array is a = {-48,96,96,96} and k is 48, then it fails But if we use right– for same input array it works fine. That’s a good point Say, thanks for mentioning. Right– will also fail for some cases, for example input array is a = {-48,-48,-48,96}. So we need additional mechanisms to deal with duplicates, but I omitted them to keep the code simple in the first solution. I address that issue in the second algorithm. Hi Arden, I would like to suggest a slight modification for the first approach. Instead of commenting the right-=1 in the first if condition (currentSum==k), keep it as a part of the code over there and change the last else condition to elseif currentSum>k: right-=1 I believe this should address the issues. Please let me know if I am wrong. Thank you. Hi Arden, using the second algorithm, if you have an array of {1, 2, 5,3} and k=4 at 1: target = 4-1=3 (add 1 to set) set={1} at 2: target = 4-2=2 (add 2 to set) set={1,2} at 5: target = 4-5=-1 (add 5 to set) set={1,2,5} at 3: target = 4-3=1 (add 3 to set 1) set={1,2,5,3} how the logic will output (1,3) ?? At the last iteration since target=1 is in the set we execute the else part. So, we don’t add 3 to the set, we append (1, 3) to the output. I have one doubt about this statement “target=k-num;” in this case you can only create pair whose come is equal to value k. but the question is finding the pair whose sum is upto K. example suppose k=4 and our pair is (1,2) so 1+2=3 but this pair will not come in this algo. Yes this statement needs modification. Target = K -num Was asked this question in Microsoft interview. Things to keep in mind :- - duplicates possible - overflow, underflow cases Approach was to sort the array and then iterate over the array and for each item, do a binary search for k-arr[idx] on the remaining right part of the array. how do you handle overflow, underflow ? Yes, binary search approach is a constant space but O(n logn) time. HashTable solution is O(n) space and time. void Main() int[] inputArray = { 10,12,13,15,4,3,1,8,8}; Hashtable visitedHash = new Hashtable(); int desiredSum = 16; foreach(var number in inputArray) if(visitedHash.Contains(desiredSum – number)) String.Format(“Pair Found {0}-{1}”,number,desiredSum-number).Dump(); // Add it to the hash, Key will be the number, value will be { desiredSum – number } visitedHash.Add(number,(desiredSum – number)); Good one! By the way, what is .Dump(); ? Also, once we have found that visitedHash contains the ‘desiredSum – number’, I think, we need to remove that record from the visitedHash, don’t we? Otherwise following will happen (with the existing code): If the original array is {10, 6, 6, 6} and desiredSum = 16, then it will print following pairs: {10,6}, {10, 6}, {10, 6} instead of printing {10, 6} just once (I believe, in this case printing {10, 6} only once will be desired). //finding no. of pairs of numbers accounting to a sum = k in O(n) and triads in O(n^2) using namespace std; void countpair(int *array,int i,int len,int &count2 , int k) int j=len; if(array[i]+array[j] == k) if(i==0 && j==len ) cout<<array[i]<<" "<<array[j]<<endl; if(array[i]==array[i-1] && array[j]==array[j+1]) cout<<array[i]<<" "<<array[j]< k) if(array[i]+array[j] < k) void counttriad(int *array,int i,int len,int &count3,int k) int count2=0; countpair(array,i+1,len,count2,k-array[i]);//fixing that the value at that index //and searching for the concerned pair in th e right part of the array int main() int count2=0,count3=0; int array[]={1,1,2,2,3,3,6,7,7,8,8,9,9,10,10,11,11}; int len=sizeof(array)/sizeof(int); int k=12; return 0; operation find() with set is not O(1), please considering the underlining design of set — RB tree, how can find() takes O(1) ? For Java, the HashSet is not underlined by a RB Tree. That’ll be a TreeSet. Your analysis of pairSum2 seems to assume that checking if a number is ‘in seen’ can be done in constant time. This entry was posted in Programming Interview. Bookmark the permalink.
{"url":"http://www.ardendertat.com/2011/09/17/programming-interview-questions-1-array-pair-sum/","timestamp":"2014-04-21T09:40:39Z","content_type":null,"content_length":"51450","record_id":"<urn:uuid:09b18eff-b88f-4125-bcc7-6cdd6c4a0d36>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00229-ip-10-147-4-33.ec2.internal.warc.gz"}
Picard group of a K3 surface generated by a curve up vote 1 down vote favorite In Lazarsfeld's article "Brill Noether Petri without degenerations" he mentions the fact that for any integer $g \geq 2$, one may find a K3 surface $X$ and a curve $C$ of genus $g$ on $X$ such that the Picard group of $X$ is generated by $[C]$. How does one prove that ? ag.algebraic-geometry complex-geometry k3-surfaces algebraic-curves @Youloush: What precisely are you asking? Ottem's answer proves that such surfaces exist. However, if you want to write one down in an explicit way, say over a field like $\overline{\mathbb{Q}}$ and for $g$ moderately large, that is quite difficult. For fields like $\overline{\mathbb{Q}}$, I recommend you look at the work of van Luijk. For large $g$, the results of Gritsenko, Hulek and Sankaran show that one cannot "algebraically parameterize" general K3 surfaces of large genus. – Jason Starr Mar 19 '13 at 12:00 add comment 1 Answer active oldest votes This is equivalent to saying that there exists a K3 surface with an ample generator (a polarization) $L$, with $L^2=2g-2$ and $|L|$ has a smooth member. There are various geometric ways to construct such surfaces, e.g., by using double covers or quartic surfaces in $\mathbb P^3$ containing special curves. You will find this in VIII.15 in Beauville's book 'Complex Algebraic up vote 5 Surfaces". Given this, one can show the existence of a K3 where $L$ is a generator of the Picard group, using the fact that a generic K3 surface has Picard number 1. down vote add comment Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry complex-geometry k3-surfaces algebraic-curves or ask your own question.
{"url":"http://mathoverflow.net/questions/124880/picard-group-of-a-k3-surface-generated-by-a-curve","timestamp":"2014-04-20T16:41:18Z","content_type":null,"content_length":"50842","record_id":"<urn:uuid:d5a48f1a-dc5f-4e08-9bcb-1bc3ffd69475>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
Cost analysis for Squid Sushi @ Seraph Hello fellow crafters. Today at work, i was most bored. So i decided to invest a little bit of time in creating a cost analysis for synthing squid sushi. Earth Crystal: 1000 (12) Tarutaru Rice: 3000 (12) Gigant Squid : 3000 (1) (x12 = 36000) Rice Vinegar : 120 (1) (x12 = 1440) Wasabi: 1945 (1) (x12 = 23340) Water: 10 (1) (x12 = 120) Total cost for 12 synths: This brings each synth to Breakdown of possible results: NQ = 6 HQ1 = 2 (+1s) HQ2 = 4 (+1s) HQ3 = 6 (+1s) So to produce 1 stack of NQ needs 2 synths. Assuming the spread of HQs is equal over all 3 results, the average HQ result is 4 sushi. NQ Cost = 10816.67 HQ Cost = 16225 I am skill 100. This is a Lv70 synth. This brings me to 30 lvls over cap. However, irritatingly to break into Tier II requires 31 lvls above cap, and so my current HQ rate is 10% while in Tier I. (I'll recalculate for Tier II after) Assuming a fixed break rate of 5%: Using my Tier I of: NQ = 90% HQ = 10% The adjusted break costs are NQ = 4.50% HQ = 0.5% due to their frequency. The cost of breaks results in: NQ = -243.3 HQ = -27 Bringing the cost after breaks to NQ = 10573.29 HQ = 16179.95833 The Auction House The price on the AH is 9000 for NQ, and 21000 for HQ. This brings the margins to NQ = -1573.29 HQ = 4802.04 So using the NQ/HQ split, brings the respective profits to NQ = -1415.9625 HQ = 480.2042 So the toal profits after 12 synths is -935.75g. Before AH tax. The cost for a stack is 0.5%, with an initial tax of 200g. Lets now assume that i'm skill 100+1, pushing me into the Tier II. Now i'm adjusting several values, but i'm obviously i'm keeping the costs the same. The NQ/HQ% has now shifted to NQ = 70% HQ = 30% My break % has now shifted to: NQ = 3.50% HQ = 1.50% This brings the cost of breaks to: NQ = 189.29 HQ = 81 Bringing total after breaks to: NQ = 10627.38 HQ = 16143.875 This shifts my margins: NQ = -1627.38 HQ = 4856.13 The split profits after is: NQ = -1139.1625 HQ = 1456.8375 Bringing the total profits over 12 synths to 317.675g Slightly Oh wait. -0.5% AH tax and the fixed -200g. This shows what a thin line the crafters ride on who buy everything from the AH (& NPC). The whole reason for me making this detailed analysis of squid sushi, was because i'm now fishing, and i've started to catch Squid, and i was investigating what sort of profits i'd make if i were to synth it. And the results aren't promising. I'm almost better off just selling the squid to hopeful crafters looking to squeeze such a tiny profit margin. I made something similar a while back on Bismarck server to see which (If any) sushi would be the most profitable. For a while I was fishing during work, and making an incredibly sexy sum of gil, something like 150-200K a day. Then the undercutting happened, and it got to the point that crafting the squid netted a loss. I really couldn't understand how something so lucrative could be destroyed so fast. That started me looking at other crafts, and pretty much what I noticed is that all the crafts end up with pretty much the same result. I keep hearing that crafting is broken, in my opinion this is exactly why. I guess it's a result of there only being so few crafts to chose from, and everyone and their sister having a craft to 100 now-a-days. Personally I just BCNM/KSNM/ISNM/HNM for gil, I've never been smart enough to play the crafting market for nickles and dimes. 2 things. 1) AH Space. 2) Inventory You can maximize AH space efficiency if you craft your materials and save up inventory space if they are not stackable. This ofcourse varies from an item to another and changes with time due to undercutting. Yeah, it's pretty much this way all over. In all the crafts. Tavnazian Tacos at 9K a stack on my server? That synth is a total PITA with unstackable ingredients and unstackable intermediate product. I used to make a lot of those because I enjoyed relaxing and fishing in SSG to catch the fishy ingredients, but even that is not worth doing now. I honestly can't figure out why we do this to each other. I believe that most of the RMT have been driven off, so we're doing it to ourselves. I've mostly stopped crafting (luckily I have a bankroll from the good days that should last me about 20 years). Sometimes, I go back and craft again for the love of crafting, but usually when I find a good synth with modest profit, then within maybe 5 days, an asshat show up who completely and unnecessarily undercuts that market down to 0% profitability. From which it does not recover for a very long time, if ever. I feel sad for today's new crafters... ElvaanMonkee wrote: Assuming the spread of HQs is equal over all 3 results, It is not. If you want to adjust your findings, HQs stratification is 12:3:1 for HQ1:HQ2:HQ3. What's with the squid sushi stuff lately? It is/was a good moneymaker. So are tacos. As for why it's so wonky in price. I think it's actually due to the +atk/acc pizza & the 2 handed update. There's just not as much reason to always use sushi as there used to be. Side note...the cost of farmable materials for synthing low level gear seems to have gone up on my server over the last month. That's ended up bumping up the price of the low level gear. Not quite sure what to make of that though. Acturus wrote: ElvaanMonkee wrote: Assuming the spread of HQs is equal over all 3 results, It is not. If you want to adjust your findings, HQs stratification is 12:3:1 for HQ1:HQ2:HQ3. Now thats a helpful little tid-bit. Question is, does this breakdown cover Tier III, Tier II and Tier I? That looks about in line with what i get when making Fish Oil Broths at Tier III. Do you know of an ajustment for Tier II/I/O? Edit for embedded quote fail >.< Edited, Feb 28th 2009 3:46am by ElvaanMonkee ElvaanMonkee wrote: Acturus wrote: ElvaanMonkee wrote: Assuming the spread of HQs is equal over all 3 results, It is not. If you want to adjust your findings, HQs stratification is 12:3:1 for HQ1:HQ2:HQ3. Now thats a helpful little tid-bit. Question is, does this breakdown cover Tier III, Tier II and Tier I? That looks about in line with what i get when making Fish Oil Broths at Tier III. Do you know of an ajustment for Tier II/I/O? HQ stratification is the same across all HQ tiers.
{"url":"http://wow.allakhazam.com/forum.html?forum=24&mid=1235674030271407374","timestamp":"2014-04-19T17:42:16Z","content_type":null,"content_length":"68526","record_id":"<urn:uuid:2e29fee3-0ab5-4bfd-9e06-f03009be0157>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
Behaviour of THETA in relation to TIME REMAINING TO EXPIRATION – With Past DATA and CHARTS The following is the behavior of Theta in relation to Time to Expiration: For ATM option, Theta increases as an option gets closer to the expiration date. On the other hand, for ITM & OTM options, Theta decreases as an option is approaching expiration. The above effects are particularly observed in the last few weeks (about 30 days) before expiration. Using the same past actual data as shown in the previous post on the behavior of Delta, namely: Options Chain for Call options of RIMM as at 3 Sep 2010, when the closing price is $44.78 and Implied Volatility (IV) is 54.05, for expiration month of Sep 2010 (10 days to expiration), October 2010 (38 days to expiration) and Dec 2010 (101 days to expiration). The following is the summary of Theta values for different Time to Expiration: For easier analysis, we can plot the Theta values of different Degree of Moneyness across various Time to Expiration, as follows: As can be seen from the table and the chart above: For (near) ATM options (i.e. strike price $45.00, because the stock price is $44.78), Theta (in absolute value) is the higher for the options with expiration month “Sep-10” (nearer to expiration), as compared to “Oct-10” and “Dec-10”. In other words, Theta (in absolute value) increases as time to expiration gets nearer. Whereas for both deep ITM (strike price $35.00 & $37.50) and deep OTM options (strike price $52.50 & $55.00), Theta (in absolute value) is the lower for the options with expiration month “Sep-10” (nearer to expiration), as compared to “Oct-10” and “Dec-10”. In other words, Theta (in absolute value) decreases as time to expiration gets nearer. For Theta, we’re always comparing Theta here (whether it’s high or low) in terms of the absolute value, because the negative sign only represents the decaying effect. Likewise, we’ll also compare Theta of different time to expiration at various strike prices, as shown in the chart below. As can be seen in the chart: For all the three options with different time to expiration, Theta always behaves the same way, i.e. given the same time to expiration, Theta of ATM options is higher, and it gets lower as it moves towards deep ITM and deep OTM options. That means: Given the same time to expiration, ATM options will always decay faster as time goes by, as compared to deeper ITM and OTM options would. However, the blue line (i.e. options with expiration month “Sep-10”) is much steeper than the red line (i.e. options with expiration month “Oct-10”) and green line (i.e. options with expiration month This means: Theta values for options with nearer time to expiration differ more significantly along various strike prices, as compared to those with further time to expiration. The further the time to expiration is, the smaller the difference in the Theta values across different strike prices will be. Given the same time to expiration, ATM options will always decay faster as time goes by (i.e. have higher Theta) than the deeper ITM and OTM options would. Given an ATM option, the option with nearer time to expiration will have the highest Theta (will decay the fastest), as compared to that with longer time to expiration. Given a deeper ITM / OTM option, the option with nearer time to expiration will have the lowest Theta (will decay the slowest), as compared to that with longer time to expiration. To view the list of all the series on the this topic, please refer to: “Behaviour of OPTION GREEKS in relation to TIME REMAINING TO EXPIRATION and IMPLIED VOLATILITY (IV) – With Past DATA and CHARTS.” Other Learning Resources: * FREE Trading Educational Videos with Special Feature * FREE Trading Educational Videos from Trading Experts Related Topics: * Understanding Implied Volatility (IV) * Understanding Option Greek * Understanding Option’s Time Value * Learning Candlestick Charts * Options Trading Basic – Part 1 * Options Trading Basic – Part 2 1 comment: stock said... This blog Is very informative. You explain each and every point very well.
{"url":"http://optionstradingbeginner.blogspot.com/2011/11/behaviour-of-theta-in-relation-to-time.html","timestamp":"2014-04-19T17:21:59Z","content_type":null,"content_length":"98008","record_id":"<urn:uuid:3d0ccd98-2b0f-4654-9404-ba4193d9ffe8>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
Math in the Media Counting Ben Franklin's magic squares In 1736-1737 Benjamin Franklin constructed a number of 8 by 8 "magic squares" and one of size 16 by 16. The method he used seems still not to be known, but the total number of his type of 8 by 8 square has recently been calculated. Dawn Walton told the story in the story in "Magic math mystery solved at last" (Toronto Globe and Mail, March 3, 2006). There are, it turns out, several kinds of magic squares, even if we stick to the natural n by n squares in which every number from 1 to n^2 appears once : • semi-magic: rows and colums all have the same sum s[n] = (n/2)(n^2 + 1). • (fully) magic: the rows, column and main diagonals all add up to s[n]. • pandiagonal square: main diagonals and all other "broken" diagonals sum up to s[n]. • Franklin's squares; rows, columns and all "bent" diagonals sum up to s[n], and all 2 by 2 sub-blocks sum to 4 s[n]/n. • complete squares: pandiagonal, magic, 2 by 2 sub-blocks sum to 4 s[n]/n and opposite elements (distant n/2) along a diagonal sum to 2s[n]/n. Diagonals: main (a), broken (b) and bent (c). In the modern taxonomy, Franklin's squares are not magic. Nevertheless "how many permutations are possible under his distinct mathematical design" has "stumped mathematicians ever since," according to Walton. M. M. Ahmed in 2004 had given an upper bound of 228,881,701,845,346, but now "a trio of Canadian number crunchers have used modern-day technology" to pin the answer down quite a bit lower: "Using Franklin's rules, there are 1,105,920 variations of his magic square." The result is due to Peter Loly (Physics and Astronomy, Manitoba) and two graduate students, Daniel Schindel and Matthew Rempel. In their paper (available online) they describe their method: using an efficient algorithm to generate all the Franklin squares, and keeping an accurate tally. Remarkably, as they report, one third of the Franklin squares they found were also pandiagonal, and 368,640 is exactly what had been calculated in 1998 as the number of 8 by 8 complete squares. They show that this is not a coincidence by giving an algorithm for transforming one type to the other. Franklin was very proud of his 16 by 16 square; he described it in his autobiography as "the most magically magical of any magic square ever made by any magician." (See Franklin's Magic Squares at Mathpages.com, which also gives the square itself). Unfortunately, it does not look like the Loly team's method, which involved searching (albeit efficiently) through 64! permutations, will be able to crack the 16 by 16 problem; 256! is a much bigger number. Chaos in the deep "Reduced mixing generates oscillations and chaos in the oceanic deep chlorophyll maximum" appeared in the January 19 2006 Nature. The authors, an Amsterdam-Honolulu collaboration led by Jef Huisman and Nga N. Pham Thi, investigated the stability of deep chlorophyll maxima (DCMs) -layers of high concentration of phytoplankton who flourish where there are sufficient nutrients welling up from the bottom and sufficient light filtering down from the top. The point of the article: "we extend recent phytoplankton models to show that the phytoplankton populations of DCMs can show sustained fluctuations." The authors set up a mathematical model, a reaction-advection-diffusion equation for the phytoplankton population density P coupled to a partial differential equation for the nutrient availability N. A common parameter in both equations is the "turbulent diffusivity" κ , the coefficient of the second-derivative terms. If κ is sufficiently large, "nutrients in the top layer are gradually depleted by the phytoplankton. The nutricline moves downwards, tracked by the phytoplankton population, until the population settles at a stable equilibrium at which the downward flux of consumed nutrients equals the upward flux of new nutrients." To investigate the behavior for lower κ, the authors ran "numerous simulations using a wide range of turbulent diffusivities." "The model simulations predict that the DCM becomes unstable when turbulent diffusivity is in the lower end of the realistic range. By a cascade of period doublings, reduced turbulent mixing can even generate chaos in the DCM." The numerical solution of the coupled P-N differential equations shows bifurcation and eventually chaos as the mixing parameter is decreased. This is a close-up picture of the evolution of the local maxima and minima of the phytoplankton population as a function of turbulent diffusivity, near the low end of the realistic range 0.1 < κ < 1. Image from Nature 439 324, used with permission. Their explanation for the periodic behavior: if κ is low, the phytoplankton sink faster than the nutrients are welling up; without sufficient light their numbers decline. This lets more nutrients through up to more luminous layers, and "fuels the next peak in the DCM." An ominous note: "Climate models predict that global warming will reduce vertical mixing in the oceans." The differential geometry of quantum computation "Quantum computers have the potential to solve efficiently some problems that are considered intractable on conventional classical computers." This is the start of "Quantum Computation as Geometry," a report in the February 4 2006 issue of Science. The authors are four members of the School of Physical Sciences, University of Queensland; a team led by Michael Nielsen. They continue: "Despite this great promise, as yet there is no general method for constructing good quantum algorithms, and very little is known about the potential power (or limitations) of quantum computers." What they propose in this report is an alternative approach to understanding the difficulty of an n-qubit computation, i.e. the complexity of the quantum algorithm that would be needed to carry it out. Such a computation corresponds to a unitary operator U (a 2^n x 2^n matrix with complex entries). The authors' definition of difficulty is the length d(I,U) of the shortest path from the indentity matrix to U, where length is measured in a metric which penalizes all computational moves which require gates with more than two inputs. They show that this distance is is "essentially equivalent to the number of gates required to synthesize U." "Our result allows the tools of Riemannian geometry to be applied to understand quantum computation. In particular we can use a powerful tool --the calculus of variation-- to find the geodesics of the space." They remark that thinking of an algorithm as a geodesic "is in contrast with the usual case in circuit design, either classical or quantum, where being given part of an optimal circuit does not obviously assist in the design of the rest of the circuit." Finally they show how "to construct explicitly a quantum circuit containing a number of [one and two-cubit] gates that is polynomial in d(I,U) and which approximates U closely." Tony Phillips Stony Brook University tony at math.sunysb.edu
{"url":"http://cust-serv@ams.org/news/math-in-the-media/mmarc-04-2006-media","timestamp":"2014-04-21T15:01:24Z","content_type":null,"content_length":"18941","record_id":"<urn:uuid:f8bc1c76-63c2-4362-b694-d8e62b966378>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00174-ip-10-147-4-33.ec2.internal.warc.gz"}
How far apart do they place the pylons for parallel parking test? Alternate wordings how far apart are the poles for parallel parking test? what distance apart should pylons be placed to practice parallel parking? how far apart do the poles have to be for parallel parking? how far apart do the pilons have to be to practice parrell parking? How to parallel parking with pylons? how far are the pylons for drivers test? how far apart are the poles for parallel parking? how far away should pylons be for parallel parking? how far apart should parellel parking lines be? how far apart are the poles for parallel parking in texas? how far apart does the cones have to be apart to practice parallel parking for? how far apart are pylons for parallel parking tests? how to practice parallel parking? how far apart should parallel parking practice be? how do you parrallel park with Follow this question By Email: Once you sign in you will be able to subscribe for any updates here By RSS: Markdown Basics • *italic* or __italic__ • **bold** or __bold__ • link:[text](http://url.com/ "title") • image? • numbered list: 1. Foo 2. Bar • to add a line break simply add two spaces to where you would like the new line to be. • basic HTML tags are also supported Asked: Mar 23 '12 at 09:57 Seen: 594 times Last updated: Mar 23 '12 at 09:57
{"url":"http://qnapal.com/questions/15412/how-far-apart-do-they-place-the-pylons-for-parallel-parking-test","timestamp":"2014-04-16T04:43:48Z","content_type":null,"content_length":"44355","record_id":"<urn:uuid:b66a423f-6adb-41c8-8482-b41671291d08>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
Area of a circle joining 2 connected lines May 10th 2012, 08:15 PM #1 May 2012 cape girardeau, missouri Area of a circle joining 2 connected lines 2 lines of length L1 and L2 (not equal) are at right angles. At the ends furthest from the right angle, a curve, a section of a circle of radius r, joins the ends to form a closed figure. What is the method in terms of L1, L2 and r to determine the area of the figure? Re: Area of a circle joining 2 connected lines There isn't any method. You only can determine the length of x (if the lengthes of $L_1$ and $L_2$ are known!) and that the $r\geq x$. Re: Area of a circle joining 2 connected lines Two obvious points: the area of the triangle is $(1/2)L_1L_2$ and the length of the hypotenuse is $h= \sqrt{L_1^2+ L_2^2}$. So the rest of the problem is to find the area of a portion of a circle of radius r, outside a chord of length h. One way to analyze that is to use the "cosine law". The hypotenuse of the right triangle and two radii of the circle form an isosceles triangle with two sides of length r and one side of length x. According to the cosine law, $x^2= 2r^2(1- cos(\theta))$ where $\theta$ is the central angle. That tells us that the central angle is given by $\theta= arccos(1- \frac{x^2}{r^2})$. The area of a "sector" of a circle of radius r and central angle $\theta$ is $\frac{1}{2}\theta r^2$ so the area of the sector here is $\frac{1}{2}r^2 arccos(1- \frac{x^2}{r^2})$. That isosceles triangle, having two sides of length r and one of length x, can be divided into two right triangles each having hypotenuse r and one leg of length x/2 so the other leg has length $\sqrt{r^2- x^2/4}$. The area of that isosceles triangle, then, is $x\sqrt{r^2- x^2/4}$ and so the area of the circular sector beyond the chord has length $\frac{1}{2}r^2arccos(1- \frac{x^2}{r^2})- x\sqrt{r^2- x^2/4}$. May 10th 2012, 09:20 PM #2 Senior Member Nov 2011 Crna Gora May 11th 2012, 07:24 AM #3 May 11th 2012, 08:44 AM #4 MHF Contributor Apr 2005
{"url":"http://mathhelpforum.com/geometry/198664-area-circle-joining-2-connected-lines.html","timestamp":"2014-04-17T13:47:00Z","content_type":null,"content_length":"44491","record_id":"<urn:uuid:f56408c5-de50-4fa7-a68e-1551d26db8fc>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
Fun Giveaway: $3/1 Gain Fabric Softener Coupons (10 Winners!) - Common Sense With Money Fun Giveaway: $3/1 Gain Fabric Softener Coupons (10 Winners!) March 2, 2011 There is a HOT coupon available for $3/1 Gain Fabric Softener product. You can use this high value coupon at stores like Walmart to get the 40ct box that sells for $1.98 for FREE after coupon. I even hear the coupon doesn’t even beep. Thanks you fellow reader Mickey who graciously donated 50 of these coupons to giveaway on this blog, 10 readers will have a chance to get some free Gain Fabric Softener! Each winner will get five coupons mailed to them. Here is how to enter this giveaway: 1. Leave a comment sharing how many loads of laundry you usually do in a week. I think we are up to six loads a week for our family of five, crazy! For additional entries: If you are reading this entry on your RSS reader or email newsletter make sure to CLICK HERE to enter this giveaway. Emailing me is not a valid way to enter this giveaway. This giveaway is open to residents of the US 18 years and older. This giveaway ends 3/4 at 3PM EST This giveaway comes to you courtesy of fellow reader Mickey. 1. Beverly S. Clark says: I do about 4 loads per week. Now, if they would only magically take themselves out of the dryer and put themselves away, all would be great in the world. 2. Amanda S says: We do about 5-6 loads a week for a family of 4. This is due to the boy (i.e. dirt monger) and the girl (i.e. fashion diva). 3. Michelle says: I do at least 6-7 loads of laundry a week 4. Gwen says: I do about 12 loads of laundry for my house of 4. Both my boys are athletes so They wear morning clothes, practice clothes and then evening clothes. They never can wear morning and the same clothes in the evening. 5. laurie p. says: I usually do about 6 loads for our family of 3. My husband works outside wearing a uniform with multiple layers so that’s where most of it comes from! □ val-WI says: same here lol 6. Debbie says: I do four to five loads a week. 7. Ronda says: I’ve never really counted, but I’d say I do anywhere from 6-10 loads a week, at least!!! 8. Denise R. says: I signed up for the newsletter 9. Theresa M says: Now that the two older kids have left the nest we do about 4 to 5 loads a week for our family of 4. What a relief that is! LOL! 10. Amanda S says: Like you on Facebook also 11. Sonya Thompson says: I do 5 loads per week and we have a family of 3 however, my husband works with equipment all day so most of his clothes are oily and greasy and I don’t much like to wash mine with his:0) 12. Debra says: We’re up to 6 lods a weel our selves 13. eb says: We’re up to 7 loads of laundry each week!! 14. Denise R. says: I do 2 or 3 loads/day, we have 6 ppl in our household 15. val-WI says: I do laundry nonstop at the moment. Due to the fact that i am a full time student and mommy while daddy works 70+ hours a week, laundry is def on of the last things on the list. However, i finally had to cave and put homework on the side burner and get all of it done…one week later and i’m still trying to catch up YIKES :/ and i agree! if you find that magical solution, pass it on down LOL that would be lovely 16. Cindy says: I do about 4 – 5 loads per week at my house. 17. Jennifer says: 3 loads for a family of 2 18. Amy McNab says: I do about 12 loads of laundry a week…. 19. Lyndie says: I like follow and subscribe. My washer gets lots of work in with 10 in the house…there are at least 2 loads a day! 20. Stephenie Seeling says: I do about 5-6 loads of laundry every week. 21. Theresa M says: I Like you! I really, REALLY Like you! (on fb) 22. Deanna Adkins says: I do around 6 loads a week for our family of three. I’m a follower on facebook also I receive your emails. Thank You! :0) 23. Judy G. says: I do about 4 loads of laundry a week. 24. Sandie Balch says: 5 – 6 Loads of Laundry each week I Like you on FB I am subscribed via email 25. Cassie says: I usually need to do 5-6 loads a week for my family of 4! 26. Brenda W. says: i do about 8 loads of laundry a week. 27. Brianne says: I do about 4 loads of laundry each week for two people. 28. meg says: if im lucky…about 5 loads a week for a family of 4!! 29. Lyndie says: I follow, like and subscribe. My washer does a lot of work with 10 in the house! at least 2 loads a day! 30. Jamie says: 5-6 loads here. Folding and putting away is a different story 31. Kristen says: We do 4 laundry loads a week, which isn’t good since there is only 2 of us 32. Denise R. says: I’ve ‘liked’ you on FB for awhile now! 33. meg says: i follow u on facebook 34. Vince says: 6-8 loads a week until daughter from college comes home for spring break, then a few more! With a boy in sports and another daughter at home….we need a little Gain everyday! 35. Ronda says: I follow you on facebook! 36. nikki E says: Our family of 5 does about 6 loads a week. It’s even worse since I always leave it until the weekend. I hate putting away so if I do it during the week I have baskets of clean clothes lying around. so bad! 37. Brittany Chao says: I do roughly 5 loads of laundry a week were a family of four!!! I also follow you on facebook, twitter and RSS subscriber. 38. Tara says: I do about 4-6 loads a week! 39. Amy says: 6 kiddos + 1 Firefighter Husband = a minimum of 7 loads a week…at least 1 a day!!! (I’m a FB follower btw) 40. Dara says: We do 2-3 loads a day so about 10 loads a week. We are a family of 4 and 2 dogs and a cat plus kids in sports! 41. meg says: i get your emails 42. Kim Woessner says: We usually do 9 to 10 loads a week! We have 3 messy teenagers, and my husband and I both have uniforms for work that has to be washed weekly. Laundry is a never ending battle in our house! 43. Tiffany says: I do 7 loads a week, some times more. 44. chantal cooper says: I usually do about 7-8 loads each week. CHANTAL COOPER CHANTALGIARDINA (AT) YAHOO (DOT) COM 45. Tanya says: We are up to 7 loads a week. 46. Jill says: 4 Loads of Laundry each week I Like you on FB I am subscribed via email 47. Dara says: I follow on twitter 48. Janna Poitevint says: I do abaut 14 loads of laundry each week, my husband works in fuel tanks for the Air Force and I cant wash his clothes with mine.His clothes has all types of chemicals on them. It totally stinks! 49. nikki E says: i like you on fb 50. Mia says: I do between 6-10 loads a week – depends on towels and sheets 51. Ann says: I do 4 loads. 52. Misty Anderson says: 6 in our family, I do about 3-4 loads per DAY! I cannot keep up! 53. Tiffany says: Of course I like you on facebook! With all your $ saving info, what’s not to like? 54. Kristie says: I do usually a load a day!!! So 7 to 8. 55. Ann says: I follow you on twitter. 56. Joan says: 3-4 loads per week for Hubby and me. Depends on if he’s working on any farm equipment or not that week. 57. I go through about 3 loads a week! 58. Jill says: We are regular Gain users- detergent and dryer sheets. In a family of 4, you wouldn’t think we would have as much laundry per day that we do. I typically wash 4 loads EVERY DAY. My husband and I both work out and go to work and have pj’s. In addition, we have an 8 year old and a 1 year old. One year olds go through bibs, towels, 2-3 pairs of clothes and then some. This giveaway would help me out! 59. Janna Poitevint says: I follow on twitter 60. Nicki T. says: We are a family of 3 and have 2 dogs and 2 cats…i do at least 10 loads a week!…if i’m not washing clothes, i’m washing dog beds, towels from cleaning up muddy dog foot prints, etc.etc.! I ”liked” you on Facebook a long time ago! That is how I seen this giveaway! 61. Ann says: I am a fan on FB. 62. nikki E says: i get your emails. 63. Julie G. says: For our family of four (including one little girl who changes her clothes about every 20 minutes), I usually do between 8-10 loads a week. Crazy when I think about it! (So I try not to!) 64. christina says: If it’s a good week I would say 10 for a family of 6. My 2 older boys wash their own clothes and they have 2 each. I wash my 2 younger together and that is another 2. They change like crazy during the summer (sweaty and dirty) and winter (wet clothes.) 65. Theresa M says: I’m a subscriber! 66. chantal cooper says: facebook fan CHANTAL COOPER CHANTALGIARDINA (AT) YAHOO (DOT) COM 67. Ann says: I am signed up for your emails. 68. Joan says: I’m a Facebook follower! 69. Sherri says: I do about 8+ loads for our family of 5. I still have a baby so that in itself is alot of extra clothes from spitting up and peeing thru… My husband has his work uniforms and clothes he wears around the house. bedding for all the beds, a king, queen, twin and crib. towels oh my how we go thru towels and wash cloths! 70. MARIA C JOSEPH says: 6 loads a week for our family of 4! Of course one load is all my work uniforms! love the gain fabric softener. Thanks for the offer! 71. Tabitha says: We do about 15-20 loads a week. I have a family of 6, and I also run a daycare in my home.lot of laundry here! 72. Emily says: 5-6 a week! 73. Theresa says: We are a family of 5 and I do about 7 loads of laundry a week. I can never keep up and I LOVE GAIN!! 74. Justine says: I have subscribed to you on facebook i follow you on twitter and subscribed for email updates My family does about four loads a week 75. chantal cooper says: twitter follower @chantalgiardina CHANTAL COOPER CHANTALGIARDINA (AT) YAHOO (DOT) COM 76. I follow you on twitter (twitter name @Paintfrog) 77. Theresa M says: Twitter follower! 78. Janna Poitevint says: I receive your emails 79. christina says: I like you on facebook! 80. Beth says: 10 – 12 loads per week for a family of 7!! These are all full loads in an extra large capacity front loader. I am constantly thankful for my washer/dryer “ladies” in the laundry room. I cannot imagine handwashing the quantity of dirty clothes my family produces. 81. I like you on Facebook. 82. chantal cooper says: i subscribe CHANTAL COOPER CHANTALGIARDINA (AT) YAHOO (DOT) COM 83. Lisa says: I follow on Facebook. 84. CIndy says: we do about 6 loads a week right now…. (probably should actually be about 7 but I hate doing laundry and sometimes I tend to sort of welll.. you know… overload!) 85. Amanda Benson says: We do about 6 loads of laundry now each week…I used to think 3 was a lot! 86. Alicia Durbin says: I do a minimum of 10 loads a week. We have a HUGE family!!! Not including the pets that help dirty up the bedding and throw blankets on the couch. 87. Kris Plotner says: We do about 5-6 loads of laundry and love GAIN: detergent and fabric sheets 88. Amanda Benson says: I follow you on facebook 89. Amanda Benson says: I subscribe to your emails 90. Laney says: 5 loads a week for me and my hubs 91. Kris Plotner says: I follow you on Facebook 92. Kris Plotner says: I follow you on Twitter 93. Kris Plotner says: I now receive your emails (sorry if there are duplications on how many loads of laundry I do – my browser was acting up). 94. suzie says: On average, I do 3 loads EVERY day, rarely take the wkend off. We have 8! 95. Cindy says: I have a family of 5 and wash 5 to 6 loads of laundry every week. 96. Jacquelyn says: We do about 3 loads/week for iur family of 4. 97. Tracey M says: I do about 8 – 10 loads of laundry at least. I have a special needs dog who has seizures and when they happen at night he has bladder accidents so …. I’m rotating dog bedding ALL THE TIME. Add to that my fiance who often will go through 3 outfits a day, when he gets home he’ll change from office clothes to what wind up to be seriously filthy mechanics/construction clothes to lounge pants and t’s. Doing laundry is a constant here. 98. Jacquelyn says: I follow you on fb. 99. kim howell says: We are a family of five that do an average of 7 loads a week unless it is time to wash the linens. 100. Cindy M. says: I do about 3 loads of laundry a week. 101. Lily says: 5 loads a week including doggo’s bedding!!! 102. Cindy M. says: i follow you on Twitter. 103. Sarah K says: I do 6-7 loads of laundry a week 104. natalie cunha says: We are a family of five, and I do 2-3 a day, but I always take the weekend off! 105. Heather Moore says: We are a family of 5, and I do 1-2 loads of laundry at least per day…i have all boys so someones always getting dirty, or playing sports..not to mention the bedding and all the towels…We do LOTS of laundry! I also just “liked” your page on Facebook. 106. Jennifer J says: Family of three plus animals, so we do about 5-7 loads per week of laundry. 107. Cindy M. says: I LIKE you on Facebook. 108. Cindy M. says: I subscribe to your emails. 109. Megan Renfro says: I do at least 9 loads of laundry a week, for a family of 4 humans, and 4 dogs, ranging in size from a yorkie, a chihuahua, and 2 mastiffs weighing in at around 120 lbs each. Between our clothes, bedding, linens, throws from living room, and dog bedding….I do laundry almost every day of the week! 110. Nicole Greene says: I do about 4-5 loads a week! 111. Laura says: I do 3-4 loads a week. Plus we wash the material we use to make our green dog beds. I am a FB follower too! 112. Becky L says: We do at least 12 loads a week. 113. Nicole says: I generally do 30 loads per week. My family of 7 go through alot of laundry. On the weekends I try to wash all the bedding, so that means even more loads! 114. Amber says: Oh geez……I try not to count because it’s just depressing. There are seven of us, so I think it’s somewhere between 10-15. 115. Nicole Greene says: I follow you on twitter @pittsy82 116. Nicole Greene says: I “LIKE” you on FB (Nicole Pitts) 117. Amber says: I like you on FB 118. Nicole Greene says: I subscribe via rss google reader 119. Nicky Taylor says: I do laundry every 2 weeks since I dont own a washer or dryer. 120. Patricia says: 3 loads per week: lights, darks and whites. 121. Patricia says: Email subscriber 122. Amber says: I am an email subscriber 123. Faith says: I do 9-10 loads a week for our family of six! 124. Alicia Cross says: Followed on Twitter! 125. Jen says: I do six to seven loads a week ~ darks, lights, whites, jeans, towels and sheets ~ sometimes two loads of sheets or a load of reds. 126. Alicia Cross says: Liked on Facebook! 127. Jen says: I like you on facebook, also!! 128. Alicia Cross says: Subscribed via email. 129. Alicia Cross says: And I do two-three loads per day, so probably 15+ during a typical week. 130. Dara says: follow on twitter 131. Janet says: I do 8-10 loads per week for our family of 7. 132. Dara says: I like you on FB 133. Dara says: I receive your emails 134. louise liu says: we do 2-3 loads each week:) 135. Jackie says: Usually do 1-2 a day, so between 7- and 14, we are a large family! 136. Jackie says: Usually do 1-2 a day, so between 7- and 14 a week, we are a large family! 137. Ana Zendejas says: I do about 4 loads a week. 138. Ann Thomas says: WOW!!! To actually think about how much laundry I do is scary!!! It’s about 6-10 loads per week!! I think I need to get a laundry fairy….LOL!! =) 139. Richelle Sandoval says: I do 14-15 loads a week. We are a family of 6 with a newborn. 140. Asa says: I do about 6 loads per week for a family of 4. 141. Seema says: Its about 3-4 loads per week. We are a small family of just three 142. Kelly H. says: I do between 5-6 loads per week! 143. Deedra says: We are a family of 9. i do 2-3 loads a day. So I’m going to say anywhere from 18 to 21 loads a week. I’ve tried to keep track before, but it always turns out to be the week the kids “clean” their rooms and the extra “dirty” laundry throws my count off. LOL 144. Kelly H. says: I like you on Facebook! 145. Brian Cynic says: About 2 loads 146. Richard says: two here!! 147. Debbie G. says: I do about 4-5 loads per week for a family of 3. It was more when I constantly had to wear a different uniform every day! 148. Brenda says: We do 4 to 5 loads per week. I have a little one that really loves to get messy! 149. VIckie Jones says: I follow on facebook 150. Carole C says: I do three or four. 151. Kathy says: I do 4-5 loads of clothes, 1-2 loads of towels, and then if I have time there are always beds, rugs, and other things to wash. We are a family of 5. 152. VIckie Jones says: We probably do 6-7 a week 153. Kristen says: I Like you on facebook 154. Anna S says: I do 5-6 loads of clothes and then sheets and towel. 155. Maria G says: I follow you on facebook. 156. Maria G says: I do about 2-3 loads per week. 157. JoAnna Moss says: I usually do about 6 loads a week, at least! 158. Maria R. says: Most of the time i do 2-3 loads per week. Great giveaway. I really like this $3 gain coupons. 159. JoAnna Moss says: I follow you on facebook 160. ShelleyB says: I’m embarrassed to say that we (just me and Hubby) go through 7 loads a week! (2x for his work clothes, 1 for mine, 2 on towels, 1 on sheets/pillow cases, and 1 for the comforter) 161. Cassie says: At least 7 loads a week!! It is exhausting!!! 162. Cassie Hull says: I also Like Common Sense with Money on Facebook! 163. Lisa says: We do 7-8 loads a week thats average for us we exercise alot so its mostly workout clothes 164. JoAnna Moss says: I follow on twitter too 165. Laura says: Right now, with three small children and a husband, I do anywhere from 8-10 loads of laundry a week. It’s insane. 166. Jennie says: I do about 6 loads a week. Glad my washer is fixed now. I also follow on FB 167. Jennifer says: 3 or 4 loads a week 168. Sarah says: I do about 5 loads a week for a family of 4 169. donna says: my gosh never seen so many comments on a post before……i would have left more in the past but i was busy doing laundry…….either literally or metally doing laundry…….which equals to about one a wee for one person and one doggie! (yes she has laundry too) she just needs to learn to go do the laundry too…..wished she did know how….but momma happy to do for her!~ Good day one and all! 170. kd says: Wow I guess I am doing way too much laundry. For our family of 2 I do anywhere from 6-8 loads a week. His work clothes have to be done alone and my dress clothes I do in small loads so I guess that’s why I have so much. Then of course all the blankets and sheets due to the dogs sleeping in our bed:) 171. Sarah says: I like you on FB! 172. Sarah says: I subscribe to your email updates. 173. Brenda Adair says: We have a family of five. I would guess around 6-9 loads a week. Maybe even more with rugs and bedding. 174. Nicole Hazel says: Easily, I do at least six loads of laundry per week. In fact, the dryer as going- as I type. 175. Sarah says: It’s just my daughter and me, so I usually do about 2 loads a week. 176. Yvette M. says: I subscribed to your e-mails! We do at least 6-7 loads per week and we LOVE GAIN!! 177. Amie says: 6 loads a week! 178. asmith says: 5-6 for family of 3 179. Kayla says: My family of three typically does about 5-6 loads of laundry a week. 180. Kayla says: I like you on fb 181. jennifer says: We do about 7 loads a week! 182. Kayla says: I’m a twitter follower 183. Melanie says: I do about 8 loads a week. 184. jennifer says: I like you on facebook 185. Mary says: I like you on FB 186. Teri says: I really don’t know how many I do because I have too much to do to count. 187. Keqi says: About 5 loads! 188. Sarah B. says: This is going to sound bad, but I do anywhere from 14-21 loads a week. I get so tired of doing laundry. but when you have a family my size it is just something that cannot be helped. Thanks for the giveaway. 189. Laura says: We do about 3 loads a week ! Love your information and look forward to your posts! 190. Angela Hoag says: I do about 8 loads a week! Between kids, pets and swim lessons, we make a lot of laundry! 191. Sue Bee says: I do about 5 loads of towels a weel alone! We’re probably up to about 10 total loads of clothes a week. 192. Heather says: I do at least 14 loads a week! 193. Jessica says: I would say I do about 4 loads a week. I wish I was rich enough to hire a maid… free Gain will help me save for this dream 194. Pam says: We do about 2-3 loads of laundry per week for a family of 3. 195. Beth says: We do about 4 or 5 loads every week. 196. Renee Baird says: Most of the time I do about 5 loads a week… but on the weekends that my daughter comes home from college I have to add on about 3-4 loads…. its SO amazing that I send her off with laundry detergent and fabric softner but her clothes NEVER seem to get cleaned while she’s at school… she just brings them back home to me!! O the joys of being a MoM!!! 197. Karen says: I do at least 8 loads of laundry a week. Crazy! 198. Kara says: I do between 6 and 8 loads per week. 199. Max says: 4-5 loads a week for a family of 4 200. Janice says: I follow you on twitter and also on facebook My family does about 4 to 5 loads of laundry a week 201. Rachel says: My kids won’t wear pj’s more than once do I’ll say five without counting bedding! 202. Evann says: I just had a new baby and that makes a total household of 6 people. We do a min. of 10 loads of laundry a week, it is a never ending cycle. 203. susan says: We are a four load family!! 204. Stacey says: I do about 3-4 loads per week, but my son does about 10 per week just for himself!! He’ll never stink! 205. Stacey says: I subscribe via email. Thanks! 206. linda says: we do about 8! 207. Lora says: I would say about 8 loads a week. We are a family of 5 as well! 208. Lora says: I am a twitter follower 209. Lora says: I am a facebook “LIKER”! 210. Lora says: I am also an email subscriber! 211. Maren Lee says: I do about 4 loads of laundry a week. 212. Maren Lee says: I follow you on fb 213. Maren Lee says: I subscribe to your emails 214. Leigh-Ann W says: 7-10 loads — way too many if you ask me, lol 215. Leigh-Ann W says: subscribe via email 216. Jennifer says: New baby and 2 toddlers (one potty training) makes for at least 1 load a day. 217. Dina S. says: between 5 and 7 loads 218. gail says: We do about 12 loads a week. 219. Daisy says: My newborn is having some major spit-up issues so I’m doing about 3 loads a day, at least! Clothes, blankets, car seat covers..I hope my washer & dryer outlast the baby days! 220. CJ Lampley says: It’ just me at home so I only do 1-2 loads a week but I’d still love to win! 221. anna says: Its pretty sad i do a load a day for my Family of 3, but to my defense I have 5 cats and a dog, and my child is 3, my twin year and a half old visits everyday also so I go through towels and blankets fast. I buy soap and fabric softener everyweek cause i tend to go overboard cause some things im washing are so nasty I want to make sure its all out. I am obsessed with apple mango tango and have all of gains and febreeze with gain appple mango tango products constantly instock in my home. I just love it when I leave the house and can smell it on my clothes and when i get home I can smell it when I walk in from the Febreeze and I also put the fabric softner sheeta in my air vents so when the heat kicks on so does the sent of apple mango tango.I was more than excited when the dish soap came out and the price is even nicer! Thank you gain for making such a wonderfully delicous sent! 222. Jennifer S. says: I do about 3 loads a week 223. Jennifer S. says: I follow you on Twitter 224. Jennifer S. says: I like you on Facebook 225. Jennifer S. says: I follow via Google Reader 226. April says: I do about 5 loads a week for my family of 3 crazy right but my hubby works out side and I have 2 cats inside. I like you on facebook!!!!!!! 227. Amber says: I do 3 loads a week, but with a new baby coming that will increase soon!! 228. Amber says: I like you on facebook! 229. Sarai Timothy says: Our family of 4 usually does 7 loads of laundry each week. 230. Sarai Timothy says: E-mail subscriber and follower 231. Star Gami says: We usually do 2 loads a week, family of 3 232. MAUREEN says: I do 3-4 loads a week. 233. Tracey says: At least 10 loads a week – more when my big kids are home from college. Joy. Joy. 234. Mike says: Sometimes I wonder if the washer is ever not churning! 2 loads per day minimum! 235. Lisa M says: It’s sad but I think I average 7 loads of laundry a week! 236. michelle says: 6-8 loads a week 237. Sonya says: A minimum of 6 loads of laundry a week, and Gain certainly makes it better! 238. Lisa M says: I am an email subscriber 239. Heather says: I would say that I do about 12-15 loads a week. We have 6 in our family including two sets of twins that are 4 and 2. They change their clothes way too much. 240. Julia Gilsdorf says: We do around 7 loads per week (we have a tiny apartment washer/dryer so I don’t know how much we would do with a regular size washer) I just know I do laundry every day! □ Julia Gilsdorf says: PS I have already liked you on Facebook 241. katie says: I would say 4 to 5 loads. 242. katie says: I like you on facebook 243. katie says: I subscribe to your emails 244. vannessa says: we do about six to ten loads of laundry a week not counting cloth diapers. 245. Laurie says: Our family of 4 does about 7-8 loads per week. 246. vannessa says: i liked you on facebook. 247. becky b says: I do a load of clothes everyday! I swear somebody is putting clean clothes in the hamper! lol 248. Talia says: I do about 4 or 5 loads a week. 249. Sarah B says: We do a load everyday. !! 250. Nicki S. says: I do 2 loads a week, but wait until the last possible second to do it. Terrible, I know. 251. Nicki S. says: I like you on facebook. 252. Nicki S. says: And I get your daily emails. 253. dorothy says: we do 4 for a family of 5 254. joan says: I would be afraid to actually know the truth on this one! Way too many! 255. Tanya N says: I do about 4 loads a week for a family of 4 256. Betty says: We do about 4 loads a week here… love Gain! Sniff… Sniff… Hooray! 257. judy says: I do about 4 or 5 loads a week.I like you on facebook 258. Jennifer M. says: I do an average of 3 loads a week for 2 of us. My daughter does her own laundry =) 259. Jennifer M. says: I follow you on Twitter – jennifer6183 260. Jennifer M. says: I like you on Facebook!! 261. Jennifer M. says: I receive your daily email newsletter 262. Virginia Colborn says: I do about 10 loads of laundry a week for a family of 6. 263. Virginia Colborn says: I enjoy following you on Facebook! 264. Jennifer says: I do 6 loads myself for me, hubby and two little girls. I have teenage boys who do their own laundry do total at least 10 loads a week at my house, more if we wash sheets and blankets!! 265. Frank R says: We do like 2-3 load a week. I am following you on twitter, I like you on facebook, and i signed up for emails! 266. Karina says: I do 3-4 loads of laundry per week for the 2 of us. 267. kim says: wow…a load a day…or more!!! NO KIDDING…i have both machines running right now, and clothes to fold, and more to put away….every day!! This would be great! 268. Melanie L. says: We are up to 4-6 loads of laundry a week for our family of 4. 269. Rebecca says: 2 loads per week 270. Tracey says: 6 loads of laundry 271. Crystal says: I usually do about 4 loads a week. 272. Tanya N says: I like you on facebook 273. Tanya N says: I follow you on twitter 274. Kristel says: I do about 5 – 8 loads a week for our family of 5. We do have the “big” front loader machines which can hold a lot more than the regular ones. 275. Christine, D says: I do about 6 loads a week minimum…it never ends! 276. Marissa says: I do probably 8 to 10 loads each week. We are a family of 6 and I am the only girl. Boys are dirty! 277. Diana says: I usually do 7-8 loads. Whites, browns, darks, jeans-sweatshirts,towels, sheets-kids and ours, and the dog blankets. I will usually spread it out so I’m not doing it all in one day. 278. Julie says: I usually do 6-7 loads of laundry per week. These are AWESOME coupons. Would love to win! 279. Dawn says: We do at least 9 loads every week. Crazy to even think about how many we are doing every week. Thanks for the chance to win. 280. Mary says: Roughly 6 loads a week. 281. Mary says: Like you on Facebook. 282. Mary says: I receive your daily e-mails. 283. Sarah says: I do 4-5 loads of laundry a week. 284. Karen says: I do at least 4 loads a week. If my almost 8 month old is a bit more messy. . it ups the loads. 285. Karen says: I like you on facebook 286. sara says: i like you on facebook 287. sara says: i subscribed to you 288. sara says: i want these coupons, hope i win 289. sara says: hope i win 290. Monica says: I do 3-4 loads a week for myself and my fiance. I would love to win these coupons! I was thrilled to get this one in my sunday paper 291. I do 3-4 loads a week! 292. I follow you on twitter 293. I am your friend on FB 294. Kim says: I subscribe and follow you on Facebook 295. Tammy says: We do a minimum of eight loads a week, sometimes more!!! 296. Sandy Treece says: eight is a good number here, i don’t like having dirty clothes sitting around so i wash every couple of days. 297. Sandy Treece says: e mail subscriber 298. jennifer says: i do 4 to 5 loads per week. 299. Sonya says: I figure Ido about 9 loads a week. Wow that’s alot! 300. Sharon Pandolfini says: I do 6-8 loads of laundry a week! 301. Melissa G. says: We do 4-5 loads a week with a family of 3. Yuck! 302. Vivian says: i do 7 loads a week. 303. Safa says: I do about 4 loads a week…I dont mind washing the clothes, I just wish there was a machine to fold them 304. Ui says: I’m down to 4 loads of laundry a week. 305. Rhonda says: Personally, I do 2 loads of laundry per week. Both my teen boys do their own laundry…when I’m not home…so, I have no clue how many loads they do. 306. Becky says: we have a tiny all in one washer so i do at least 7 loads a week and there are only THREE of us i know not very green. I do hang dry though 307. Pam says: I check your site every day for new deals and ideas. Love it!! I try to do my laundry only once a week, so it often ends up being 7-8 loads for my husband and I. 308. Sandy says: We do about 6-7 loads a week. 309. Sandy says: I am an email subscriber. 310. Ed says: I would have 2 loads a week. 311. Me says: I generally do 2 loads a week. 312. Alison says: We do at least 6 a week! 313. Selena says: I do about 6 loads a week. But I actually like laundry– the actual doing of the laundry only takes a few minutes, and I catch up on DVR shows while I’m folding. It feels like a treat! 314. Michelle says: I do about 6 loads a week for a family of 4. My kids wear uniforms so it seems I am always washing for school! 315. Mary warr says: We do 5 loads of laundry, six on weeks we wash extras (comforters, curtains, sheets) 316. Mary warr says: I am a facebook fan! 317. Toya Livermore says: Same here 6 loads family of five. Thank goodness for coupons otherwise it could get pricey. 318. Toya Livermore says: Fan on Fackbook 319. KatieL says: We wash at least 7-10 loads a week. My husband is a part time massage therapist on top of his full time job…so we are washing at least 2 loads of just sheets each week.:( Gain dryer sheets would make the sheets smell so pretty for those getting a much needed massage! 320. Jennifer Sanslow says: i do about 2-4 loads of laundry a week. 321. Jennifer Sanslow says: i have already liked you on facebook. =) 322. Jennifer Sanslow says: i subscribed by email. 323. Victoria says: I think somewhere around 5-7. more in the summer because of all the icky sweat! 324. Michele says: I dos about 8 loads of laundry a week. I wash a lot of sports uniforms and practice clothes. 325. Patricia S. says: We usually do 4 loads of wash per week. 326. Yanira DelValle says: Wow, we do about 8 to 9 loads a week. FOr a family of 5! My 2 oldest do their own laundry, and I do my youngest. It’s a lot of loads, but is less work for me, and at the end of the day, that’s what counts! lollol 327. Jane says: At least 5 loads a week, minimum! 328. Jane says: follow on twitter 329. Jane says: fan on fb 330. Trinity says: we do about 4 loads for a family of 4.
{"url":"http://www.commonsensewithmoney.com/fun-giveaway-31-gain-fabric-softener-coupons-10-winners/","timestamp":"2014-04-18T10:41:37Z","content_type":null,"content_length":"282310","record_id":"<urn:uuid:763de7a2-ab6d-47fe-a0ee-78615cb5114d>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US20010004523 - Method for determining a parameter indicative of the progress of an extracorporeal blood treatment [0001] The invention relates to a method for determining a parameter indicative of the progress of an extracorporeal blood treatment, in particular a purification treatment whose purpose is to alleviate renal insufficiency, such as haemodialysis or haemodiafiltration. [0002] It will be recalled that haemodialysis consists in making a patient's blood and a treatment liquid approximately isotonic with blood flow, one on either side of the semipermeable membrane of a haemodialyser, so that, during the diffusive transfer which is established across the membrane in the case of substances having different concentrations on either side of the membrane, the impurities in the blood (urea, creatinine, etc.) migrate from the blood into the treatment liquid. The ion concentration of the treatment liquid is also generally chosen so as to correct the ion concentration of the patient's blood. [0003] In treatment by haemodiafiltration, a convective transfer by ultrafiltration, resulting from a positive pressure difference created between the blood side and the treatment-liquid side of the membrane, is added to the diffusive transfer obtained by dialysis. [0004] It is of the utmost interest to be able to determine, throughout a treatment session, one or more parameters indicative of the progress of the treatment so as to be able, where appropriate, to modify the treatment conditions that were fixed initially for the purpose of a defined therapeutic objective. [0005] The parameters, the knowledge of which makes it possible to follow the progress of the treatment, i.e. also to assess the suitability of the initially fixed treatment conditions to the therapeutic objective, are, in particular, the concentration in the blood of a given solute (for example, sodium) or the actual dialysance D or the actual clearance K of the exchanger for such and such a solute (the dialysance D and the clearance K representing the purification efficiency of the exchanger) or the dialysis dose administered after a treatment time t, which, according to the work of Sargent and Gotch, may be likened to the dimensionless ratio Kt/V, where K is the actual clearance in the case of urea, t the elapsed treatment time and V the volume of distribution of urea, i.e. the total volume of water in the patient (Gotch F. A. and Sargent S. A., “A mechanistic analysis of the National Cooperative Dialysis Study (NCDS)”, Kidney Int. 1985, Vol. 28, pp. 526-34). [0006] These parameters all have the same problem in respect of their determination, which is of requiring precise knowledge about a physical or chemical characteristic of the blood, whereas this characteristic cannot in practice be obtained by direct measurement on a specimen for therapeutic, prophylactic or financial reasons: firstly, it is out of the question to take, from a patient who is often anaemic, multiple specimens which would be necessary in order to monitor the effectiveness of the treatment during its execution; furthermore, given the risks associated with handling specimens of blood which may possibly be contaminated, the general tendency is to avoid such handling operations; finally, laboratory analysis of a specimen of blood is both expensive and relatively lengthy, this being incompatible with the desired objective. [0007] Several methods have been proposed hitherto for determining in vivo haemodialysis parameters without having to take measurements on the blood. [0008] Document EP 0,547,025 describes a method for determining the concentration of a substance, such as sodium, in a patient's blood subjected to a haemodialysis treatment. This method, which also makes it possible to determine the dialysance D—for example for sodium—of the haemodialyser used for administering the treatment, comprises the steps of: [0009] making a first haemodialysis liquid and a second haemodialysis liquid having different sodium concentrations flow in succession through the haemodialyser; [0010] measuring the conductivity of the first and second dialysis liquids, upstream and downstream of the haemodialyser; and [0011] computing the concentration of sodium in the patient's blood (or the dialysance D of the haemodialyser for sodium) from the values of the conductivity of the liquid which are measured in the first and second dialysis liquids upstream and downstream of the haemodialyser. [0012] Document EP 0,658,352 describes another method for the in vivo determination of the haemodialysis parameters, which comprises the steps of: [0013] making at least a first and a second treatment liquid, having a characteristic (the conductivity, for example) associated with at least one of the parameters (the ion concentration of the blood, the dialysance D, the clearance K, Kt/V, for example) indicative of the treatment, flow in succession through the haemodialyser, the value of the characteristic in the first liquid upstream of the exchanger being different from the value of the characteristic in the second liquid upstream of the exchanger; [0014] measuring, in each of the first and second treatment liquids, two values of the characteristic, respectively upstream and downstream of the exchanger; [0015] making a third treatment liquid flow through the exchanger while the characteristic of the second liquid has not reached a stable value downstream of the exchanger, the value of the characteristic in the third liquid upstream of the exchanger being different from the value of the characteristic in the second liquid upstream of the exchanger; [0016] measuring two values of the characteristic in the third liquid, respectively upstream and downstream of the exchanger; and [0017] computing at least one value of at least one parameter indicative of the progress of the treatment from the measured values of the characteristic in the first, second and third treatment [0018] Another method for the in vivo determination of the haemodialysis parameters which does not require taking measurements on the blood is described in document EP 0,920,877. This method includes the steps of: [0019] making a treatment liquid flow through the exchanger, this treatment liquid having a characteristic which has an approximately constant nominal value upstream of the exchanger; [0020] varying the value of the characteristic upstream of the exchanger and then re-establishing the characteristic to its nominal value upstream of the exchanger; [0021] measuring and storing in memory a plurality of values adopted by the characteristic of the treatment liquid downstream of the exchanger in response to the variation in the value of this characteristic caused upstream of the exchanger; [0022] determining the area of a downstream perturbation region bounded by a baseline and a curve representative of the variation with respect to time of the characteristic; and [0023] computing the parameter indicative of the effectiveness of a treatment from the area of the downstream perturbation region and from the area of an upstream perturbation region bounded by a baseline and a curve representative of the variation with respect to time of the characteristic upstream of the exchanger. [0024] All these methods have the common point of comprising a momentary modification of the value of a characteristic of the dialysis liquid (the conductivity, for example) and then the re-establishment of this characteristic to its initial value, which is generally the prescribed value. Even if the sequencing of the measurements is such that it takes less than two minutes to determine the desired parameter (the situation in the second method mentioned), it remains the case that all these methods can be carried out in practice only six times per hour. [0025] One objective of the invention is to propose a method for determining a parameter indicative of the progress of an extracorporeal blood treatment which is virtually continuous, reliable and having no influence on the treatment carried out. [0026] In order to achieve this objective, a method is provided for continuously determining a parameter (D, Cbin, K, Kt/V) indicative of the effectiveness of an extracorporeal blood treatment, consisting in making a patient's blood and a treatment liquid flow, one on either side of the semipermeable membrane of a membrane exchanger, this method comprising the steps of: [0027] making a treatment liquid having a characteristic (Cd) associated with the effectiveness of the treatment flow through the exchanger; [0028] causing a succession of variations in the characteristic (Cd) upstream of the exchanger; [0029] continuously storing in memory a plurality of values (Cdin1 . . . Cdinj . . . Cdinp) of the characteristic (Cd) upstream of the exchanger; [0030] measuring and continuously storing in memory a plurality of values (Cdout1 . . . Cdoutj . . . Cdoutp) adopted by the characteristic (Cd) downstream of the exchanger in response to the variations in the characteristic (Cd) which are caused upstream of the exchanger; [0031] computing, each time that a predetermined number of new values (Cdoutj) of the characteristic (Cd) downstream of the exchanger has been stored, a parameter (D, Cbin, K, Kt/V) indicative of the effectiveness of the extracorporeal blood treatment, from a first series of values (Cdinj) of the characteristic (Cd) upstream of the exchanger (1), from a second series of values (Cdoutj) of the characteristic (Cd) downstream of the exchanger, and by means of a mathematical model of the influence of the characteristic (Cd) on the effectiveness of the treatment, the mathematical model having at least one coefficient consisting of a parameter (D, Cbin) indicative of the effectiveness of the extracorporeal blood treatment. [0032] The advantage of this method is that it allows the parameters indicative of the progress of the treatment to be accurately and continuously determined from measurements taken continuously. The patient is never exposed to a treatment liquid very different from the prescribed treatment liquid (for example, one which is too rich or too depleted in sodium). Moreover, this method is not very sensitive to any kind of incident which may arise during the measurement of an isolated value and which may falsify the subsequent computations by making use of an erratic value. [0033] The implementation of this continuous determination method may be carried out according to one or more of the following specific ways: [0034] the parameter (D, Cbin, K, Kt/V) indicative of the effectiveness of the extracorporeal blood treatment is computed each time that a new value (Cdoutj) of the characteristic (Cd) downstream of the exchanger (1) has been stored; [0035] the second series of values (Cdoutj) of the characteristic (Cd) downstream of the exchanger (1), comprises the last value stored in memory; [0036] the second series of values (Cdoutj) of the characteristic (Cd) downstream of the exchanger (1) comprises a predetermined number of successive values. [0037] According to one characteristic of the invention, the method furthermore includes the step of establishing a correspondence between each value (Cdoutj+z) of the second series of values and a value (Cdinj) of the first series of values, the value (Cdoutj+z) of the second series of values being shifted in time with respect to the corresponding value (Cdinj) of the first series of values by a hydraulic delay (T) equal to the time taken by a liquid specimen to flow through a treatment liquid circuit connected to the exchanger, between a point lying upstream of the exchanger and a point lying downstream of the exchanger. [0038] When the hydraulic delay (T) is one of the coefficients of the mathematical model, it may be determined by the steps of: [0039] computing, by means of the mathematical model, for each value (Cdinj) of the first series of values, a corresponding value (Cd*outj+z) of the characteristic (Cd) downstream of the exchanger; [0040] determining the optimum value of the hydraulic delay (T) for which the correspondence between the computed values (Cd*outj+z) of the characteristic (Cd) downstream of the exchanger and the corresponding measured values (Cdoutj+z) of the characteristic (Cd) downstream of the exchanger is the most precise. [0041] According to another characteristic of the invention, the step of computing a parameter (D, Cbin) indicative of the effectiveness of the extracorporeal blood treatment comprises the steps of: [0042] computing, by means of the mathematical model, for each value (Cdinj) of the first series of values, a corresponding value (Cd*outj+z) of the characteristic (Cd) downstream of the exchanger; [0043] determining the optimum value of the parameter (D, Cbin) for which the correspondence between the computed values (Cd*outj+z) of the characteristic (Cd) downstream of the exchanger and the corresponding measured values (Cdoutj+z) of the characteristic (Cd) downstream of the exchanger is the most precise. [0044] According to yet another characteristic of the invention, the mathematical model is linear and the step of determining the optimum value of the parameter (D, Cbin) consists in determining that value of the parameter (D, Cbin) for which the sum of the squares of the differences between the measured values (Cdoutj+z) and the corresponding computed values (Cd*outj+z) of the characteristic (Cd) downstream of the exchanger is the least. [0045] According to the invention, the step of varying the characteristic (Cd) upstream of the exchanger may be carried out according to one of the following modes of implementation: [0046] either the characteristic is adjusted continuously as a function of the variation of a parameter of a device intended for implementing the treatment and/or of a parameter of the patient (for example, the relative variation in the blood volume of the patient), so that this parameter remains within a range of permissible values; [0047] or the characteristic is adjusted according to a rule of variation stored beforehand in memory, entailing, for example, the regular alternation of an increase and of a decrease in the characteristic of a defined amount; [0048] or the characteristic fluctuates according to the perturbations inherent in the preparation of the treatment liquid. [0049] Further characteristics and advantages of the invention will appear on reading the description which follows. Reference will be made to the single figure which illustrates, schematically and partially, a haemodialysis and haemodiafiltration system adapted to the implementation of the method according to the invention. [0050] The haemodialysis system illustrated in FIG. 1 comprises a haemodialyser 1 having two compartments 2, 3 separated by a semipermeable membrane 4. A first compartment 2 has an inlet connected to a line 5 for taking a blood sample, in which line there is a haemoglobin measurement probe 24 and a circulating pump 6, and an outlet connected to a line 7 for returning the blood, in which line a bubble trap 8 is interposed. [0051] The second compartment 3 of the haemodialyser 1 has an inlet connected to a line 9 for supplying fresh dialysis liquid and an outlet connected to a line 10 for discharging spent liquid (the dialysis liquid and the ultrafiltrate). [0052] The supply line 9 links the haemodialyser 1 to a device 11 for preparing the dialysis liquid, comprising a main line 12, the upstream end of which is designed to be connected to a supply of running water. Connected to this main line 12 are a first secondary line 13 and a second secondary line 14. The first secondary line 13, which is looped back onto the main line 12, is provided with coupling means for fitting a cartridge 15 containing sodium bicarbonate in granule form. It is furthermore equipped with a pump 16 for metering the bicarbonate into the dialysis liquid, the pump being located downstream of the cartridge 15. The operation of the pump 16 is determined by the comparison between 1) a first conductivity setpoint value for the solution forming at the junction of the main line 12 and the secondary line 13 and 2) the value of the conductivity of this mixture measured by means of a first conductivity probe 17 located in the main line 12 immediately downstream of the junction between the main line 12 and the first secondary line 13. [0053] The free end of the second secondary line 14 is intended to be immersed in a container 18 for a concentrated saline solution containing sodium chloride, calcium chloride, magnesium chloride and potassium chloride, as well as acetic acid. The second line 14 is equipped with a pump 19 for metering sodium into the dialysis liquid, the operation of which pump depends on the comparison between 1) a second conductivity setpoint value for the solution forming at the junction of the main line 12 and the second secondary line 14 and 2) the value of the conductivity of this solution measured by means of a second conductivity probe 20 located in the main line 12 immediately downstream of the junction between the main line 12 and the secondary line 14. [0054] The supply line 9 forms the extension of the main line 12 of the device 11 for preparing the dialysis liquid. Located in this supply line are, in the direction of flow of the liquid, a first flow meter 21, a first circulating pump 22 and a third conductivity probe 23. [0055] The downstream end of the line 10 for discharging the spent liquid is designed to be connected to the drain. Located in this line are, in the direction of flow of the liquid, a fourth conductivity probe 25, a second circulating pump 26 and a second flow meter 27. An extraction pump 29 is connected to the discharge line 10, upstream of the second circulating pump 26. [0056] The haemodialysis system illustrated in FIG. 1 also comprises a computing and control unit 30. This unit is linked to a screen 31 and to a keyboard 32 via which the user communicates the various setpoint values to it: flow-rate settings (blood flow rate Qb, dialysis liquid flow rate Qd), conductivity settings used for preparing the dialysis liquid, treatment duration setting and weight loss setting WL. Moreover, the computing and control unit 30 receives information from the measurement devices of the system, such as the flow meters 21, 27, the conductivity probes 17, 20, 23 , 25 and the haemoglobin measurement probe 24. This unit controls, depending on the instructions received, on the modes of operation and on the programmed algorithms, the driving devices of the system, such as the pumps 6, 16, 19, 22, 26, 29. [0057] The haemodialysis system that has just been described can operate in a relatively simple first mode and in a more sophisticated second mode. [0058] First mode of operation [0059] After the extracorporeal blood circuit has been rinsed and filled with sterile saline solution, it is connected to the patient and the blood pump 6 is operated with a predetermined flow rate Qb, for example 200 ml/min. [0060] Simultaneously, the pumps 16 and 19 of the device 11 for preparing the dialysis liquid, the pumps 22, 26 for circulating the dialysis liquid and the extraction pump 29 are operated. The flow rate of the metering pumps 16, 19 is controlled by means of the conductivity probes 17, 20 so that the dialysis liquid has the desired bicarbonate concentration and the desired sodium concentration. The flow rate Qd of the circulating pump 22 located in the supply line 9 is set at a fixed value (500 ml/min., for example), whereas the flow rate of the circulating pump 26 located in the discharge line 10 is permanently adjusted so that the flow rate measured by the second flow meter 27 is equal to the flow rate measured by the first flow meter 21. The flow rate of the extraction pump 29 is set so as to be equal to the rate of weight loss (computed from the weight WL that the patient is prescribed to lose and from the duration of the treatment session), possibly increased by the flow rate of a liquid infused into the patient. [0061] The signal delivered by the haemoglobin measurement probe 24 is used by the control unit 30 to regularly compute, from the initial value of the haemoglobin concentration of the blood, the relative variations in the volume of the patient's blood. [0062] Second mode of operation [0063] In the second mode of operation, the control unit 30 also controls the extraction pump 29 and/or the metering pump 19 so that the relative variations in the volume of the patient's blood remain within a range of permissible values. [0064] According to the invention, the effectiveness of the treatment administered to the patient by means of the system that has just been described is determined continuously by means of the following method, the implementation of which assumes the prior definition of a mathematical model describing, in the form of an equation or of a system of equations, the influence of the characteristic (Cd) of the dialysis liquid on the effectiveness of the treatment, this mathematical model having at least one coefficient consisting of a parameter (D, Cbin) indicative of the effectiveness of the extracorporeal blood treatment. [0065] In the text below, the example of a mathematical model of the exchanges taking place across the membrane 4 of the haemodialyser 1 will be taken. This mathematical model, at least one of the coefficients of which is one of the parameters indicative of the effectiveness of the treatment that it is desired to determine, establishes a relationship between a value of a characteristic of an elementary volume of the dialysis liquid upstream of the haemodialyser 1 and a value of the characteristic of an elementary volume downstream of the haemodialyser 1. Thus, a mathematical model expressing the relationship between the value Cdin of the ion concentration (or of the conductivity) of a specimen of dialysis liquid upstream of the haemodialyser and the value Cdout of the ion concentration (or of the conductivity) of a specimen of dialysis liquid downstream of the haemodialyser may comprise, for example, one or more of the following coefficients: [0066] the dialysance D, [0067] the ion concentration of the blood Cbin, [0068] the hydraulic delay T, which is equal to the time taken by a specimen of liquid to flow between the upstream point of conductivity measurement (the second conductivity probe 20, if the conductivity values used by the computing unit 30 are the setpoint values, or the third conductivity probe 23 if the conductivity values used by the computation unit 30 are measured values) and the downstream point of conductivity measurement (the fourth conductivity probe 25); the hydraulic delay T essentially depends on the flow rate of dialysis liquid, on the volume of the lines 9 and 10 between the conductivity probes 20 or 23 and 25 and on the capacity of the dialysis liquid compartment 3 of the haemodialyser 1, and [0069] the time constant Ø of the system; the time constant depends only on the dialysis liquid and blood flow rates, on the area of the membrane 4 and on the diffusion coefficient of the membrane for the solute in question, that is to say here, sodium. [0070] Once defined, the mathematical model is stored in a memory of the control and computing unit 30. [0071] The method according to the invention comprises a first step in which the conductivity of the dialysis liquid upstream of the haemodialyser 1 is subjected, preferably throughout the treatment session, to a succession of low-amplitude variations (that is to say variations not departing, or rarely, by more than approximately 5% of the mean conductivity of the dialysis liquid). This succession of variations may be controlled or uncontrolled. [0072] It is uncontrolled when, for example, the control of the pumps 16 and 19 is not perfectly slaved to the measurements taken by the conductivity probes 17 and 20 and when the dialysis liquid produced by the generator 11 is not completely homogeneous. [0073] The succession of variations is controlled when, for example, it follows a predetermined rule of variation stored in a memory of the control and computing unit 30: the speed of the pump 19 may, for example, either be modified randomly, or be modified regularly so that the conductivity of the dialysis liquid increases and then decreases continuously for the same time and by the same amount above and below the prescribed value. [0074] The succession of variations is also controlled when, according to the second mode of operation of the haemodialysis system mentioned above, the sodium concentration of the dialysis liquid is slaved to a comparison between the measured relative variation in the blood volume and a range of permissible values. [0075] In a second step of the method, a plurality of discrete values (Cdin1 . . . Cdinj . . . Cdinp) adopted by the conductivity of the dialysis liquid upstream of the haemodialyser 1, at times t1 . . . tj . . . tn, is stored in memory in the control and computing unit 30. Any two successive instants tj, tj+1 are separated by the same sampling period Ts. When the variations in the conductivity are controlled, it is preferably the conductivity values corresponding to control signals which are stored in memory. On the other hand, when the conductivity variations result from the mode of production of the dialysis liquid, the conductivity values (Cdin1 . . . Cdinj . . . Cdinp) which are stored in memory are measured by means of the third conductivity probe 23. [0076] In a third step of the method, a plurality of discrete values (Cdout1 . . . Cdoutj . . . Cdoutp) of the conductivity of the dialysis liquid is measured downstream of the haemodialyser 1, at the instants t1 . . . tj . . . tn, by means of the fourth conductivity probe 25, and is stored in memory in the control and computing unit 30. Corresponding to each conducting value Cdinj at the instant t=j downstream of the haemodialyser is a conductivity value Cdoutj+n upstream of the haemodialyser at the instant t=j+z, the time shift between these two values being equal to the hydraulic delay T (z=T/Ts). [0077] The next step in the method is a computation step. For each value (Cdinj) of a series of m values of the conductivity upstream of the dialyser, and from an initial estimated value (D1) of the parameter (for example, the dialysance D) whose actual value at any moment it is desired to determine, the control and computing unit 30 computes, by means of the mathematical model, a value (Cd*outj+z) of the conductivity downstream of the haemodialyser 1 (hereafter, the symbol * indicates a computed value). Each computed value (Cd*outj+z) of the downstream conductivity for the instant t=j+z is then compared with the downstream measured value (Cdoutj+z) at the instant t=j+z. If the result of the comparison indicates that the computed values (Cd*outj+z) and the measured values (Cdoutj+z) are close (if their difference or their quotient is, for example, less than a predetermined threshold), the control and computing unit 30 displays the numerical value D1 of the parameter D used in the computations as being the instantaneous actual value of the parameter. Otherwise, the computing unit 30 reiterates the preceding operations with a second, and then possibly a third, fourth, etc., numerical value D2, D3, D4 of the parameter D until the result of the comparison is satisfactory. [0078] When the mathematical model used is a first-order mathematical model, one method particularly appropriate for determining the dialysance D is the method of least squares, which it will be recalled consists in selecting that numerical value (D1, D2, . . . Dn) of the dialysance D for which the sum of the squares of the differences between the measured value and the corresponding computed value of the characteristic downstream of the haemodialyser is the minimum, i.e.: Σ[Cdoutj+z−Cd*outj+z] ^2 [0079] According to the invention, the method which has just been described is continuous: [0080] at any moment, the m measured values (Cdout1 . . . Cdoutm) of the downstream conductivity from which the calculations are made include the last measured value or one of the last measured values (Cdout1 . . . Cdoutm) of the downstream conductivity; [0081] the parameter D whose actual value it is desired to establish is determined each time that a new value of the downstream conductivity (Cdoutj) is measured and stored in memory, or, more generally every time an integral number of new values (Cdoutj) is stored (for example every two or three values). [0082] The number of values m from which the parameter D is determined is chosen depending on the sampling period Ts so that the total acquisition time for these m values to be sufficiently short so that it is possible to consider that the ion concentration of the blood remains constant during this acquisition time. [0083] From the actual value of the dialysance D, from a value of the conductivity Cdinj fixed or measured upstream of the haemodialyser 1, from the corresponding value of the conductivity Cdoutj+z measured downstream of the haemodialyser 1 and from the flow rate Qd of the dialysis liquid, the computing and control unit 30 can compute the equivalent blood ion concentration Cbin of the blood by applying the conventional formula: $D = Qd × Cdinj - Cdoutj + z Cbin - Cdinj$ [0084] The computing and control unit 30 can furthermore compute the actual clearance K for urea from the actual value of the dialysance D and from look-up tables, stored beforehand in memory, for the correspondence between the dialysance D for sodium and the clearance K for urea. [0085] Finally, the computing and control unit 30 can also compute the administered dialysis dose Kt/V from the actual clearance K, from the elapsed treatment time t and from the urea distribution volume V in the patient (which depends on the average weight, the sex and the age). [0086] A first example of the mathematical model, in the context of the invention, stems from the following differential equation which represents the transfer of an ionized substance (sodium) through the membrane of a haemodialyser in which a patient's blood and a dialysis liquid are made to flow, one on either side of the membrane: $ Cdout ( t ) t = 1 Ø [ - Cdout ( t ) + ( 1 - Dr ) × Cdin ( t - T ) + Dr × Cbin ( t ) ] ( I )$ [0087] with Dr=D/Qd, where Qd is the flow rate of dialysis liquid and D is the dialysance. [0089] Cdin(t) is the sodium concentration in the dialysis liquid, upstream of the haemodialyser; [0090] Cdout(t) is the sodium concentration in the dialysis liquid, downstream of the haemodialyser; [0091] Cbin(t) is the sodium concentration in the blood, upstream of the haemodialyser; [0094] Dr is the relative dialysance. [0095] Starting from the observation that, over a time interval of the order of a few minutes, the ion concentration of the blood, Cbin, does not vary substantially, and only considering discrete values adopted by the conductivity upstream (Cbinj) and downstream (Cboutj) of the haemodialyser at successive instants t1 . . . tj, tj+1 . . . tm, equation (I) may be re-written in the following [0096] with z=T/Ts, Ts being the sampling period of the conductivity of the dialysis liquid, the values Cdoutj+z and Cdinj therefore representing the conductivity of the same volume of liquid before and after it passes through the haemodialyser. In this example, it will be assumed that the hydraulic delay is known. [0097] The coefficients a, b, c in equation (II) are related to the coefficients of the differential equation (I) in the following manner: $Ø = - Ts ln ( a ) ( III ) Dr = 1 - b 1 - - Ts / Ø ( IV )$ $Cbin = C Dr × ( 1 - - Ts / ø ) ( V )$ [0098] According to the invention, equations (II) to (V) constitute a mathematical model which can be used for implementing a method for determining the dialysance D and the sodium concentration Cbin in the blood during a dialysis treatment. [0099] A first step in the method consists in making the conductivity of the dialysis liquid vary continuously, about an average value. [0100] The value Cdin adopted by the conductivity upstream of the dialyser (the measured value or the setpoint value) is regularly and cumulatively stored in a memory in the computing unit 30, which therefore permanently contains a plurality of discrete values of the conductivity (Cdin1 . . . Cdinj . . . Cdinp) taken upstream of the haemodialyser 1 respectively at the instants t1 . . . tj . . . tp separated by the sampling period Ts. [0101] Likewise, the value Cdout adopted by the conductivity downstream of the dialyser (the measured value) is regularly and cumulatively stored in a memory in the computing unit 30, which therefore permanently contains a plurality of discrete values of the conductivity (Cdout1 . . . Cdoutj . . . Cdoutp) taken downstream of the haemodialyser 1 respectively at the instants t1 . . . tj . . . tp separated by the sampling period Ts. [0102] In order to determine the value of Cbin and of D at the instant tp, the computing unit is programmed to set, based on a first series of m values Cdin and a second series of m values Cdout, a series of equations: [0103] in which errj+z is the difference between the computed value Cd*outj+z, using equation (II), of the conductivity for the instant j+z downstream of the haemodialyser and the measured value Cdoutj+z of the conductivity at the instant j+z downstream of the haemodialyser. [0106] The computing unit is also programmed to carry out the method of least squares, by which it is possible to determine that matrix P for which the difference between the computed values Cd*out and measured values Cdout is the least, i.e.: [0107] where H′ is the transpose matrix of H. [0108] When P is known, that is to say the coefficients a, b, c are known, the computing unit 30 computes and displays Cbin and D using equations (III) to (V). [0109] Computing the variance V in the difference, err=Cd*out− Cdout, using the formula: $V = ( Z - H × P ) - 1 × ( Z - H × P ) m - 3$ [0110] provides information about the accuracy of the value of the dialysance D and of the ion concentration of the blood Cbin which are determined by means of the method according to the invention. [0111] According to the invention, each time that a new pair of values Cdinj, Cdoutj+z is stored in memory, the computing unit 30 determines a new value of the dialysance D and a new value of the ion concentration of the blood Cbin from the two most recent series of the m values adopted by the conductivity of the dialysis liquid upstream and downstream of the dialyser. [0112] In this example, the mathematical model is the same as before, but the hydraulic delay T is not known. The method according to the invention then comprises a preliminary step of determining the hydraulic delay, which consists in carrying out iteratively the computations which have just been described, each time using a different numerical value of the hydraulic delay T. The value of the hydraulic delay which is adopted after this preliminary step is that for which the variance V of the difference err=Cd*out−Cdout is the least. [0113] This step of determining the hydraulic delay does not, of course, have to be reiterated each time that a new pair of conductivity values Cdinj, Cdoutj+z is stored in memory. However, each time that the flow rate of dialysis liquid is modified, or if a new haemodialyser is used during the session, the hydraulic delay must be computed again. [0115] According to the invention, the value Cdin adopted by the conductivity upstream of the dialyser (the measured value or the setpoint value) is regularly and cumulatively stored in a memory in the computing unit 30, which therefore permanently contains a plurality of discrete values of the conductivity (Cdin1 . . . Cdinj . . . Cdinp) taken upstream of the haemodialyser 1 respectively at the instants t1 . . . tj . . . tp separated by the sampling period Ts. [0116] Likewise, the value Cdout adopted by the conductivity downstream of the dialyser (the measured value) is regularly and cumulatively stored in memory in the computing unit 30, which therefore permanently contains a plurality of discrete values of the conductivity (Cdout1 . . . Cdoutj . . . Cdoutp) taken downstream of the haemodialyser 1 respectively at the instants t1 . . . tj . . . tp separated by the sampling period Ts. [0118] Provided that the period of the periodic variation imposed on the conductivity is chosen to be sufficiently long compared with the time constant Ø of the system, the relative dialysance Dr may be computed simply by means of the following formula: $Dr = D Qd = 1 - Cdout Cdin $ [0119] In order to compute the ion concentration Cbin of the blood, the computing unit 30 determines beforehand, from the m last values of conductivity recorded, the average conductivity CdinM upstream of the haemodialyser and the average conductivity CdoutM downstream of the haemodialyser, and then it applies the following formula: $Cbin = 1 Dr × CdoutM - ( 1 - Dr ) Dr × CdinM$ [0120] One advantage of this second mode of implementing the invention is that at no moment does it require the hydraulic delay T to be known. [0121] The invention is not limited to the examples of implementation that have just been described, and it is capable of variants.
{"url":"http://www.google.com/patents/US20010004523?dq=7751826","timestamp":"2014-04-23T20:58:16Z","content_type":null,"content_length":"118792","record_id":"<urn:uuid:9f1f651f-ed53-45e3-bb2b-cda9b82fd617>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
Martins Additions, MD Algebra 1 Tutor Find a Martins Additions, MD Algebra 1 Tutor ...I have also had extensive coursework in econometrics and used econometrics in research papers in both college and graduate school. I received an A in college-level linear algebra course, and have taken and received As and Bs in additional math courses (ordinary differential equations, partial di... 16 Subjects: including algebra 1, calculus, statistics, geometry ...My tutoring style involves coaching the student into reaching the answers on their own. For example, If they struggle to find the correct equation, I teach them how to locate resources and how to look at the variables present to select the best way to solve a problem. After working through a pr... 39 Subjects: including algebra 1, Spanish, chemistry, writing ...In addition, I have applied nonparametric methods including rank-sum tests, randomization tests, and bootstrapping. I have a preference for using R for these analyses, although I have used Minitab, SPSS, MATLAB, and SAS some. As an undergraduate, I took a course in linear algebra and a course in differential equations that involved applications of it. 15 Subjects: including algebra 1, calculus, statistics, geometry ...Surely, anyone taking the GRE can benefit from applying these strategies. I have about 2 years of experience working as a tutor with Kaplan. In addition to teaching science and math, I helped students improve their reading comprehension and writing skills in preparation for a number of different standardized tests. 23 Subjects: including algebra 1, reading, English, writing ...I have played in jazz bands in high school and continued into college spanning 3 years. I have also played shows with many different groups of musicians on both cello and electric bass as well as other instruments in the Long Island and Washington, DC areas. I have tutored students on both elec... 10 Subjects: including algebra 1, calculus, elementary math, algebra 2
{"url":"http://www.purplemath.com/Martins_Additions_MD_algebra_1_tutors.php","timestamp":"2014-04-21T00:07:40Z","content_type":null,"content_length":"24815","record_id":"<urn:uuid:8a928313-4419-42f8-9b0c-f7f3cae554bd>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00461-ip-10-147-4-33.ec2.internal.warc.gz"}
Sean Prendiville's Home Page Postdoctoral Research Associate Email: Sean.Prendiville@Bristol.ac.uk Office: Office 2.16, 2nd Floor, Howard House, Queens Avenue, Bristol, BS8 1SD, United Kingdom Research Interests: The Hardy-Littlewood Circle Method, Arithmetic Combinatorics. Number Theory and Group Theory. First year course, University of Bristol, Autumn 2012. Galois Theory. Fourth year course, University of Bristol, Autumn 2012. [1] S.M. Prendiville, Solution-free sets for sums of binary forms, P. London Math. Soc. (accepted, to appear), 36pp. [2] S.T Parsell, S.M. Prendiville and T.D. Wooley, Near-optimal mean value estimates for multidimensional Weyl sums, submitted, 58pp. [3] S.M. Prendiville, On solution-free multidimensional sets of integers, 28pp. PhD Thesis. Completed under the supervision of Prof. T. D. Wooley, FRS. Lecture Notes. Aspects of Sieve Methods From a Slightly Personal Point of View (draft) - A series of 12 lectures given by Prof. Christopher Hooley FRS in January 2009 and January 2010. Comments and corrections would be gratefully received at the above email address. Analytic Number Theory - A course given by T.D. Browning (autumn 2007). These notes comprise only half the course. Newton's Identities on symmetric polynomials. My old band http://sidewayssometimes.com/
{"url":"http://www.maths.bris.ac.uk/~sp2331/","timestamp":"2014-04-19T04:19:04Z","content_type":null,"content_length":"13010","record_id":"<urn:uuid:02052c31-8dc3-4d6e-9414-7e8fe255c278>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00387-ip-10-147-4-33.ec2.internal.warc.gz"}
Evaluting Binomial Coefficients that contain fractions? September 19th 2010, 12:41 PM Evaluting Binomial Coefficients that contain fractions? I was just wondering how I would evaluate something like: ${\frac{1}{2} \choose 1}$ (1/2 choose 1) ${\frac{1}{2} \choose 2}$ (1/2 choose 2) I've never seen fractions in binomial coefficients. September 20th 2010, 05:29 AM Binomial coefficient - Wikipedia, the free encyclopedia talks about that way down the page at "Binomial Coefficient with n= 1/2". Essentially, you write $\begin{pmatrix}n \\ m\end{pmatrix}$ as $\frac{n!}{m!(n-m)!}= \frac{n(n-1)(n- 2)\cdot\cdot\cdot(n-m+1)}{m(m-1)(m-2)\cdot\cdot\cdot(3)(2)(1)}$ with the difference that since n is not an integer, you never end at "1" or "n- m+1". Instead, you get an infinite product which must be shown to have a finite value. Wikipedia gives, explicitely, $\begin{pmatrix}\frac{1}{2} \\ n \end{pmatrix}= \begin{pmatrix}2k+1 \\ k\end{pmatrix}\frac{(-1)^{k+1}(k+1)}{2^{2n}(2k-1)(2k+1)}$
{"url":"http://mathhelpforum.com/number-theory/156720-evaluting-binomial-coefficients-contain-fractions-print.html","timestamp":"2014-04-17T11:04:13Z","content_type":null,"content_length":"5477","record_id":"<urn:uuid:9a14b9fb-a29b-488f-80e1-664b51fcbd4b>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00415-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Limits! Because the zero may be in the googolth place. Maybe it is in the bajilionth place. Maybe it's in the googolplexth place. Or in Graham's numberth place. The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=15863&p=23","timestamp":"2014-04-16T07:34:23Z","content_type":null,"content_length":"15689","record_id":"<urn:uuid:31a7a3a1-8f53-446b-8352-96073db04fe5>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00458-ip-10-147-4-33.ec2.internal.warc.gz"}
Graham's Law Graham's Law: Problems 1 - 10 Problem #1: If equal amounts of helium and argon are placed in a porous container and allowed to escape, which gas will escape faster and how much faster? Set rate[1] = He = x Set rate[2] = Ar = 1 The molecular weight of He = 4.00 The molecular weight of Ar = 39.95 Graham's Law is: r[1] over r[2] = √MM[2] over √MM[1] Substituting, we have: x / 1 = √(39.95 / 4.00) x = 3.16 times as fast. Video: Solution to a Graham's Law Problem Problem #2: What is the molecular weight of a gas which diffuses 1/50 as fast as hydrogen? Set rate[1] = other gas = 1 Set rate[2] = H[2] = 50 The molecular weight of H[2] = 2.02 The molecular weight of the other gas = x. By Graham's Law (see the answer to question #1), we have: 1 / 50 = √(2.02 / x) x = 5050 g/mol Video: Solution to a Graham's Law Problem Problem #3: Two porous containers are filled with hydrogen and neon respectively. Under identical conditions, 2/3 of the hydrogen escapes in 6 hours. How long will it take for half the neon to Set rate[1] = H[2] = x Set rate[2] = Ne = 1 The molecular weight of H[2] = 2.02 The molecular weight of Ne = 20.18 By Graham's Law: x / 1 = √(20.18 / 2.02) x = 3.16 Since the H[2] escapes 3.16 times as fast as Ne, this calculation determines the amount of Ne leaving in 6 hours: 0.67 / 3.16 = 0.211 Calculate the time needed for half the Ne to escape, knowing that 0.211 escapes in 6 hours: 0.211 / 6 = 0.50 / x x = 14.2 hours Problem #4: If the density of hydrogen is 0.090 g/L and its rate of diffusion is 5.93 times that of chlorine, what is the density of chlorine? Set rate[1] = H[2] = 5.93 Set rate[2] = Cl[2] = 1 The molecular weight of H[2] = 2.02 The molecular weight of Cl[2] = x. By Graham's Law: 5.93 / 1 = √(x / 2.02) x = 71.03 g/mol Determine gas density using the molar volume: 71.03 g / 22.414 L = 3.169 g/L Problem #5: How much faster does hydrogen escape through a porous container than sulfur dioxide? Set rate[1] = H[2] = x Set rate[2] = SO[2] = 1 The molecular weight of H[2] = 2.02 The molecular weight of SO[2] = 64.06 By Graham's Law: x / 1 = √(64.06 / 2.02) x = 5.63 times as fast Problem #6: Compare the rate of diffusion of carbon dioxide (CO[2]) & ozone (O[3]) at the same temperature. The molecular weight of CO[2] = 44.0 The molecular weight of O[3] = 48.0 Do two things: set O[2] rate = 1 (since it is the heavier) assign it to be r[2] (since r[2] is in the denominator) Graham's Law: r[1] over r[2] = √[(molec. wt[2] over molec. wt[1])] x over 1 = √(48 over 44) x = 1.04 CO[2] diffuses 1.04 times as fast as O[3] Problem #7: 2.278 x 10¯^4 mol of an unidentified gaseous substance effuses through a tiny hole in 95.70 s. Under identical conditions, 1.738 x 10¯^4 mol of argon gas takes 81.60 s to effuse. What is the molar mass of the unidentified substance? The first thing we need to do is compute the rate of effusion for each gas: unknown gas: 2.278 x 10¯^4 mol / 95.70 s = 2.380 x 10¯^6 mol/s argon: 1.738 x 10¯^4 mol / 81.60 s = 2.123 x 10¯^6 mol/s Now, we are ready to use Graham's Law. Please note: (1) I will drop the 10¯^6 from each rate and (2) we know the molar mass of argon from reference sources. Let argon be r[1]: 2.123 / 2.380 = √(x / 39.948) Square both sides and solve for x: 0.7957 = x / 39.948 x = 31.786 g/mol (31.79 to 4 significant figures) Problem #8: A compound composed of carbon, hydrogen, and chlorine diffuses through a pinhole 0.411 times as fast as neon. Select the correct molecular formula for the compound: a) CHCl[3] b) CH[2]Cl[2] c) C[2]H[2]Cl[2] d) C[2]H[3]Cl Let r[1] = 0.411; this means r[2] (the rate of effusion for Ne) equals 1. Inserting values into Graham's Law yields: 0/411 / 1 = √(20.18 /x ) the 20.18 is the atomic weight of Ne. Squaring both sides gives: 0.16892 = 20.18 / x Solving for x yields: x = 119.46 g/mol Examining the formulas for the possible answers, we see that answer a (CHCl[3]) gives a molecular weight of about 119.5. Problem #9: Which pair of gases contains one which effuses at twice the rate of the other in the pair? A. He and Ne B. Ne and CO[2] C. He and CH[4] D. CO[2] and HCl E. CH[4] and HCl 1) We can solve this problem by solving a fake problem: Set rate[1] = 2 Set rate[2] = 1 We now have a gas (rate[1]) effusing twice as fast as another gas (rate[2]). We now want to know how much heavier the slower gas is. Set MM[1] = 1 Set MM[2] = x Our faster gas (rate[1]) is also our lighter gas (MM[1]). We know want to know the molar mass (MM[2]) of our heavier, slower (rate[2]) gas. Notice how I set the lighter gas' mass equal to 1. I could have used any number, all I need to know is how many times larger the mass of the slower gas is. 2) Use Graham's Law: 2/1 = √(x/1) x = 4 Our heavier gas is four times heaver than the lighter gas (remember that the lighter is effusing twice as fast as the heavier gas). 3) Answer the question: We look for a pair of gases in which the heavier gas is four times as heavy as the lighter gas. We find the only choice which satisfies that criterion is answer c. To continue the 'twice as' theme, you could solve this problem, if you wish: Oxygen weighs approximately twice as much as methane. Under the same conditions of temperature and pressure, how much faster does a sample of methane effuse than a sample of oxygen? Problem #10: If a molecule of CH[4] diffuses a distance of 0.530 m from a point source, calculate the distance (in meters) that a molecule of N[2] would diffuse under the same conditions for the same period of time. Assume the gases each diffuse in one second, in order to create a rate. Set rate[1] = N[2] = x Set rate[2] = CH[4] = 0.530 m/s The molecular weight of N[2] = 28.0 The molecular weight of CH[4] = 16.0 Graham's Law is: r[1] over r[2] = √MM[2] over √MM[1] Substituting, we have: x / 0.530 = √(16.0 / 28.0) x = 0.400 m/s
{"url":"http://www.chemteam.info/GasLaw/WS1-Graham.html","timestamp":"2014-04-18T15:39:00Z","content_type":null,"content_length":"9144","record_id":"<urn:uuid:58ba726c-e93b-4643-969e-c5b36e95ad70>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
Efficient Serial and Parallel Algorithms for Selection of Unique Oligos in EST Databases Advances in Bioinformatics Volume 2013 (2013), Article ID 793130, 6 pages Research Article Efficient Serial and Parallel Algorithms for Selection of Unique Oligos in EST Databases ^1Department of Computer Science, Memorial University, Canada ^2Department of Mathematics and Statistics, Memorial University, Canada Received 15 October 2012; Accepted 14 February 2013 Academic Editor: Alexander Zelikovsky Copyright © 2013 Manrique Mata-Montero et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Obtaining unique oligos from an EST database is a problem of great importance in bioinformatics, particularly in the discovery of new genes and the mapping of the human genome. Many algorithms have been developed to find unique oligos, many of which are much less time consuming than the traditional brute force approach. An algorithm was presented by Zheng et al. (2004) which finds the solution of the unique oligos search problem efficiently. We implement this algorithm as well as several new algorithms based on some theorems included in this paper. We demonstrate how, with these new algorithms, we can obtain unique oligos much faster than with previous ones. We parallelize these new algorithms to further improve the time of finding unique oligos. All algorithms are run on ESTs obtained from a Barley EST database. 1. Introduction Expressed Sequence Tags (or ESTs) are fragments of DNA that are about 200–800 bases long generated from the sequencing of complementary DNA. ESTs have many applications. They were used in the Human Genome Project in the discovery of new genes and are often used in the mapping of genomic libraries. They can be used to infer functions of newly discovered genes based on comparison to known genes [ An oligonucleotide (or oligo) is a subsequence of an EST. Oligos are short, since they are typically no longer than 50 nucleotide bases. Oligos are often referred to in the context of their length by adding the suffix “mer”. For example, an oligo of length 9 would be referred to as a 9-mer. The importance of oligos in relation to EST databases is quite significant. An oligo that is unique in an EST database serves as a representative of its EST sequence. The oligonucleotides (or simply oligos) contained in these EST databases have applications in many areas such as PCR primer design, microarrays, and probing genomic libraries [2–4]. In this paper we will improve on the algorithms presented in [2] to solve the unique oligos search problem. This problem requires us to determine all oligos that appear in one EST sequence but not in any of the others. In addition, we will consider two oligos to be virtually identical if they fall within a certain number of mismatches from each other. In the appendix we include all the algorithms used and developed in this paper. 2. The Unique Oligos Search Problem In this paper we use the notation to denote the Hamming Distance between the strings and . Given an EST database , where is a string over the alphabet , integers and , and -mer , we say that occurs approximately in if there exists a substring of some EST such that . We also say that an -mutant list of a string is a list of all possible strings, , of length over the alphabet such that . Such a string is referred to as an -mutant of . A unique oligo of is defined as an -mer such that occurs exactly in one EST and does not occur approximately in any other EST. The unique oligos search problem is the problem of finding all unique oligos in an EST database. Many algorithms have been presented to solve this problem [5, 6]. The algorithm presented in [2] relies on an observation that if two -mers agree within a specific Hamming Distance, then they must share a certain substring. These observations are presented in this paper as theorems. Theorem 1. Suppose one has two -mers and such that . If one divides them both into substrings, and , and each , except possibly , has length , then there exists at least one , such that . Proof. Suppose by contradiction that for any , and have at least 2 mismatches. Then which is a contradiction to the fact that . Using this observation, an algorithm was presented in [2] which solves the unique oligos search problem in time . The algorithm can be thought of as a two-phase method. In the first phase we record the position of each -mer in the database into a hash table of size . We do so in such a way that for each -mer over the alphabet we have that whereby is an EST sequence, is the position of within that sequence, and is the number of occurrences of in the database. In the second phase, we extend every pair of identical -mers into -mers and compare these -mers for nonuniqueness. We also do the same for pairs that have a Hamming Distance of 1. If they are nonunique, we mark them accordingly. Theorem 1 guarantees that if an -mer is nonunique, then it must share a -mer substring that differs by at most one character with another -mer substring from another -mer. Hence, if an -mer is nonunique, it will be marked during phase two. Assuming there are symbols in our EST database, the filing of the -mers into the hash table takes time . In phase two, we assume that the distribution of -mers in the database is uniform; in other words, that each table contains entries. Thus we have comparisons within each table entry. Each -mer also has a 1-mutant list of size , so, we have comparisons for each entry in the table. Also, the time required to extend each pair of -mers to -mers is . Given that we have entries in the hash table, we have a total time complexity of where In [7], several variations of Theorem 1 are presented. We can use these theorems to generate similar algorithms with slightly different time complexities. Theorem 2. Suppose one has two -mers and such that . If one divides them both into substrings, and , and each , except possibly , has length , then there exists at least one , such that . Proof. Suppose by contradiction that we cannot find any such that . Then there exists at least one mismatch between and for each , and thus we have at least mismatches which contradicts the fact that Based on Theorem 2 we can design a second algorithm that works in a similar way to Algorithm 1. The major difference between these algorithms is that in Algorithm 2 we are not required to do comparisons with each hash table entries mutant list. This means we have comparisons within each table entry which yields a total time complexity of where A third theorem was also briefly mentioned [7]; however, it was not implemented in an algorithm. We use this theorem to create a third algorithm to solve the unique oligos search problem. Theorem 3. Suppose one has two -mers and such that . If one divides them both into substrings, and , and each , except possibly , has length , then there exists at least one , such that . Proof. Suppose by contradiction that for any , and have at least 3 mismatches. Then which is a contradiction to the fact that . The algorithm is somewhat similar to Algorithm 1. The main difference is that we compare every -mer to -mers in its corresponding 2-mutant list, rather than its 1-mutant list. Each -mer has 2-mutants, so we have comparisons for each entry in the hash table yielding a total time complexity of where It is important to note the term in the denominator of our time complexity expressions. Since this term is exponential, it will have the largest impact on the time taken to run our algorithms. Based on this observation, we expect Algorithm 3 to run the fastest, followed by Algorithm 1 and then Algorithm 2. 3. Implementation We implement these algorithms using C on a machine with 12 Intel Core i7 CPU 80 @ 3.33GHz processors and 12GB of memory. The datasets we use in this implementation are Barley ESTs taken from the genetic software HarvEST by Steve Wanamaker and Timothy Close of the University of California, Riverside (http://harvest.ucr.edu/). We use two different EST databases, one with 78 ESTs and another with 2838. In our experiments we search for oligos of lengths 27 and 28 since they are common lengths for oligonucleotides. As we increase the size of the database, we see that Algorithm 3 is the most efficient as anticipated (data shown in Tables 1 and 2). One important thing to note about all of these algorithms is the fact that the main portion of them is a for loop which iterates through each index of the hash table. It is also obvious that loop iterations are independent of each other. These two factors make the algorithms perfect candidates for parallelism. Rather than process the hash table one index at a time, our parallel algorithms process groups of indices simultaneously. Ignoring the communication between processors, our algorithms optimally parallelize our three serial algorithms. There are many APIs in different programming languages that aid in the task of parallel programming. Some examples of this in the C programming language are OpenMP and POSIX Pthreads. OpenMP allows one to easily parallelize a C program amongst multiple cores of a multicore machine [8]. OpenMP also has an extension called Cluster OpenMP which allows one to parallelize across multiple machines in a computing cluster. A new trend in parallel programming is in the use of GPUs. GPUs are the processing units inside computers graphics card. C has several APIs which allow one to carry out GPU programming. The two such APIs are OpenCL and CUDA [9, 10]. In the second implementation of our algorithms we use OpenMP to parallelize our algorithms throughout the 12 cores of our machine. We can easily see that we achieve near optimal parallelization with our parallel algorithms; that is, the time taken by the parallel algorithms is approximately that of the serial algorithms divided by the number of processors. 4. Conclusion In this paper we used three algorithms to solve the unique oligos search problem which are extensions of the algorithm presented in [2]. We observed that we can achieve a significant performance improvement by parallelizing our algorithms. We can also see that Algorithm 3 yields the best results for larger databases. For smaller databases, however, the time difference between each pair of algorithms is negligible, but results in Algorithm 3 being the slowest, and this is due to the time required to compute the mismatches of each -mer. Other algorithms can be obtained by setting to different values. See Algorithms 1, 2, 3, 4, 5, 6, 7, and 8. 1. M. D. Adams, J. M. Kelley, J. D. Gocayne et al., “Complementary DNA sequencing: expressed sequence tags and human genome project,” Science, vol. 252, no. 5013, pp. 1651–1656, 1991. View at Scopus 2. J. Zheng, T. J. Close, T. Jiang, and S. Lonardi, “Efficient selection of unique and popular oligos for large EST databases,” Bioinformatics, vol. 20, no. 13, pp. 2101–2112, 2004. View at Publisher · View at Google Scholar · View at Scopus 3. S. H. Nagaraj, R. B. Gasser, and S. Ranganathan, “A hitchhiker's guide to expressed sequence tag (EST) analysis,” Briefings in Bioinformatics, vol. 8, no. 1, pp. 6–21, 2007. View at Publisher · View at Google Scholar · View at Scopus 4. W. Klug, M. Cummings, and C. Spencer, Concepts of Genetics, Prentice-Hall, Upper Saddle River, NJ, USA, 8th edition, 2006. 5. F. Li and G. D. Stormo, “Selection of optimal DNA oligos for gene expression arrays,” Bioinformatics, vol. 17, no. 11, pp. 1067–1076, 2001. View at Scopus 6. S. Rahmann, “Rapid large-scale oligonucleotide selection for microarrays,” in Proceedings of the 1st IEEE Computer Society Bioinformatics Conference (CSB '02), pp. 54–63, IEEE Press, Stanford, Calif, USA, 2002. 7. S. Go, Combinatorics and its applications in DNA analysis [M.S. thesis], Department of Mathematics and Statistics, Memorial University of Newfoundland, 2009. 8. OpenMP.org, 2012, http://openmp.org/wp/. 9. Khronos Group, “OpenCL—The open standard for parallel programming of heterogeneous systems,” 2012, http://www.khronos.org/opencl/. 10. Nvidia, “Parallel Programming and Computing Platform—Cuda—Nvidia,” 2012, http://www.nvidia.com/object/cuda_home_new.html.
{"url":"http://www.hindawi.com/journals/abi/2013/793130/","timestamp":"2014-04-20T05:00:33Z","content_type":null,"content_length":"166529","record_id":"<urn:uuid:d5cb5e1d-0dbc-4334-96e8-4f6950dc95e7>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00533-ip-10-147-4-33.ec2.internal.warc.gz"}
What Does the Classifying Space of a 2-Category Classify? Posted by Urs Schreiber My personal spy has just returned from the Nordic Conference in Topology that took place last week. I hear that Tore A. Kro has new notes on his work with N. Baas and M. Bökstedt available online N. Baas, M. Bökstedt, T. A. Kro 2-categorical K-theories. They try to answer the question: What does the classifying space of a 2-category classify? Their answer is: for sufficiently well behaved topological 2-categories $C$, the nerve of $C$ is the classifying space for charted $C$-bundles. Here a charted $C$-bundle is essentially like what one would call the transition data for a 2-groupoid bundle #. The only difference is that no invertibility in $C$ is assumed. As a consequence, transition functions may go from patch $i$ to patch $j$, but not the other way around. The main application of this theory in these notes is a proof of the previously announced claim, that for $C$ the 2-category of Kapranov-Voevodsky 2-vector spaces the classifying space is the 2K-theory introduced by Baas, Dundas and Rognes. For $C$ the 2-category of Baez-Crans 2-vector spaces the classifying space is two copies of ordinary K-theory. Posted at December 4, 2006 3:28 PM UTC Re: What does the Classifying Space of a 2-Category classify? The Baas, Dundas, and Rognes paper is here. Posted by: Allen Knutson on December 4, 2006 6:47 PM | Permalink | Reply to this Re: What does the Classifying Space of a 2-Category classify? The Baas, Dundas, and Rognes paper is here. Thanks. I should have included more links. I did collect a couple of related links here. A transcript of a summary talk by Birgit Richter on the BDR work is given here: Seminar on 2-Vector Bundles and Elliptic Cohomology, II, III, IV, V. Posted by: urs on December 4, 2006 8:42 PM | Permalink | Reply to this Re: What does the Classifying Space of a 2-Category classify? Thank goodness that Urs has his own mathematical MI-6! 2-Categorical K-theories are certainly something very interesting to think about. I will try to understand better this idea, with the help of the useful `Seminar on 2-Vector Bundles and Elliptic Cohomology’. Posted by: Bruce Bartlett on December 4, 2006 8:53 PM | Permalink | Reply to this Re: What does the Classifying Space of a 2-Category classify? On p. 5, in example 2.4, the authors mention String bundles as those classified by a 2-category they call $2S$. Maybe I didn’t look at these notes closely enough, but I did not see the definition of $2S$. I would expect that the classifying space for $\mathrm{String}_G$ bundles, for fixed Lie group $G$, is the realization of the nerve of the following sub-2-category of $\mathrm{Bim}(\mathrm{Hilb})$: objects are algebras Morita equivalent to the algebras generated by positive energy representations of $\L_k G$, morphisms are bimodules for these algebras and 2-morphisms are bimodule homomorphisms. On p. 4, example 2.2, a 2-category is introduced whose classifying space classifies line bundle gerbes. The category has a single object, has $\mathbb{CP}^\infty$ worth of 1-morphisms and a circle worth of 2-morphisms between any pair of 1-morphisms. Noticing that $\mathbb{CP}^\infty \simeq BU(1)$ it seems to me that this category is essentially the one I described in a little essay called How many Circles are there in the World?. Posted by: urs on December 5, 2006 12:10 PM | Permalink | Reply to this Re: What does the Classifying Space of a 2-Category classify? The only difference is that no invertibility in C is assumed. As a consequence, transition functions may go from patch i to patch j, but not the other way around.The lack of inverses is familiar from the case of fibrations in which instead of a group one has only the topological monoid of self-homotopy equivalences of the fibre. The patching should be replaced by a mapping cylinder. Alternatively the transport along paths has no need to cancel. Posted by: jim stasheff on December 5, 2006 4:57 PM | Permalink | Reply to this Re: What does the Classifying Space of a 2-Category classify? What is a topological 2-category? It doesn’t seem to be defined in `2-vector bundles and forms of elliptic cohomology’ or `2-categorical K-theories’. Posted by: Bruce Bartlett on December 5, 2006 3:11 PM | Permalink | Reply to this Re: What does the Classifying Space of a 2-Category classify? What is a topological 2-category? I’d guess they use it to mean a 2-category internal to topological spaces. I.e. a topological space of $n$-morphisms for $0 \leq n \leq 2$ with all source-, target- and composition maps continuous. Posted by: urs on December 5, 2006 3:22 PM | Permalink | Reply to this Re: What does the Classifying Space of a 2-Category classify? Right - thanks. Its just that there is another notion of `topological 2-category’ inspired by John’s `HDA II : 2-Hilbert spaces’ paper. It works as follows. Firstly, I assume everyone is familiar with the Grothendieck paradigm relating n-groupoids and topology: (1)$\text{n-groupoids} \leftrightarrow \text{homotopy n-types}.$ There is a `quantum cousin’ of this which relates topology, algebra, duality and higher categories: (2)$\array{ & \text{Topology} & \text{Algebra} \\ & & \\ n=0 & \text{topological spaces} & \text{commutative H*-algebras} \\ n=1 & \text{topological groupoids} & \text{symmetric 2-H*-algebras} \\ n=2 & \text{topological 2-groupoids} & \text{symmetric 3-H*-algebras} }$ and so on (By the way, all this weird stuff about $n-H^*$-algebras can be found in the introduction of HDA II). Lets see how it works. When $n=0$, it is just the standard Gelfand-Naimark theorem relating topological spaces with commutative $H^*$-algebras. (Of course, its really compact Haussdorff topological spaces and so on, but lets gloss over these details for the moment). When $n=1$, it’s the categorified Gelfand-Naimark theorem. This relates topological groupoids to symmetric 2-$H^*$-algebras. In this context, a topological groupoid is just a groupoid whose hom-sets are topological spaces. The objects don’t carry a topology! A symmetric 2-$H^*$-algebra is just a nice linear, monoidal category with direct sums. The categorified Gelfand-Naimark theorem says these structures are the same. In the one direction you take `Rep’ while in the other you take `Spec’. And the pattern continues. The important thing is that an $n$-groupiod in this context is an $n$-category, all of whose objects, morphisms, 2-morphisms etc. are weakly invertible, and whose $n$ -morphisms carry a topology. There is *no* topology on the lower morphisms. The idea is that as $n \rightarrow \infty$, we are `bumping’ topology out of the game! Thus a topological $\infty$-groupoid will be the same thing as a discrete $\infty$-groupoid. At this point we’ll have consummated Grothendieck’s dream - for we’ll have an equivalence between topological spaces, $\infty$-groupoids and *quantum* $\infty$-categories (linear $\infty$-categories with duals). Anyhow, the point is that there is a context in which a `topological 2-category’ could be conceived of differently. Posted by: Bruce Bartlett on December 5, 2006 5:06 PM | Permalink | Reply to this Re: What does the Classifying Space of a 2-Category classify? A good example of a topological groupoid, in the sense that the objects are discrete but the morphisms are topological spaces, is Cohen, Jones and Segal’s flow-line groupoid $C_f$ associated to a Morse function $f$ on a manifold $X$. This comes from their paper on Morse theory and classifying spaces. Recall that the objects of $C_f$ are the critical points of $f$, and the hom-sets are the flow lines. Posted by: Bruce Bartlett on December 5, 2006 5:48 PM | Permalink | Reply to this Re: What does the Classifying Space of a 2-Category classify? Anyhow, the point is that there is a context in which a `topological 2-category’ could be conceived of differently. Very worthwhile remark! In fact, maybe one should have a closer look at what precisely Kro and collaborators call a topological 2-category. Certainly, they want KV-2-vector spaces to form a topological 2-category, also in the semi-skeletal version where the collection of objects is the natural numbers and a morphism from $n$ to $m$ is an $m\times n$ matrix with entries being vector spaces. This is rather similar to the example you mention: A good example of a topological groupoid, in the sense that the objects are discrete but the morphisms are topological spaces But I guess in the case of KV-2-vector space we do get a topological 2-category in the sense internal to $\mathrm{Top}$, in that, for instance, source and target maps on 1-morphisms are indeed constant on connected components of the space of 1-morphisms. Posted by: urs on December 5, 2006 7:06 PM | Permalink | Reply to this Re: What does the Classifying Space of a 2-Category classify? ‘Topological category’ is used to mean two different things: • categories internal to Top, which have a space of objects and a space of morphisms, and • categories enriched in Top, which have a set of objects and, for any two objects $x$ and $y$, a space of morphisms $hom(x,y)$. The latter is the special case of the former where the space of objects has the discrete topology. In general, whenever we have a category $K$ with finite limits, we can define both categories internal to $K$ and categories enriched in $K$: • categories internal to $K$, which have a $K$-object of objects and a $K$-object of morphisms, and • categories enriched in $K$, which have a set of objects and, for any two objects $x$ and $y$, a $K$-object of morphisms from $x$ to $y$, called $hom(x,y)$. When we have a functor $F : Set \to K$ which preserves finite limits, we can use this to turn any category enriched in $K$ into a category internal to $K$. That’s what we’re doing above, where $F : Set \to Top$ sends any set to that space regarded as a set with the discrete topology. In general, for $n$-categories, we expect $n+2$ different levels of internalization. At one extreme we should have ‘complete internalization’, where there’s a $K$-object of objects, a $K$-object of morphisms, and so on up to a $K$-object of $n$-morphisms. At the other extreme we have plain old $n$-categories, where there’s a set of objects, a set of morphisms and so on. Right next to that other extreme we have ‘enrichment’, where we have a set of objects, a set of morphisms, and so on — but for any two parallel $(n-1)$-morphisms $f$ and $g$ we have an object in $K$ called $hom(f,g)$. When $n = 1$ we just have three choices: categories internal to $K$, categories enriched in $K$, and plain old categories. I think this is cool. Posted by: John Baez on December 6, 2006 1:56 AM | Permalink | Reply to this Re: What does the Classifying Space of a 2-Category classify? Such groupoids enriched in $\mathbf{Top}$ are much like `simplicial groupoids’ - really groupoids enriched in $\mathbf{sSet}$, and these model all unpointed homotopy types. There is a paper by Zhi-Ming Luo (math.AT/0301045) relating (presheaves of) simplicially enriched groupoids and 2-groupoids, but not simplicially-enriched 2-groupoids. Interesting … Posted by: David Roberts on December 6, 2006 2:38 AM | Permalink | Reply to this Re: What does the Classifying Space of a 2-Category classify? A topological 2-cat - is it not a 2-cat in which all the structures are topological spaces or continuous maps? Posted by: jim stasheff on December 5, 2006 4:48 PM | Permalink | Reply to this Re: What does the Classifying Space of a 2-Category classify? A topological 2-cat - is it not a 2-cat in which all the structures are topological spaces or continuous maps? I think so #. Unless these authors have redefined this somewhere in the context of their work. But I am not aware of any such redefinition. Posted by: urs on December 5, 2006 4:52 PM | Permalink | Reply to this Re: What does the Classifying Space of a 2-Category classify? Jim wrote: A topological 2-cat - is it not a 2-cat in which all the structures are topological spaces or continuous maps? This is one meaning, and probably by far the most common one. If you want to be painfully unambiguous, you can call this a 2-category internal to Top. In the case of 1-categories, people often use K-category to mean a category enriched in K, and category in K to mean a category internal to K. I described the difference here. In the case of 2-categories there are even more layers of distinction, but not many people seem to realize this yet. Posted by: John Baez on December 6, 2006 2:05 AM | Permalink | Reply to this Re: What does the Classifying Space of a 2-Category classify? Hi, Bruce! If you want to be disgustingly cool, you’ll write quotes like ‘this’ instead of like `this’. That’s one difference between this environment and TeX. Posted by: John Baez on December 6, 2006 2:15 AM | Permalink | Reply to this Re: What does the Classifying Space of a 2-Category classify? Great - thanks for the tip! The next cool thing I’d like to see is the ability to include .eps files into one’s comments… or perhaps an \xymatrix environment :-) Posted by: Bruce Bartlett on December 6, 2006 12:03 PM | Permalink | Reply to this Re: What does the Classifying Space of a 2-Category classify? The next cool thing I’d like to see is the ability to include .eps files into one’s comments… or perhaps an \xymatrix environment :-) You can include pictures, using the ordinary HTML img tag. Somebody should write a script that reads in xypic code, runs it through a LaTeX compiler, and transforms the result into a .jpg or something. Posted by: urs on December 6, 2006 12:14 PM | Permalink | Reply to this Re: What does the Classifying Space of a 2-Category classify? Bruce asked: What is a topological 2-category? Along with all of the answers above, it’s worth knowing that there is another (completely different) meaning of ‘topological category’ that I’ve soon used point-set topologists (who, like most working mathematicians, are using catgory theory as a language to point out useful features of specific categories). To them, a topological category is a category equipped with a faithful functor to Set that creates all limits and colimits. Examples include the category of topological spaces, the category of uniform spaces, and the category of convergence spaces. Examples do not include any category with homotopy-equivalent maps identified, nor the category of locales, nor any category of smooth spaces as far as I can tell. (These are point-set topologists, after all!) Posted by: Toby Bartels on December 6, 2006 11:09 PM | Permalink | Reply to this Behind the scenes we talked about this: Some definitions in the above notes appear only after the terms defined appear in the theorems. One such definition is concordance. This is defined in definition 7.1 on p. 28. Two “2-bundles” (really: local transition data of 2-bundles) on $X$ are said to be concordant if there is a 2-bundle on $X \times \mathrm{interval}$ which restricts to the given ones on the boundary. That’s an equivalence relation, and concordance classes are therefore denoted $\mathrm{Con}(X,\mathrm{something})$. That’s what one sees appear, for instance, in example 2.4 on p. 5. (By the way: I greatly prefer anonymous comments over no comments at all. If you don’t feel like transmitting what you consider private communication over the entire web, with your name attached, but if you do feel like commenting on anything we talk about here, please consider dropping us an anonymous comment. ) Posted by: urs on December 5, 2006 3:48 PM | Permalink | Reply to this Re: What Does the Classifying Space of a 2-Category Classify? Would it be worth checking what the associated charted $2 C$-bundles for 2-categories, $2 C$, of other 2-vector spaces? Baas et al. cover Kapranov-Voevodsky and Baez-Crans versions, which leaves Elgueta and other versions. Posted by: David Corfield on December 7, 2006 1:47 AM | Permalink | Reply to this Re: What Does the Classifying Space of a 2-Category Classify? Would it be worth checking what the associated charted $2C$-bundles for 2-categories, $2C$, of other 2-vector spaces? I certainly think so, and I have ranted on that several times, for instance here. The thing is this: while it is quite interesting that the “K-theory” of Baez-Crans 2-vector bundles is $K\times K$ and that of Kapranov-Voevodsky is $\mathrm{BDR}-2K$, the original hope was that there is a kind of 2-vector bundle such that its K-theory is something more closely resembling elliptic cohomology, somehow. This goal has not been achived yet, as far as I am aware. And I argued that this is maybe no wonder: while the notions of 2-vector spaces used so far in these studies all have their raison d’être, they are all comparatively restricted, as compared with the most general notion of 2-vector space one would imagine. Baez-Crans 2-vector spaces are $\mathrm{Disc}(k)$-module categories. This is “relatively restricted” because $\mathrm{Disc}(k)$ (the discrete monoidal category over a field $k$) is so puny. Kapranov-Voevodsky 2-vector spaces are module categories for something a little bigger, namely for $\mathrm{Vect}$. But they are just a tiny subset of all $\mathrm{Vect}$-module categories. So, here is my first $n$-Café Millenium Prize: One million Microeuros for the first one to compute the classifying space of charted $2C$-bundles with (1)$2C := \mathrm{Bim}(\mathrm{Vect}) \,.$ Posted by: urs on December 7, 2006 9:27 AM | Permalink | Reply to this Re: What Does the Classifying Space of a 2-Category Classify? The second $n$-Café Millenium Prize is to categorify the Generalized Tangle Hypothesis. We have a classifying space for a 2-group. So now find a categorified Thom construction, using the best 2-vector spaces on offer. What kind of thing has a normal 2-bundle with structure? Posted by: David Corfield on December 7, 2006 4:20 PM | Permalink | Reply to this Re: What Does the Classifying Space of a 2-Category Classify? So, Urs, what might correspond in the case of your favoured 2-vector spaces to the completion of a $k$-dimensional real vector space as a $k$-sphere? Presumably, the answer for a skeletal $(p, q)$ Baez-Crans 2-vector space would be a $q$-sphere bundle over a $p$-sphere. Posted by: David Corfield on December 12, 2006 10:27 AM | Permalink | Reply to this Re: What Does the Classifying Space of a 2-Category Classify? So, Urs, what might correspond in the case of your favoured 2-vector spaces to the completion of a $k$-dimensional real vector space as a $k$-sphere? You, John and others have, in the meantime, thought much more about this than I have. I can try to say something, but the risk is that I throw the discussion back to a point you have long passed. Anyway. If I understand you correctly, you are asking what the categorification of a projective space would be. For an ordinary $k$-vector space $V$, and for $k^\times$ the multiplicative group inside $k$ (i.e. $k$ without 0), we have a $k^\times$-action on $V$ (1)$k^\times \times V \to V$ and passing to the projective space amounts to “dividing out” this action (2)$P V \simeq V/{k^\times} \,.$ My instinct would be to try to categorify this in the more or less obvious way. I am considering 2-vector spaces to be suitable module categories for an action by a (usually abelian and braided) monoidal category (4)$C \,.$ Inside $C$, we find the Picard 2-group $P(C)$. In terms of its suspension, this is simply the core of $\Sigma(C)$. So $P(C)$ has all those objects of $C$ which have a dual object and all isomorphism between these. The action (5)$C\times V \to V$ hence restricts to an action (6)$P(C) \times V \to V$ and we can think of this as the action of the (weak, in general) Picard 2-group on $V$. I guess it makes sense to try to divide out by this group action and address the result as a 2-projective space. We should then also talk about how exactly to define the quotient of a category by the action of a 2-group. But let me put that aside for the moment. The remaing question then is to identitfy the Picard 2-groups for various monoidal categories $C$. For Baez-Crans 2-vector spaces, which should be module categories for (7)$C = \mathrm{Disc}(k)$ the Picard 2-group is just (8)$P(C) = \mathrm{Disc}(k^\times) \,.$ For Kapranov-Voevodsky 2-vector spaces we have (9)$C = \mathrm{Vect}_k$ and hence (10)$P(C) = 1d\mathrm{Vect}_k \,.$ Same for general $\mathrm{Vect}$-module categories. Hm, I have the vague recollection that we were that far long before already. Posted by: urs on December 12, 2006 10:55 AM | Permalink | Reply to this Re: What Does the Classifying Space of a 2-Category Classify? Sorry, I wasn’t very clear. I was thinking about how to categorify the Thom Space construction, part of which involves forming a sphere bundle from a vector bundle. I was asking what the equivalent move might be for a 2-vector 2-bundle. Posted by: David Corfield on December 12, 2006 1:30 PM | Permalink | Reply to this Re: What Does the Classifying Space of a 2-Category Classify? how to categorify the Thom Space construction Ah, I see. Hm, we’d need an arrow-theoretic formulation of what it means to form a one-point compactification of a vector space. Any ideas? Posted by: urs on December 12, 2006 1:36 PM | Permalink | Reply to this Re: What Does the Classifying Space of a 2-Category Classify? Well, here’s Toby telling us that: The Stone Cech compactification functor from the category of topological spaces to the category of compact topological spaces is the left adjoint of the inclusion functor. Posted by: David Corfield on December 12, 2006 4:30 PM | Permalink | Reply to this Re: What Does the Classifying Space of a 2-Category Classify? The Stone Cech compactification functor from the category of topological spaces to the category of compact topological spaces is the left adjoint of the inclusion functor. Ah, great. So what we need next, then, is a notion of topological 2-space and compact topological 2-space. But that we essentially talked about above. I’d say it looks like a save move to start by declaring that a compact topological 2-space is just a category enriched over - or internalized in - compact topological spaces. If so, we have an obvious inclusion 2-functor from compact topological 2-spaces to topological 2-spaces. This might likely have a weak adjoint. And what you are looking for should be the action of this weak adjoint on a given kind of 2-vector bundle. So now it looks as if all the abstract ingredients we need are there. But apparently work is required for actually carrying through this procedure for a given case. Posted by: urs on December 12, 2006 4:44 PM | Permalink | Reply to this Re: What Does the Classifying Space of a 2-Category Classify? Now which varieties of vector 2-spaces are topological 2-spaces? Posted by: David Corfield on December 12, 2006 5:07 PM | Permalink | Reply to this Re: What Does the Classifying Space of a 2-Category Classify? Now which varieties of vector 2-spaces are topological 2-spaces? Most of them are naturally enriched over something topological. Whether there is something more interesting than the discrete topology on the objects may depend. Baez-Crans 2-vector spaces are, being categories internal to $\mathrm{Vect}$ automatically also categories internal to $\mathrm{Top}$, as long as we have the standard topology on our (finite-dimensional) vector spaces. Kapranov-Voevodsky 2-vector spaces are also “topological categories”, this is the example that got the discussion above started. To think of an object in $\mathrm{Bim}(\mathrm{Vect})$ – an algebra $A$ – as a topological category we should think of it in terms of its image under $\mathrm{Bim}(\mathrm{Vect}) \stackrel{\subset}{\ to} {}_{\mathrm{Vect}}\mathrm{Mod}$, where it becomes the category of $A$-modules. This is naturally enriched over $\mathrm{Top}$, as morphisms here are linear spaces. So I think about all flavors of 2-vector spaces that one would think of are topological categories. Posted by: urs on December 12, 2006 5:22 PM | Permalink | Reply to this Re: What Does the Classifying Space of a 2-Category Classify? There are other ways to think of Thom spaces that are more natural (i.e. that are already meaningful both in differential geometry and in algebraic geometry): if E is a vector bundle on X, the Thom space of E is the quotient of E by the complement E-s(X) of the zero section s of E. This is the point of view adopted by Morel and Voevodsky in their homotopy theory of schemes (its naturality is due the strong link of this object with Grothendieck’s six operations). The interest of this is that we don’t need any metric. A very deep feature of Thom spaces is their link with projective spaces. Hence the questions: what is the 2-projective space of dimension n? How to associate to a 2-vector bundle E a 2-projective space P(E)? Posted by: Denis-Charles Cisinski on December 12, 2006 5:06 PM | Permalink | Reply to this Re: What Does the Classifying Space of a 2-Category Classify? the quotient of $E$ by the complement $E-s(X)$ Hm, how is this quotient formed? I am not sure I understand which quotient is meant. Posted by: urs on December 12, 2006 5:15 PM | Permalink | Reply to this Re: What Does the Classifying Space of a 2-Category Classify? To form the Thom space of a vector bundle $E$, you add a ‘point at infinity’ to it. In the simplest cases it amounts to this: you take each fiber $E_x$ and add a point at infinity to it to get a sphere; then you identify all these points at infinity. This has the effect of making the complement of the zero section of $E$ contractible. That’s what Denis-Charles meant by ‘taking the quotient of $E$ by the complement of the zero section’. Given a subspace $A \subseteq X$, topologists write $X/A$ to mean the result of collapsing $A$ to a point — an honest quotient. But, you can also imagine a ‘homotopy quotient’ where you glue on just enough stuff to $X$ to make $A$ contractible, and that’s what’s going on here. Posted by: John Baez on February 4, 2007 8:45 PM | Permalink | Reply to this Re: What Does the Classifying Space of a 2-Category Classify? That’s what Denis-Charles meant by ‘taking the quotient of E by the complement of the zero section’. Thanks! Cool that you rememebered this question after such a long while. Posted by: urs on February 5, 2007 2:29 PM | Permalink | Reply to this Re: What Does the Classifying Space of a 2-Category Classify? 2-categorical $K$-theories is now out on the ArXiv. There are some differences from the version mentioned in the post. In particular, on page 6 we hear about foam bundles, a construction which draws inspiration from work by J. C. Baez and S. Galatius. Posted by: David Corfield on December 20, 2006 10:24 AM | Permalink | Reply to this
{"url":"http://golem.ph.utexas.edu/category/2006/12/my_personal_spy_has_just.html","timestamp":"2014-04-16T22:56:02Z","content_type":null,"content_length":"104224","record_id":"<urn:uuid:0fbc51c4-58db-47cf-81f5-d53242f0cf7a>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00301-ip-10-147-4-33.ec2.internal.warc.gz"}
Using Norms to Understand Linear Regression In my last post, I described how we can derive modes, medians and means as three natural solutions to the problem of summarizing a list of numbers, \((x_1, x_2, \ldots, x_n)\), using a single number, \(s\). In particular, we measured the quality of different potential summaries in three different ways, which led us to modes, medians and means respectively. Each of these quantities emerged from measuring the typical discrepancy between an element of the list, \(x_i\), and the summary, \(s\), using a formula of the form, \sum_i |x_i – s|^p, where \(p\) was either \(0\), \(1\) or \(2\). The \(L_p\) Norms In this post, I’d like to extend this approach to linear regression. The notion of discrepancies we used in the last post is very closely tied to the idea of measuring the size of a vector in \(\ mathbb{R}^n\). Specifically, we were minimizing a measure of discrepancies that was almost identical to the \(L_p\) family of norms that can be used to measure the size of vectors. Understanding \ (L_p\) norms makes it much easier to describe several modern generalizations of classical linear regression. To extend our previous approach to the more standard notion of an \(L_p\) norm, we simply take the sum we used before and rescale things by taking a \(p^{th}\) root. This gives the formula for the \ (L_p\) norm of any vector, \(v = (v_1, v_2, \ldots, v_n)\), as, |v|_p = (\sum_i |v_i|^p)^\frac{1}{p}. When \(p = 2\), this formula reduces to the familiar formula for the length of a vector: |v|_2 = \sqrt{\sum_i v_i^2}. In the last post, the vector we cared about was the vector of elementwise discrepancies, \(v = (x_1 – s, x_2 – s, \ldots, x_n – s)\). We wanted to minimize the overall size of this vector in order to make \(s\) a good summary of \(x_1, \ldots, x_n\). Because we were interested only in the minimum size of this vector, it didn’t matter that we skipped taking the \(p^{th}\) root at the end because one vector, \(v_1\), has a smaller norm than another vector, \(v_2\), only when the \(p^{th}\) power of that norm smaller than the \(p^{th}\) power of the other. What was essential wasn’t the scale of the norm, but rather the value of \(p\) that we chose. Here we’ll follow that approach again. Specifically, we’ll again be working consistently with the \(p^{th}\) power of an \(L_p\) norm: |v|_p^p = (\sum_i |v_i|^p). The Regression Problem Using \(L_p\) norms to measure the overall size of a vector of discrepancies extends naturally to other problems in statistics. In the previous post, we were trying to summarize a list of numbers by producing a simple summary statistic. In this post, we’re instead going to summarize the relationship between two lists of numbers in a form that generalizes traditional regression models. Instead of a single list, we’ll now work with two vectors: \((x_1, x_2, \ldots, x_n)\) and \((y_1, y_2, \ldots, y_n)\). Because we like simple models, we’ll make the very strong (and very convenient) assumption that the second vector is, approximately, a linear function of the first vector, which gives us the formula: y_i \approx \beta_0 + \beta_1 x_i. In practice, this linear relationship is never perfect, but only an approximation. As such, for any specific values we choose for \(\beta_0\) and \(\beta_1\), we have to compute a vector of discrepancies: \(v = (y_1 – (\beta_0 + \beta_1 x_1), \ldots, y_n – (\beta_0 + \beta_1 x_n))\). The question then becomes: how do we measure the size of this vector of discrepancies? By choosing different norms to measure its size, we arrive at several different forms of linear regression models. In particular, we’ll work with three norms: the \(L_0\), \(L_1\) and \(L_2\) norms. As we did with the single vector case, here we’ll define discrepancies as, d_i = |y_i – (\beta_0 + \beta_1 x_i)|^p, and the total error as, E_p = \sum_i |y_i – (\beta_0 + \beta_1 x_i)|^p, which is the just the \(p^{th}\) power of the \(L_p\) norm. Several Forms of Regression In general, we want estimate a set of regression coefficients that minimize this total error. Different forms of linear regression appear when we alter the values of \(p\). As before, let’s consider three settings: E_0 = \sum_i |y_i – (\beta_0 + \beta_1 x_i)|^0 E_1 = \sum_i |y_i – (\beta_0 + \beta_1 x_i)|^1 E_2 = \sum_i |y_i – (\beta_0 + \beta_1 x_i)|^2 What happens in these settings? In the first case, we select regression coefficients so that the line passes through as many points as possible. Clearly we can always select a line that passes through any pair of points. And we can show that there are data sets in which we cannot do better. So the \(L_0\) norm doesn’t seem to provide a very useful form of linear regression, but I’d be interested to see examples of its use. In contrast, minimizing \(E_1\) and \(E_2\) define quite interesting and familiar forms of linear regression. We’ll start with \(E_2\) because it’s the most familiar: it defines Ordinary Least Squares (OLS) regression, which is the one we all know and love. In the \(L_2\) case, we select \(\beta_0\) and \(\beta_1\) to minimize, E_2 = \sum_i (y_i – (\beta_0 + \beta_1 x_i))^2, which is the summed squared error over all of the \((x_i, y_i)\) pairs. In other words, Ordinary Least Squares regression is just an attempt to find an approximating linear relationship between two vectors that minimizes the \(L_2\) norm of the vector of discrepancies. Although OLS regression is clearly king, the coefficients we get from minimizing \(E_1\) are also quite widely used: using the \(L_1\) norm defines Least Absolute Deviations (LAD) regression, which is also sometimes called Robust Regression. This approach to regression is robust because large outliers that would produce errors greater than \(1\) are not unnecessarily augmented by the squaring operation that’s used in defining OLS regression, but instead only have their absolute values taken. This means that the resulting model will try to match the overall linear pattern in the data even when there are some very large outliers. We can also relate these two approaches to the strategy employed in the previous post. When we use OLS regression (which would be better called \(L_2\) regression), we predict the mean of \(y_i\) given the value of \(x_i\). And when we use LAD regression (which would be better called \(L_1\) regression), we predict the median of \(y_i\) given the value of \(x_i\). Just as I said in the previous post, the core theoretical tool that we need to understand is the \(L_p\) norm. For single number summaries, it naturally leads to modes, medians and means. For simple regression problems, it naturally leads to LAD regression and OLS regression. But there’s more: it also leads naturally to the two most popular forms of regularized regression. If you’re not familiar with regularization, the central idea is that we don’t exclusively try to find the values of \(\beta_0\) and \(\beta_1\) that minimize the discrepancy between \(\beta_0 + \ beta_1 x_i\) and \(y_i\), but also simultaneously try to satisfy a competing requirement that \(\beta_1\) not get too large. Note that we don’t try to control the size of \(\beta_0\) because it describes the overall scale of the data rather than the relationship between \(x\) and \(y\). Because these objectives compete, we have to combine them into a single objective. We do that by working with a linear sum of the two objectives. And because both the discrepancy objective and the size of the coefficients can be described in terms of norms, we’ll assume that we want to minimize the \(L_p\) norm of the discrepancies and the \(L_q\) norm of the \(\beta\)’s. This means that we end up trying to minimize an expression of the form, (\sum_i |y_i – (\beta_0 + \beta_1 x_i)|^{p}) + \lambda (|\beta_1|^q). In most regularized regression models that I’ve seen in the wild, people tend to use \(p = 2\) and \(q = 1\) or \(q = 2\). When \(q = 1\), this model is called the LASSO. When \(q = 2\), this model is called ridge regression. In another approach, I’ll try to describe why the LASSO and ridge regression produce such different patterns of coefficients. 7 responses to “Using Norms to Understand Linear Regression” Great and accessible post, thanks John. It would be great to also explain L2 regularization as Gaussian prior on the parameters, and L1 regularization as a double-Exponential prior. As you noted in our exchange on twitter, that does seem to require bringing up a bunch of probability theory, which complicates the accessibility. One way to keep this as an optimization-focused note is to comment, as an aside, that the regularized objective functions are the log-likelihood functions of a probabilistic model, where the additive regularization is really just the log of the multiplicative prior on the likelihood function, before it is log’ed. Given a concise explanation of this connection, I think it’s helpful to then simple “note” that the L2 regularization “happens” to correspond to Gaussian prior and L1 “happens” to correspond to double-Exponential. In principle, this then gives a framework for dreaming up other regularizations that might correspond to meaningful priors on the parameters. Agreed. I think I’ll just add another post on the regularization / prior parallelism since I had a conversation about precisely that topic the other day and realized that the equivalence between the two ideas is less widely known than it should be. 2. Just a minor note: your formula for $\nu$ under “The Regression Problem” and the final displayed equation are missing some parens: you write $y_i – \beta_0 + \beta_1 x_i$ rather than $y_i – (\ beta_0 + \beta_1 x_i)$. Thanks for catching both of those! Great post! Makes everything intuitive and clear. L_0 can be applied to regularization. According to Hastie, Tibshirani and Friedman, on page 72 of ESL (2nd ed.), “… q = 0 corresponds to variable subset selection, as the penalty simply counts the number of nonzero parameters.” You’re totally right, but that involves applying an L0 penalty to the coefficients, rather than the residuals. In a future post I’ll try to describe the ways in which the L1 penalty is the best convex approximation to the L0 penalty — when applying them to the coefficients. I’m surprised you didn’t mention the L_\infty norm (Chebyshev norm), which leads to minimax/Chebyshev regression. Minimax regression to a linear fit (or other function that depends linearly on the parameters) leads to a nice LP problem. There is also robust least-squares, which minimizes the worst-case least-squares fit assuming some uncertainty in the data, and turns the problem into an SOCP. The L0 “norm” (zero norm) is a little bit of a misnomer, because it is not a norm in the usual sense (e.g. it doesn’t satisfy ||ax|| = |a| ||x||), and writing it down in the way you did above seems a bit odd because it requires that you define 0^0 = 0.
{"url":"http://www.johnmyleswhite.com/notebook/2013/03/22/using-norms-to-understand-linear-regression/","timestamp":"2014-04-18T08:02:29Z","content_type":null,"content_length":"28951","record_id":"<urn:uuid:e874d7bf-71fe-4d9e-958e-a16aad3bf3a2>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
Aug 10, 2008 1:40:05 PM (6 years ago) • v44 v45 130 130 * With !SubstFam and !SubstVar, we always substitute locals into wanteds and never the other way around. We perform substitutions exhaustively. For !SubstVar, this is crucial to avoid 131 131 * We should probably use !SubstVar on all variable equalities before using !SubstFam, as the former may refine the left-hand sides of family equalities, and hence, lead to Top being applicable where it wasn't before. * We use !SubstFam and !SubstVar to substitute wanted equalities '''only''' if their right-hand side contains a flexible type variables (which for variable equalities means that we apply 132 !SubstVar only to flexible variable equalities). '''TODO:''' This is not sufficient while we are inferring a type signature as SPJ's example shows: `|- a ~ [x], a ~ [Int]`. Here we want to infer `x := Int` before yielding `a ~ [Int]` as an irred. So, we need to use !SubstVar and !SubstFam also if the rhs of a wanted contains a flexible variable. This unfortunately makes termination more complicated.[DEL::DEL] * We use !SubstFam and !SubstVar to substitute wanted equalities '''only''' if their right-hand side contains a flexible type variables (which for variable equalities means that we apply 132 !SubstVar only to flexible variable equalities). '''TODO:''' This is not sufficient while we are inferring a type signature as SPJ's example shows: `|- a ~ [x], a ~ [Int]`. Here we want to infer `x := Int` before yielding `a ~ [Int]` as an irred. So, we need to use !SubstVar and !SubstFam also if the rhs of a wanted contains a flexible variable. This unfortunately makes termination more complicated. 134 134 Notes:
{"url":"https://ghc.haskell.org/trac/ghc/wiki/TypeFunctionsSolving?action=diff&version=45","timestamp":"2014-04-17T07:54:31Z","content_type":null,"content_length":"15991","record_id":"<urn:uuid:a3432234-1b11-459a-93ec-31e0e1fef2f2>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics (PH) Courses 1003 Fundamentals of Astronomy (3) F, SP Descriptive astronomy. The solar system, stars, galaxies. Prerequisite: At least 15 ACT or 360 SAT mathematics score or Mathematics 1020 with minimum grade of "C." Lecture, 2 hours; laboratory, 3 hours. 1013 Fundamentals of Physics (3) F, SP, SU Mechanics, heat, electricity, atomic and nuclear physics. Lecture, 2 hours; laboratory, 3 hours. Prerequisite: At least 15 ACT or 360 SAT mathematics score or Mathematics 1020 with minimum grade of “C.” Lecture, 2 hours; laboratory, 3 hours. 111V Special Topics in Physics (1-3) This course will concentrate on one or more topics from the field of physics. The topics will depend upon current interests of students and staff. While the presentation will be at an elementary level, an attempt will be made to cover the topic in depth and to establish connections to other branches of science and human affairs. Offered on demand. 1214, 1224 Elementary College Physics I, II (4, 4) 1214-F; 1224-SP, SU A non-calculus based introduction to physics, mechanics, fluids, heat and thermodynamics, electricity and magnetism, wave motion, sound, light, and atomic and nuclear physics. Courses must be taken in sequence. Prerequisite: Mathematics 1123. Lecture, 3 hours; laboratory, 3 hours. 2414 General Physics I (4) SP A calculus based introduction to general physics and its applications. Mechanics, heat, and sound. Prerequisite: Mathematics 1314 with a minimum grade of “C.” Lecture, 3 hours; laboratory, 3 hours. 2424 General Physics II (4) F A calculus based introduction to physics and its applications. Electricity and magnetism, optics, modern physics. Prerequisites: Mathematics 2314 and Physics 2414 with a minimum grade of “C.” Lecture, 3 hours; laboratory, 3 hours. 2434 Structure of Matter (4) SP Topics related to the modern physical theory of matter: experiment and theory related to quantum phenomena, relativity, and atomic and nuclear structure. Emphasis on condensed matter and material science appropriate for engineering curricula. Prerequisite: Physics 2424. Lecture, 3 hours; laboratory, 3 hours. 3051 Methods of Teaching Physics (3) Designed to acquaint education majors with techniques, demonstration equipment, and audio-visual aids for use in teaching physics. To be taken during Professional Semester. Offered on demand. 3303 Mechanics (3) F An introduction to classical mechanics with the use of vector calculus. Particle kinematics and dynamics, free and forced harmonic motion, conservative and central forces, angular momentum, introduction to the Lagrange and Hamilton formulations. Prerequisite: Physics 2424. Corequisite: Mathematics 3133. Lecture, 3 hours. 3403 Electromagnetic Fields (3) SP A study of electromagnetic fields beginning with Maxwell’s equations. Interactions with conductors and dielectric media; waveguides, antennas. Prerequisite: Physics 2424. Lecture, 3 hours. Same as Electrical and Computer Engineering 3403. 3503 Electromagnetic Fields II (3) F A continuation of PH 3403 to cover topics in electromagnetic radiation, waveguides, transmission lines, antennas, radiation from charged particles, and relativity in electromagnetism. Prerequisite: PH 3403. Lecture, 3 hours. 3603 Optics (3) F Geometrical and physical optics. Image formation, thick lenses, lens aberrations. Electromagnetic wave theory, interference, diffraction, dispersion. Interaction of light with matter. Prerequisite: Physics 2424. Lecture, 3 hours. Offered on demand. 3703 Thermodynamics (3) F Concepts, models and laws; energy and the first law; properties and state; energy analysis of thermodynamics systems; entropy and the second law; conventional power and refrigeration cycles. Prerequisites: Chemistry 1113, 1211, Mathematics 2324, Physics 2414. Lecture, 3 hours. Same as Mechanical Engineering 3703. 3903 Introduction to Biomedical Physics (3) F Historical perspectives and the field of biomedical physics; overview of anatomy and physiology; basic principles of bioelectric phenomena; biomechanics and biofluidmechanics; sound and hearing; vision; radiation and imaging. Prerequisite: consent of instructor. 3913 Biomedical Physics Research Seminar (3) F A review of important research papers and current innovations in biomedical physics. Prerequisite: consent of instructor. 395V Special Topics (1, 2, 3, or 4) Topics from physics and related fields (biophysics, cosmology, etc.) in either lecture- or laboratory oriented format, depending on the specific topic selected. Course may be repeated for credit. Prerequisite: consent of instructor. Offered on demand. 4111, 4121 Advanced Lab I, II (1, 1) F, SP Significant experiments chosen from electricity and magnetism, optics, atomic and nuclear physics. Attention is given to laboratory techniques and data analysis. Prerequisite: 6 hours of upper division physics. Laboratory, 3 hours. 4313 Quantum Theory (3) F Introduction to quantum physics of particles at the atomic and nuclear level. The Schroedinger equation, the uncertainty principle, angular momentum and spin. Prerequisite: Physics 2434, Mathematics 2324. Lecture, 3 hours. 4323 Atomic and Nuclear Structure (3) SP Quantum theory applied to molecules, the hydrogen atom, many-electron atoms, and nuclei. Nuclear models and structure, nuclear decay, nuclear reactions, and the Standard Model of electromagnetic and nuclear interactions. Prerequisite: Physics 4313. Lecture, 3 hours. 491V Independent Study (1, 2, 3) Independent study or research by the student on a problem of special interest. Prerequisite: Consent of instructor. Offered on demand. 4991 Senior Seminar (1) SP Required of all senior majors in the department. Overview of major field. Special project or research paper. Two hours each week.
{"url":"http://www.lipscomb.edu/physics/courses","timestamp":"2014-04-19T11:59:20Z","content_type":null,"content_length":"29304","record_id":"<urn:uuid:f8265eaa-fe1d-4b4f-a3e9-5dd364c9b6e9>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Total # Posts: 69 Does anyone know where are Ontario's largest mineral deposits other than the ring of fire in the north? Thanks for the link Ms. Sue, still having trouble. Hi i'm looking for some help finding information on the topic of "mineral deposits" - specifically; where are Ontario, Canada's largest ones, what are they & how did they form? I'm having trouble finding enough research on this topic for my paper so any g... Number Sequences Explain It. What do you need to do to extend a pattern? 3rd Grade Casey has 10 olives to eat. She wants to have the same number of olives at lunch and at dinner. She wonders how many olives she should eat at each meal. Which number sentence fits this problem? Can someone give me the answers to pizzazz e-8 page Calculus antiderivatives find an antiderivative of the function ((t-1)^2)/sqrt(t) that goes through the point (1,2) i find derivative of top and bottom first and get 2t-2/(1/2*sqrt(t)) but this doesnt give right solution then i find diervative of that again and i get 2/(-1/4 * t^(-3/2)) what am i doin... find value of def integral with a=-2 and b=2sqrt(3) definite integral is : x^3 * sqrt(x^2+4) dx for integral i get 1/15 *((4+x^2)^(3/2)) (-8+3x^2) for value i get [1536- 64sqrt(2)]/15 but its' wrong. help please integral calculus find indefinite integral [(sinx)^3]*[(cosx)^3] dx i got 1/192 * ((cos(6x)-9cos(2x)) + C but this isnt right. explain plz? thnx find value of def integral with a=-2 and b=2sqrt(3) for integral i get 1/15 *((4+x^2)^(3/2)) (-8+3x^2) for value i get [1536- 64sqrt(2)]/15 but its' wrong. help please def integrals find value of def integral with a=-2 and b=2sqrt(3) for integral i get 1/15 *((4+x^2)^(3/2)) (-8+3x^2) for value i get [1536- 64sqrt(2)]/15 but its' wrong. help please find indefinite integral [(sinx)^3]*[(cosx)^3] dx i got 1/192 * ((cos(6x)-9cos(2x)) + C but this isnt right. explain plz? thnx Simplify and write in standard form. 1.) (radical 3 + i radical 15)(radical 3 - i radical 15) 2.) 8 + 16i / 2i Thanks again everyone. Any help is appreciated. 1) Perform the operation and leave the result in trig. form. [3/4(cos pi/3 + i sin pi/3)][4(cos 3pi/4 + i sin 3pi/4)] Thanks 1)Solve and write in standard form: 4x^2-4x+21=0 2)Find the standard form of the complex number. 8(cos pi/2 + i sin pi/2) Thanks everyone! Physics [No Math] When calculating weights away from Earth's surface, for example: At 6.38x10^3 km away from Earth's surface a spacecrafts weight is d=rE+rE Then F=(1/4)(SpaceCraftWeight) Why is the distance 2rE? steve, you are the man! i wish i had your brain. A plane is headed due west with an air speed of 300 mph. The wind is from the north at 80 mph. Find the bearing for the course and the ground speed of the plane. Find the magnitude of the resultant force and the angle between the resultant and each force: Forces of 2 lbs and 12 lbs at an angle of 60 degrees to each other. vbr{lr<5vvx><f(nn)/.bbn> vbr<lr><5.12>(9.8)^3 .bbn=%ff %ff=F(9x) .bbn=%ff9x .bbn=.133%tri .bbn=.tri t.ri=vbr v=161.2 b=39.1 r=12.7 Sub in values of vbr ^^^Wrong! 10c-<0.06>=f(n)(l1/l3+l2/l4) uf=f(x)_ln<1.962> fx/%fl %fl=1.333 2C-<0.09>=f(1.33) ANS:___? Solve: vbr{lr<5vvx><f(nn)/.bbn> vbr<lr><5.12>(9.8)^3 .bbn=%ff %ff=F(9x) .bbn=%ff9x .bbn=.133%tri .bbn=.tri t.ri=vbr v=31.4 b=59.3 r=28.6 KE___? Physics HARD Thank you!!! Why is t.ri = vbr and not just tri? Physics HARD If an objects vbr is 6.32N/sc what is its mass? I'm so stuck...:(((( How can you use rounding to estimate 331 + 193? If alpha= 33.5degrees, a=7.4, and b=10.6, then what is beta to the nearest tenth of a degree. (rewrite of prev question) If alpha=8degrees, beta=121degrees and c=12 then what is Y? If a=8degrees, b=121degrees and c=12 units, then what is Y? I am greater than 200 but less than 500. The sum of my digits is 18. the product of my tens place digit and my 100's place digit equal 20. What digit am I ??? verify the identity: (cosx/1+sinx) + (1+sinx/cosx) = 2secx What key measurments tools would you use to measure quality outcomes such as a reduction in patient falls, medication errors, complication rates? Why? A 350 ml of gas at 1.25 atm and 298 K is heated to 343 K an the volume increased to 750.0ml. What is te new pressure? i have a 36 in. wide by 18 in. length by 20 in high rectangular aquarium that needs to be filled. i have a 12 in high cylindrical bucket with a 12 in diameter to fill the tank. if the aquarium contains 3 in of gravel and the bucket is to be filled to a depth of 11 in, how many... 2. Obtain and interpret descriptive statistics (Mean, Median, Variance, and Std. Dev.). Include the STATA outputs in your results. (Use Stat command tabstat crimerat police59, statistics(mean, p50, var, sd) (3 pts) . tabstat crimerat police59,statistics (mean, p50,... Five percent of all items sold by mail order company are returned by customers for a refund. Find the probability that, of two items sold during a given hour by this company, a) both will be returned for a refund b) neither will be returned for a refund 15.8 grams of NH3 and excess O2 produces NO gas and H2O. 12.8 grams of NO gas was produced. Find the percent yield. Balanced Equation: 4 NH3 + 5 O2 -> 4 NO + 6 H2O A rocket tracking station has two telescopes A and B placed 1.9 miles apart. The telescopes lock onto a rocket and transmit their anlges of elevation to a computer after a rocket launch. what is the distance to the rocket A rocket tracking station has two telescopes A and B placed 1.9 miles apart. The telescopes lock onto a rocket and transmit their anlges of elevation to a computer after a rocket launch. what is the distance to the rocket If a runner travels at a speed of 10 m/s for 5 seconds, stops for a short time then continues on at a speed of 8 m/s for 4 seconds. How long was the runner stopped for if his average speed at the end of the trip is 4 m/s. Expain how there can be more heat energy in an iceburg than in a pot of boiling water. Chemistry- Buffers Since Ka a measure of H+ ions, -log(Ka)= pH.. my assignment says that part is right, I just don't know what to do from there to get M -> moles -> grams Chemistry- Buffers a buffer is prepared by mixing .208 L of 0.452 M HCl and 0.5 L of 0.4 M sodium acetate. When the Ka= 1.8 x 10^-5 I figured out that the pH= 4.74 but then it asks how many grams of KOH must be added to the buffer to change the pH by .155 units? You have a .200M solution of NaCl. if you pipette 2.00 mL of the solution into a 25mL volumetric flask and dilute it to the mark, then what is the molarity of the new solution PHYSICS!!!!! HELP! The velocity of a diver just before hitting the water is -8.6 m/s, where the minus sign indicates that her motion is directly downward. What is her displacement during the last 0.86 s of the dive? im so stuck. i tried this problem several times, and my answers werent correct. ... A team of 6 scouts plans to cross a lake on a raft they designed. The scouts have wooden beams with an average relative density of 0.80. The beams measure 30cmx30cmx3m. The average scouts' mass is about 65kg and for their safety, they want the top of the raft to be at lea... a hot air balloon rise above the ground, accelerating upwards at a rate of 2 m/s^2. The total mass of the hot air balloon and it occupants is 400kg. the density of the air is 1.3 kg/m^3. a) Determine the volume if the hot air balloon. b) if this acceleration is maintained for ... CHEMISTRY RIDDLE!!!! hi! i was just wondering if you could help me figure out this riddle!! im absolutely stumped!!! "Anger: an acid that can do more harm to the vessel in which it is stored, than to anything on which it is poured" is a quote attributed to Seneca, a Roman philosopher. Do... Sam bought 1 of 250 tickets selling for $2 in a game with a grand prize of $400. Was $2 a fair price to pay for a ticket to play this game. If these guys are not solving your problems than I dont know what the heck is this website for? Thank you Chad for asking this question and thank you Br. Bob for answering it! Thank you Dr. Bob! Radius= 7.15x10^7m Mass of Jupiter is 1.90x10^27kg religious education i too struggled with this question what do you call two railroad trains after a head-on collision? thank you Ok so ive gotten this far on finding the points of (2004, 3370) and (2007, 3552) can someone help me find out the equation in y=mx+b form please sikh temples, muslim mosques thanks math riddle ALGEBRA WITH PIZZAZZ year 7 music explain ternary form what have played/sound in this structure thanks for your help! can someone tell me how to draw a gene Explain how the world would be different if C4 plants and CAM plants had not evolved. Bio - Na+,K, and Cl- leave a cell as a result of? Na+, K+, and Cl- leave a cell as a result of ? A) exocytosis B) pumps C) ATP D) a, b, and c Bioo // gateway for ions to enter and leave a cell Gateway for ions to enter and leave a cell ... A) hydrophobic end B) hydrophilic end C) protein channels D) a, b, and c Algebra II (4x-9)- (7x-3) would be: -3x-6 need some help..waht is predicate, predicator and predication in this sentence: Twenty years ago London could have claimed the title "Smog City,Europe". Writeacher has given good information here. http://www.jiskha.com/display.cgi?id=1178019535.1178021833 ok i can un... I'M IN 6 GRADE PLEASE EXPLAIN HOW TO RENAME FRACTIONS TO DECIMALS AS EASY AS YOU CAN You divide the fraction by the the number at the top by the number at the bottom and get a decimal For example 5 --- = 6 doesn't go into the 5 so we add a 0 and a point and then we add... Whats a good site to go to get information about a county? The information is just the capital, culures, stats, basic information that's all. Thanks http://www.google.com/search?q=county+statistics& ie=utf-8&oe=utf-8&rls=org.mozilla:en-US:official&client=firefox-a There are... I have heard of polyphonic textures, are polyphonic textures the same as polyphonic sections? Polyphonic sections are parts of a homophonic texture that vary and are polyphonic. Here is a site on polyphonic texture. http://www.uwosh.edu/faculty_staff/liske/musicalelements/text...
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=mikey","timestamp":"2014-04-18T16:50:12Z","content_type":null,"content_length":"20421","record_id":"<urn:uuid:e7746f8b-25d3-4825-a614-9c49cbf0dbb1>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
Has anyone considered using standard 6061 aluminum tubing? Submitted by kristxw on Mar 1 2010 3:11pm as mallet shafts? it's pretty light, pretty cheap, and pretty durable? i looked around quick and didn't see any mention of this. i mean used ski poles are like 15 bucks. (though the shop i worked at gave me mismatched poles from winter xc ski sales for free/cheap) and this is cheaper, and doesn't have the taper... any thoughts? pros/cons? 6061 isn't good enough. try 7075. for the geeks: 6061-O: has ultimate tensile strength no more than 18,000 psi (125 MPa), and maximum yield strength no more than 8,000 psi (55 MPa) 6061-T4: has an ultimate tensile strength of at least 30,000 psi (207 MPa) and yield strength of at least 16,000 psi (110 MPa) 6061-T6: has an ultimate tensile strength of at least 42,000 psi (290 MPa) and yield strength of at least 35,000 psi (241 MPa) 7075-O: has maximum tensile strength no more than 40,000 psi (276 MPa), and maximum yield strength no more than 21,000 psi (145 MPa) 7075-T6: has an ultimate tensile strength of 74,000 - 78,000 psi (510 - 538 MPa) and yield strength of at least 63,000 - 69,000 psi (434-476 MPa) 7075-T651: has an ultimate tensile strength of at least 67,000 - 78,000 psi (462 - 538 MPa) and yield strength of 54,000 - 67,000 psi (372-462 MPa). The 51 suffix has no bearing on the mechanical properties but denotes that the material is stress relieved by control stretching. whast the rating on 6063-t6? T6 temper 6063 has an ultimate tensile strength of at least 30,000 psi (207 MPa) and yield strength of at least 25,000 psi (172 MPa) Winston Salem NC Bike Polo You can probably get 2024 raw and use it. It's stronger than 6061 I believe. You'll need to find the right wall thickness though. made a couple out of Lowe's aluminum tubing and about 6 bucks for 3 ft it works fine as a community or back up mallet, but that's about it. Put it in the hands of an aggressive player that plays every game they can and it'll get all bent and wobbly pretty fast. Not sure exactly what grade that stuff is, since Lowe's has no mention of it on their page. Maybe some legit 6063 would hold up Plus, that taper helps in mallet making. I always make the top hole just big enough to fit the pole in and then hammer it in with a mallet...making the whole the exact size it needs to be (let the innuendo jokes begin). With a straight pole you have to drill it exact from the beginning or else you are relying and the securing bolt only to keep it from wobbling. 2021 is not cost efficient. It equals out to the same as buying shafts from MKE (maybe more: http://www.mcmaster.com/#aluminum/=61i9zt). If you want to try some 6063 then here's your product: http:// Oddly enough, mcmaster doesn't carry 7075 which is what MKE's shafts are made of. I can't even find that diameter available on any raw materials site. i also just found a length of 3.5 ft of welded ti at a scrap yard my friend works at for 17 dollars. It bends but doesn't break!, try it out. what is the weight /diameter of the piece? Yo Dawg I heard you like redundancies so we got a PIN number for your PIN a friend of mine contacted Reynolds about making titanium shafts. They would be about 25 bucks each...if we were to buy them in bulk. Can't remember how many they said. Somewhere between 100 and 500...maybe more it's a little under .75 inch. i have some hdpe and UHMWPE on order so when it arrives i will see how well it goes. I'm jealous. That's just the right width too. Best of luck with buildin that. Where'd you get UHMW tube? Or are you just milling rod? Hit your friends with sticks I have one made out of aluminum tubing. Just general home depot stuff. Holds up great and ive had people try to hack it and all it did was mess up their ski pole not the pipe I had. Stuff is light cheezer wrote: I have one made out of aluminum tubing. Just general home depot stuff. Holds up great and ive had people try to hack it and all it did was mess up their ski pole not the pipe I had. Stuff is light maybe there's a difference between home depot and lowe's tubing....because that lowe's stuff sucks mad polo ass. awlawall wrote: cheezer wrote: I have one made out of aluminum tubing. Just general home depot stuff. Holds up great and ive had people try to hack it and all it did was mess up their ski pole not the pipe I had. Stuff is light maybe there's a difference between home depot and lowe's tubing....because that lowe's stuff sucks mad polo ass. There is a chance my mallets have not been through the abuse yours have. Mine has not really been used in tournaments I usually just stick with pickup. Might be why mine works so well. I have seen the same material last through a few matches with AB though cheezer wrote: awlawall wrote: cheezer wrote: I have one made out of aluminum tubing. Just general home depot stuff. Holds up great and ive had people try to hack it and all it did was mess up their ski pole not the pipe I had. Stuff is light maybe there's a difference between home depot and lowe's tubing....because that lowe's stuff sucks mad polo ass. There is a chance my mallets have not been through the abuse yours have. Mine has not really been used in tournaments I usually just stick with pickup. Might be why mine works so well. I have seen the same material last through a few matches with AB though Dammit! I meant to take a picture of that mallet tonight. Its still in play, but barely. The guy who plays with it plays pretty rough, and he played with it most of Savannah. The other one is catching up to it...but at 5-8 bucks a shaft for 2-3 months of play, its not that bad. I've used regular tubing and it's sheize. It just doesn't stand any abuse. Ski poles not only are they made with "better" aluminum, they are cold-formed and, sometimes, heat treated after being extruded. *Somebody please think of the children!!* conduit. work on them muscles boy. Polo at Mack Park in Denton Tx Sundays 5pm-10pm $15 seems a bit much for used ski poles. The salvation army here sells them for $5/set in the winter and $2.50/set in the summer. Not sure if that's standard pricing for their stores in other cities or not. Also, I made a classified ad asking for old x-country ski poles and got 6 sets for $10. On the figures of metals which is.more important ultimate tensile strength or max yield strength. Can get straight 6063-t6 (19 diameter with a 1.21mm wall thickness) pretty cheap would that be strong enough or just bend 4 comments
{"url":"https://leagueofbikepolo.com/forum/gear/mallets/2010/03/01/has-anyone-considered-using-standard-6061-aluminum-tubing","timestamp":"2014-04-20T00:38:46Z","content_type":null,"content_length":"81474","record_id":"<urn:uuid:b2dfae33-7289-4569-86ab-2775d625ef72>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00650-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Find the exact value of log base pi (pi^3 cos(pi)) I first canceled out log base pi and pi and was left with 3 cos (pi). Then I looked at the unit circle and looked at pi and cos is -1. So then I multiplied 3(-1)=-3 and my answer was wrong. Can someone explain what I am doing wrong please? • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50ff6626e4b00c5a3be657b5","timestamp":"2014-04-18T14:01:16Z","content_type":null,"content_length":"44617","record_id":"<urn:uuid:05386dd0-132f-4c2a-9813-49c1b0a91ba8>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00291-ip-10-147-4-33.ec2.internal.warc.gz"}
Rotation of elements of an array Rotation of elements of an array I have written two rotation functions that operate on a character array of length equal to a perfect square. The first rotate n times each odd line to the right and n times each pair line to the left. The second rotates n times each odd column down and n times each pair column up, where n is the numer of current line or column. I would like to reduce everything to a single function, or rather, to a single loop that moves each character in its final position without performing all the steps. You can do this? Any suggestions?
{"url":"http://cboard.cprogramming.com/cplusplus-programming/142507-rotation-elements-array-printable-thread.html","timestamp":"2014-04-16T12:03:01Z","content_type":null,"content_length":"6771","record_id":"<urn:uuid:cc352721-6ec3-412c-b653-990fb5a326d3>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
Braingle: 'Horology Howard' Brain Teaser Horology Howard Situation puzzles (sometimes called lateral thinking puzzles) are ones where you need to ask lots of yes or no questions to figure out what happened in the situation. These are good puzzles for groups where one person knows the puzzle and answers the questions. Puzzle ID: #12641 Category: Situation Submitted By: Codammanus Corrected By: boodler Howard is almost obsessed with timing things. Unfortunately, he recently broke his most precious watch, so he buys a new timepiece. Walking from the jewelry store, he hears the gong of the clock in the Public Square which gongs at the hour and half hour. Just when he finishes walking up to his apartment someone asks him for the time. Howard immediately responds ''It's 2:35!'' Now, at the time he gives the time, Howard is not wearing a watch. The new watch is still in the box, which is sealed, and not transparent. Nor is it an audio-watch that tells time. He doesn't hear someone else say what time it is either. Howard also has a quirk about time. He never asks anyone for the time, never looks at a clock, a watch, or anything that tells time that isn't his and on his wrist. He also never counts past the number 12, but still remains efficient at whatever he does. Therefore, how did Howard know it was, precisely, 2:35. (Oh, and he was correct). Show Hint Howard went to the jewelry store right after his watch hit the floor. The time stopped on his watch at 2:00. Coming from the jewelry store, Howard hears the gong signaling 2:30 from the clock in the Public Square. (He knows an hour hasn't passed yet since his old watch bit the dust). As soon as he hears the gong, he begins to listen to the timepiece in the box, counting the ticks. He counts the ticks 12 at a time, each one being one second long. For every 12 ticks, he puts a mark on the watch box. For every 5 marks, he knows he has counted a minute. It just so happens that when he is asked the time, he just finished noting the 5th minute, and therefore can say correctly that it was 2:35. Hint Explained: "He never counted past 12, and he won't start now" should have reinforced the number 12 in your mind, reminding you that 12 is a multiple of 60 by 5. Therefore, Howard would be able to just count 5 sets of marks, each mark representing 12 seconds, without having to count past 12. Hide What Next?
{"url":"http://www.braingle.com/brainteasers/teaser.php?op=2&id=12641&comm=0","timestamp":"2014-04-19T12:17:33Z","content_type":null,"content_length":"24892","record_id":"<urn:uuid:506ad6f9-338e-4a62-9b62-531c2eb98cdd>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
Pencils of curves on smooth surfaces Melle Hernández, Alejandro and Wall, Charles Terence Clegg (2001) Pencils of curves on smooth surfaces. Proceedings of the London Mathematical Society , 83 (2 ). 257-278 . ISSN 0024-6115 Official URL: http://plms.oxfordjournals.org/ Although the theory of singularities of curves - resolution, classification, numerical invariants - goes through with comparatively little change in finite characteristic, pencils of curves are more difficult. Bertini's theorem only holds in a much weaker form, and it is convenient to restrict to pencils such that, when all base points are resolved, the general member of the pencil becomes non-singular. Even here, the usual rule for calculating the Euler characteristic of the resolved surface has to be modified by a term measuring wild ramification. We begin by describing this background, then proceed to discuss the exceptional members of a pencil. In characteristic 0 it was shown by Há and Lê and by Lê and Weber, using topological reasoning, that exceptional members can be characterised by their Euler characteristics. We present a combinatorial argument giving a corresponding result in characteristic p. We first treat pencils with no base points, and then reduce the remaining case to this. Item Type: Article Uncontrolled Keywords: Numerical invariants of singularities; Characteristic p; Singularities of curves; Resolution; Bertini’s theorem; Pencil; Euler characteristic; Wild ramification Subjects: Sciences > Mathematics > Algebraic geometry ID Code: 13210 Deposited On: 06 Sep 2011 07:53 Last Modified: 06 Feb 2014 09:43 Repository Staff Only: item control page
{"url":"http://eprints.ucm.es/13210/","timestamp":"2014-04-18T10:37:58Z","content_type":null,"content_length":"26285","record_id":"<urn:uuid:1a935166-70d7-4ab1-b522-2ea31fa3ee4c>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
Limits and sequences May 3rd 2008, 02:27 PM Limits and sequences hi im stuck on a question and dnt knw what the answer is about.. Xn=(n^4+1)/(n^4-4). for n>2. it asks me to determine L=lim n->infinity Xn for each positive real number e, determine an integer No such that |Xn-L|<e for all integers n>No May 3rd 2008, 02:39 PM mr fantastic I'll give some help with the first bit (needed for the second bit) and let you have another try at the second bit: $\lim_{n \rightarrow \infty} \frac{n^4 + 1}{n^4 - 4} = \lim_{n \rightarrow \infty} \frac{1 + \frac{1}{n^4}}{1 - \frac{4}{n^4}} = \frac{1 + 0}{1 - 0} = 1$. May 3rd 2008, 02:45 PM limits and sequences is it because 1/n^4 converges to 0 that the L=1. what do u do with this limit? May 3rd 2008, 02:57 PM mr fantastic May 3rd 2008, 03:22 PM nope.. my lecture notes arent very informant.. sorry bout that. the answers are quite confusing. because they get 1+5/(n^4+1). dnt knw where that 5 came from? May 3rd 2008, 03:37 PM May 3rd 2008, 03:44 PM mr fantastic Another way of doing it is to first note that $\frac{n^4 + 1}{n^4 - 1} = \frac{(n^4 -1) + 5}{n^4 - 1} = 1 + \frac{5}{n^4 - 1}$ ...... You're advised to thoroughly review the algebra that is pre-requisite for these sorts of questions. May 3rd 2008, 03:51 PM mr fantastic I'm sorry but I find it hard to believe that you would be given a question like this and have no example to refer to, either from class notes or textbook. $|X_n - L| = \left| 1 + \frac{5}{n^4+1} - 1 \right| = \left|\frac{5}{n^4+1}\right| < \epsilon$ when $n > \left( \frac{5}{\epsilon} - 1\right)^{1/4}$. So choose $N_0 = \left( \frac{5}{\epsilon} - 1\right)^{1/4}$.
{"url":"http://mathhelpforum.com/calculus/37028-limits-sequences-print.html","timestamp":"2014-04-19T06:58:15Z","content_type":null,"content_length":"10866","record_id":"<urn:uuid:f5f246e7-f17a-4981-b5aa-ad9c45227cf4>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00377-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Total # Posts: 1,485 Find the standard normal area for each of the following, showing your reasoning clearly. a. P(1.22<z<2.15) b. P(2.00<z<3.00) c. P(-2.00<z<2.00) d. p(z=0.50) The Weight of a small Starbucks coffee is normally distributed random variable with a mean of 360 grams and a standard deviation of 9 grams. Find the weight that corresponds to each event. a. highest 10% b. highest 50% c. highest 5% D. highest 80% e. lowest 10% f. middle 50% g... College Algebra An event in the Summer Olympics is 10-meter springboard diving. In this event, the height s, in meters, of a diver above the water t seconds after jumping is given by s(t) = 4.9t2 + 7.1t + 10. What is the maximum height that the diver will be above the water? Round to th... college algebra The height of a projectile fired upward is given by the formula s = v0t 16t2, where s is the height, v0 is the initial velocity and t is the time. Find the times at which a projectile with an initial velocity of 128 ft/s will be 144 ft above the ground. Round to the nea... Healthcare finance I post a question earlier- confused as to when taxes are assumed. Am I heading in the right direction when I say after profit- so my answer is 7,937.5 based on profit of 82,500. I am taking 317,5009Total costs)/40(cost per visit). Or if before profit 342,750/40= 8,567.75??????... Healthcare Finance you are considering starting a walk in clinic. Your financial projections are as follows: revenues $400,000, wages and benefits 220000, rent $5,000., depreciation $30,000., utilities $2,500., medical supplies $50,000. and administrative costs $10,000. Assume all costs are fixe... A 3.0g sample of NaCl is added to a Styrofoam cup of water, and the change in water temperature is 6.5∘C . The heat of solution of NaCl is 3.76 kJ/mol. What is the mass (in g) of water in the Styrofoam cup? Acetylene(C2H2) torches are used in welding. How much heat (in kJ) evolves when 2.0L of C2H2 (d=1.0967kg/m3) is mixed with a stoichiometric amount of oxygen gas? The combustion reaction is C2H2(g) +52O2(g)→2CO2(g)+H2O(l), ΔH∘=−1299.5kJ Is the comma in the right place below? Did you know, Dad, that the the colonists broke all the treaties signed by Philip and his father? Do the commas go there because to set off a nonessential integrated science find the resultant force on a .150kg baseball if gravity is pulling down and it is hit at a 45 degree angle with 50N of force by a baseball bat. Which of the following is true of the Industrial Workers of the World (I.W.W.)? A. It was founded in Russia. B. It made business leaders create new methods to deal with labor issues. C. It had very little influence over the labor movement. [D. It was devoted to organizing high... Pilar picked 12 apples.she gave 1/4 to Dwayne 1/4 tot Murphy and 1/4 to Kelley .how many apples did each get Can you check these? Neither the tourists nor their guide has/have time to photograph the gazelle? I think has because guide is singular. Neither my spaniel nor my neighbor's terriers do/ does tricks. I think do because terriers is plural. Which verb should I use and why? Either the wild dogs or the baboon makes or make that howling sound. I think since dogs is plural it should be make? Suppose a student titrates a 10.00-mL aliquot of saturated Ca(OH)2 solution to the equivalence point with 13.65 mL of 0.0234 M HCl. What was the initial [OH −]? What is the experimental value of a block with a velocity 4m/s slides 8m across a rough horizontal floor before coming to rest.the coefficient of friction is. MATH Fractions tony has 15 pennies in his pocket. 5/8 of his coins are pennies. how many coins does tony have in his pocket? thank you very much a region has 12 equal squares, what is the number of squares in 1/3 of the region How about this one? During the next few days, I would explore the city. Preposition is during and object is days? Window and right? Should I include the as the object of the preposition? Out in the first one To in the second one? I think window or the window for the first one I think the right or just right for the second one? Science -- I can't find my original post... In a closed room with an electric system controlled by a thermostat, would hanging a load of laundry to dry in that room require the system to work harder to maintain the temperature of the room? Last one, The flood destroyed everything in its path. Any predicate adjectives? In this sentence, When the dam broke, it sounded explosive. Is explosive an adverb or an adjective? Water rose almost to the top of the dam. Is the a predicate adjective? jenna has two lengths of ribbon, one 24 inches one 36 inches. she want to cut the ribbon inot equal lengths and not have any left over. what lengths can she cut the ribbon? What are the adjectives in this sentence? Many families headed for a favorite beach last week during the hot weather. Many, a, favorite, last, the, hot Thank you DrBob, that helps a lot! Two competing corporations are submitting data to the US Military to create chemical hot packs. The Military has stipulated that each hot pack may cost no more than 85 cents and must result in at least a 20degrees Celsius rise in temperature when activated. Which of the follow... Calculus Area between curves Consider the area between the graphs x+6y=8 and x+8=y2. This area can be computed in two different ways using integrals First of all it can be computed as a sum of two integrals where a= , b=, c= and f(x)= g(x)= I found a, but not b or c. I can't seem to figure out f(x) an... Calculus Area between curves Evaluate the definite integral: sqrt(8-2x) lower limit=-7 upper limit=0 I got -(1/3)(8-2x)^(3/2) and it was wrong. Please Help! Thanks in advance! Yes I figured it out thanks Sketch the region enclosed by the given curves. Decide whether to integrate with respect to x or y. Then find the area of the region. x+y^2=42, x+y=0 Which word fits these sentences? A Persian rug with beautiful patterns lay/laid on the floor. I think lay for present tense? The temperature has raised/risen four degrees in the past hour. I think risen because there is no object? One more... After the ceremony, the queen lay/laid her crown in its glass case. I think laid for past tense. Which word fits these sentences? A Persian rug with beautiful patterns lay/laid on the floor. I think lay for present tense? The temperature has raised/risen four degrees in the past hour. I think risen because there is no object? Which word fits these sentences? A Persian rug with beautiful patterns lay/laid on the floor. I think lay for present tense? The temperature has raised/risen four degrees in the past hour. I think risen because there is no object? I think he learned and had been waiting. What about these? In 1919 he learned/is learning about a contest in which he could win $25,000. People in Paris had been waiting/will have been waiting for him for hours, and they met his plane with a huge celebration. Which words go in the following sentences. Young Charles was born in 1902 and was growing/grew up on a farm. Later, although he has studied/had studied engineering in college, he was more interested in flying planes. In the sentence They are not nuts. They is the subject Are is the verb What is the predicate word? How might you apply ethical philosophies and principles that summarize what you perceive to be the top five ethical issues challenging health care delivery today? When you inhale, where is there more oxygen, in the lungs or body. I would just like someone to check my work and make sure I am doing it correctly before I continue. f(x)=x^2-3x+5 1. f(2) I got f(2)=-7 and g(x)=5x-1 1.g(-3) I got g(-3)=-16 Thanks for any help!! Beginner Spanish I need help please Complete the sentences with the correct forms of conseguir, ir, pensar, querer, or suponer. Nosotros _______ de excursión a las montañas, Toño. ¡Yo____ que vas a dormir todo el fin de semana! No, Carmen. No voy a dormir. Voy a ver ... How about these? Most of the students prefer acting as his or her/their best job. His or her? Both plural so top one their and bottom one they? I thought each was singular? Which pronoun? That is the month many of the classes put on its, their own plays. Easpch of the classes chooses the play they, it will put on. You need to put carpet in a rectangular room that measures 12 feet by 17 feet at a cost of $27.50 per square yard. Assuming that you can buy precisely the amount of carpet you need, how much will the carpet for the room cost? I came up with $1870.00 but Im, not sure if I am do... Calculus Homework You are blowing air into a spherical balloon at a rate of 7 cubic inches per second. The goal of this problem is to answer the following question: What is the rate of change of the surface area of the balloon at time t= 1 second, given that the balloon has a radius of 3 inche... Calculus Practice Problems A filter filled with liquid is in the shape of a vertex-down cone with a height of 12 inches and a diameter of 18 inches at its open (upper) end. If the liquid drips out the bottom of the filter at the constant rate of 3 cubic inches per second, how fast is the level of the l... Calculus Practice Problems A boat is pulled into a dock by a rope attached to the bow (front end) of the boat and passing through a pulley on the dock that is 4 m higher than the bow of the boat. If the rope is pulled in at a rate of 3 m/s, at what speed is the boat approaching the dock when it is 8 m ... Calculus Homework Let A(t) be the (changing) area of a circle with radius r(t), in feet, at any time t in min. If the radius is changing at the rate of dr/dt =3ft/min, find the rate of change of the area(dA/dt) at the moment in time when r = 16 ft dA/dt= An astronaut wearing a 20-kg spacesuit jumps on the moon with an initial velocity of 16 m/s. On the moon, the acceleration due to gravity is 1.62 m/s squared. What is the maximum height he reaches? An astronaut wearing a 20-kg spacesuit jumps on the moon with an initial velocity of 16 m/s. On the moon, the acceleration due to gravity is 1.62 m/s squared. What is the maximum height he reaches? A discount store marks up the merchandise it sells by 55%. If the wholesale price of a particular item is $25, what should the retail price be set at? What are some examples of an agonist? Use Distributive Property. a(b+c) = ab + ac 4th grade math I have letters representing each species. Replace each letter with the number representing the specie. There are 62 Clams. There are 70 Fish. C=62 B= 14+C M= 14+C-9 F=70 I= 1/2*F R= 1/2*F-21 S= 1/ 7^3 is 7 multiply by itself 3 times 7*7*7=x 7^9 is 7 multiply by itself 9 times 7*7*7*7*7*7*7*7*7=y You take the answer from 7^3 (x) and multiply it with 7^9 (y) and you get the answer. When it says more than it's addition so the equation is 0.25x+2=6 Then you subtract 2 from both side leaving 0.25x = 4 and then divided by 0.25 to get x by itself and then there's your answer!! It's an upside down trapezoid. with 50 degrees on the left top corner. a 43 degrees on the right top corner. T2 which is 61N is the shorter line on the bottom on the trapezoid. and M1 is hanging on T2 left side and M1 is hanging on T2 right side. Does that make sense?? sor... The left-hand cable has a tension T1 and makes an angle of 50 degree with the horizontal. The right-hand cable has a tension T3 and makes an angle of 43 degree with the horizontal. A W1 weight is on the left and a W2 weight is on the right. The cable connecting the two weights... K2SO4+ Na2HPO4=? NaHCO3 + Sr(NO3)2=? CuSO4+LiNo3 Potassium sulfate+lithium nitrate Can someone help me figure out the equations for these two? Phases aren't required Steve mcspoke left home on his bike traveling at 18km/hr. Steve's brother set out 2 hours later , following the same route, TRAVELING 54 per hour. How long did brother have to travel to catch up with Is it 15kmper hour? Rod and John ride their bikes in opposite directions. Rod ride 5 miles an hour faster than john. After 2 hours, they are 70 miles apart. How fast is john biking? I get .27 but that's not an answer choice. Based on published real estate research, 65% of homes for sale have garages, 20% have swimming pools, and 18% have both features. i. Given that a home for sale has a garage, what s the probability that it also has a pool I got 2 percent for number two. But I can't solve for number one. Is it 60%? You and your friend decide to get your cars inspected. You are informed that 70% of cars pass inspection. If the event of your car s passing is independent of your friend s car: iii. What is the probability that both of the cars pass inspection (6 points)? a. 60% b. ... Expected number of moles of copper oxide (using the emprical formula of copper oxide and the starting moles of Cu) Moles of Cu in the copper sulfate salt: mass is 90. and there are 5g of copper sulfate Calculate the moles of O in 2.57g of Ba(No3)2 I get 33.2m On an alien planet, volume is measured in units of smaugs (S) and odahviings (O), where 1 S = 2.25 O. By some weird coincidence, they use the exact same metric system that we use. Knowing this, how much is 500.0 O in units of kS Let p(x,y) be a point on the graph of y= X^2-3 A) Express the distance from P to the point (1,2) as a function of x. Simplify completely. B) Use your calculator to determine which value of x-yields the smallest d. a rectangular yard is to be enclosed by a new fence on 3 sides and by an existing fence on the fourth side. the amount of new fencing to be used is 160 feet a rectangular yard is to be enclosed by a new fence on 3 sides and by an existing fence on the fourth side. the amount of new fencing to be used is 160 feet find a possible polynomial equation of degree 4 if the zeroes are -1,2, and 5, the graph touches the x-axis at -1 but does not cross through, and the y intercept is -5 I don't really understand how you got this answer. Calculate the theoretical mass percent of oxygen in KClO3. here is the my mass percentage of 02 25gKCl03/30.0gKCl03=0.61%02. Math (Pre-calc) 1) F(x)= (X+2)/(X^2-4x-5) A) Find the domain (-00,-1)u(-1,5)u(5,00) B) find F(-3) I'm not sure how to do it. C) Find F(X+2) I'm not sure either. 2. f(x)= (2x)/(x-4) G(x)= (x)/(X+5) find the domain of each. A)(F+G) (4x^2+5x)(x-4)(X+5) x can't= 4,-5 B) (F*G) (2x^2)/(... The region bounded by y=-x^2+14-45 and y=0 is rotated about the y-axis, find the volume. They are both 1Mole How much solid product formed with 30ml of NaOH and 40mL of NiCl2? I'm not really sure how to go about this? Any help would be appreciated. This is for my study guide in order to do well on my test. Someone please help. Solve the system of equations 3x-y=11 -5x+y=12 2. Perform the indicated operation and simplify (5/(x-2)+(4(x^2-2x) 3.simpify and write with only positive exponents (7x^3y^-2)/(x^-4y^9)^-3 4. find the solution of the inequality 2x-11 is less than or equal to -4(5x-3) 5. the len... SAT math If the ratio of blue balls to red balls is 2.5 to 4, and there are 25 blue balls, how many red balls are there? Words with the root sid: 1. President 2. Reside 3. Residence 4. inside 5.preside Potation: The act of drinking something *Unfortunately I can't explain your "Insidious" issue Chemistry-Please help! In a concentration cell, what would be the potential if we studied two solutions that had the same concentration? Chemistry- Please Help!! Acetic acid has a Ka = 1.75 x 10-5 and formic acid has a Ka = 1.8 x 10-4. Which is the stronger acid? Chemistry- Please Help!! If a 0.10 M solution of an acid has a pH of 4.5, what would be the predicted pH of a .10 M solution of the conjugate, and why? Chemistry- Please Help!! according to lechateliers principle, would increasing the starting concentration of Ca2+ increase or decrease the amount of Ca(IO3)2 that dissolves??? Chemistry- Please Help!! Predict the proper equilibrium constant expression for the following reaction... Fe(OH)3(s) ----> Fe3+(aq) + 3 OH-(aq) Explain the reasoning behind your answer. Mass Media Public Law In the aftermath of the 9/11/2001 terrorist attacks on America, and the resulting War on Terrorism by the United States, what do you think is the reasonable and legal limitations on the First Pages: 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Next>>
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=KELLY","timestamp":"2014-04-19T02:49:44Z","content_type":null,"content_length":"29779","record_id":"<urn:uuid:25a385ae-05c3-4915-b94f-d07b37220589>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00466-ip-10-147-4-33.ec2.internal.warc.gz"}
Revision for Standard Grades and need Help March 25th 2009, 12:59 PM #1 Oct 2008 Revision for Standard Grades and need Help I am revising with past papers for the 2009 Standard Grade Credit Maths. Would appreciate some help with the following past paper example. Solve algebraically the equation 5 cos xo + 4=0, 0 ≤ x ≤ 360 If anyone knows where I would be able to obtain Standard Grade Credit past paper exemplar answers (that show working not just the end result) would much appreciate if you could let me know I have only got around 2 months left to exam time and need all the help I can get Thanks in anticipation According to the wording of your question the argument of the cos-function is in degrees. $5\cos(x)+4=0~\implies~\cos(x)=-\dfrac45~\implies~x\approx 143.13^\circ\vee x\approx360^\circ-143.13^\circ\approx 216.87^\circ$ Goto Maths -> Standard Grade -> Exam Solutions at the following: Invergordon Academy - Maths Resources Hope that's of some use to you. Good luck with your Maths! (and the rest of your Standard Grades - hope you do Higher Maths by the way, it's rather fun Many thanks that is exactly what I was looking for and will help immensely. March 26th 2009, 02:34 AM #2 March 26th 2009, 12:07 PM #3 Mar 2009 March 27th 2009, 07:28 AM #4 Oct 2008
{"url":"http://mathhelpforum.com/algebra/80635-revision-standard-grades-need-help.html","timestamp":"2014-04-21T10:08:24Z","content_type":null,"content_length":"39897","record_id":"<urn:uuid:f31cdf52-be89-4ca5-b5e9-04d2adf910f3>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00483-ip-10-147-4-33.ec2.internal.warc.gz"}
How can I determine my BMI and also how can I figure out how many calories I should be eating to lose weight? Share | Status:Closed Asked:Jun 25, 2013 - 07:29 AM How can I determine my BMI and also how can I figure out how many calories I should be eating to lose weight? I am 68 years old and am 5 feet tall/short and weigh 160 pounds. Do you have the same question? Follow this Question BMI has traditionally been used as a quick, non-invasive health assessment that provides an approximation of body fat and identifies potential health risk. BMI or Body Mass Index is a simple ratio of weight to height. It can be calculated either metrically or via U.S. conversion through the following equations: Metric Formula Height (M) 2 US Conversion Weight (Lbs) x 703 Height (In) 2 The results are interpreted as follows: Underweight <18.5 Normal Weight 18.5-24.9 Overweight 25-29.9 Obesity >30 You can also utilize the free online BMI Calculator on ACEfit.com. Enter your weight, height and select calculate. The program will do all the math for you and offer an interpretation of the results. The BMI measurement however is not without interpretation limitations as it doesn’t account for variances in weight due to muscularity, frame size, water retention, fat distribution and bone density. Therefore, using BMI alone as a means of estimating body fat may pose some validity issues with individuals such as athletes, children, and the elderly. Often times we find it helpful to use other measures such as percent body fat and/or hip-to-waist ratios in conjunction with BMI measures to get a more comprehensive assessment. Another tool offered on ACEfit.com is the Daily Caloric Needs Estimate Calculator which may offer you a starting point for determining the estimated number of calories needed to simply maintain your current body weight. Once you have that number, you can apply a standard weight loss formula to give you a better idea on a potential target to shoot for each day. The standard equation uses a formula of 3,500 calories equaling 1 pound. This means that in theory to lose 1 pound per week, you would need to create a deficit of approximately 500 calories each day below energy balance (the amount of calories it takes for you to remain at your current weight) either through food, exercise or ideally a combination of both; 500 calories x 7 days/week = 3500 calories (DHHS, 2005). Seeking advice from your medical provider however is always a great place to start. Together you can decide on a plan of action, weight goals, and discuss the possibility of referring you to a Registered Dietitian who can help design a meal plan specific to your individual needs. Centers for Disease Control and Prevention, Assessing Your Weight: Adult BMI, http://www.cdc.gov/healthyweight/assessing/bmi/adult_bmi/#Interpreted U.S. Department of Health & Human Services (DHHS), AIM for a Healthy Weight, 2005; http://www.nhlbi.nih.gov/health/public/heart/obesity/aim_hwt.pdf Source: http://www.acefitness.org/acefit/heal... Jun 25, 2013 - 07:27 AM Report it For weight loss, after working out for 40-60 minutes cardio,...
{"url":"http://answers.acefitness.org/How-I-determine-BMI-I-figure-calories-I-eating-lose-weight-q341412.aspx","timestamp":"2014-04-20T11:15:37Z","content_type":null,"content_length":"38260","record_id":"<urn:uuid:deda651e-d87f-4761-8035-5d670424d1fa>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> Measurement Invariance - Constraints Anonymous posted on Wednesday, May 12, 2010 - 3:55 pm I am testing a CFA model for measurement invariance. If the loadings for a particular measure cannot be constrained, is it appropriate to try to constrain the error variances for that measure? Linda K. Muthen posted on Thursday, May 13, 2010 - 1:38 pm I think you are asking whether you should test for invariance of residual variances when you don't have factor loading invariance. I don't think so. Jon Heron posted on Friday, May 14, 2010 - 5:21 am In Timothy Brown's book on CFA he discusses partial measurement invariance and what is possible if not all loadings can be constrained. I assume you're talking about invariance over two groups. He references Byrne et al (1989) so this might be worth a look if you're not lucky enough to have access to Tim's book. Byrne et al (1989) Psychological Bulletin, 105, pp 456-466. Hmm, that last author sounds familiar. Anonymous posted on Friday, May 14, 2010 - 10:51 am Thank you both for the info! I will check out the Byrne article and hopefully gain some insight on how to proceed. Elayne Livote posted on Friday, April 22, 2011 - 1:19 pm I am testing measurement invariance with a bifactor model where every item loads on the main factor and one of two item group factors (binary indicators). Along the lines of the section on partial measurement invariance in chapter 13 of the User’s Guide, if I find that the loading for an item on one of the two factors is not invariant, would you recommend that I relax the equality constraint on the other factor in addition to relaxing the equality constraint for the threshold to test for partial invariance? Thank you very much for your assistance. Bengt O. Muthen posted on Friday, April 22, 2011 - 3:52 pm I can see going either way, but I would probably do what you mention. Lisa M. Yarnell posted on Monday, March 19, 2012 - 5:41 pm Hello, in a multigroup configural invariant model with categorical indicators, latent means are zero in all groups. In a multigroup measurement invariant model with categorical indicators, latent means are freely estimated in the non-reference groups. What about the scenario of partial measurement invariance (where some loadings and thresholds are constrained across groups and others not)? Should the latent means be set at zero or freely estimated in the second and third groups (the non-reference groups)? It seems that setting the means to be zero in the non-reference groups creates a linear dependency (possibly through the thresholds)? Thank you. Linda K. Muthen posted on Monday, March 19, 2012 - 6:30 pm With partial measurement invariance, the means are zero in one group and free in the others. See Slide 170 of the Topic 2 course handout. Lisa M. Yarnell posted on Monday, March 19, 2012 - 6:32 pm Thanks, Linda. Yes, the slides were helpful, along with trial and error in Mplus to see what the program is doing by default. Thanks again. Linda K. Muthen posted on Monday, March 19, 2012 - 6:34 pm The multiple group defaults are described in Chapter 14 of the Mplus User's Guide under Multiple Group Analysis. Back to top
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=prev&topic=9&page=5452","timestamp":"2014-04-16T11:10:35Z","content_type":null,"content_length":"28095","record_id":"<urn:uuid:3fedbe26-9780-4c46-a670-60305677eb1d>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
Expressing Combinatorial Optimization Problems by Systems of Polynomial Equations and the Nullstellensatz, submitted manuscript 2007, see http://arxiv.org/abs/0706.0578 , 2008 "... A reformulation of a mathematical program is a formulation which shares some properties with, but is in some sense better than, the original program. Reformulations are important with respect to the choice and efficiency of the solution algorithms; furthermore, it is desirable that reformulations c ..." Cited by 17 (13 self) Add to MetaCart A reformulation of a mathematical program is a formulation which shares some properties with, but is in some sense better than, the original program. Reformulations are important with respect to the choice and efficiency of the solution algorithms; furthermore, it is desirable that reformulations can be carried out automatically. Reformulation techniques are very common in mathematical programming but interestingly they have never been studied under a common framework. This paper attempts to move some steps in this direction. We define a framework for storing and manipulating mathematical programming formulations, give several fundamental definitions categorizing reformulations in essentially four types (opt-reformulations, narrowings, relaxations and approximations). We establish some theoretical results and give reformulation examples for each type. "... Systems of polynomial equations over an algebraically-closed field K can be used to concisely model many combinatorial problems. In this way, a combinatorial problem is feasible (e.g., a graph is 3-colorable, hamiltonian, etc.) if and only if a related system of polynomial equations has a solution o ..." Cited by 10 (5 self) Add to MetaCart Systems of polynomial equations over an algebraically-closed field K can be used to concisely model many combinatorial problems. In this way, a combinatorial problem is feasible (e.g., a graph is 3-colorable, hamiltonian, etc.) if and only if a related system of polynomial equations has a solution over K. In this paper, we investigate an algorithm aimed at proving combinatorial infeasibility based on the observed low degree of Hilbert’s Nullstellensatz certificates for polynomial systems arising in combinatorics and on large-scale linear-algebra computations over K. We report on experiments based on the problem of proving the non-3-colorability of graphs. We successfully solved graph problem instances having thousands of nodes and tens of thousands of edges. "... Systems of polynomial equations over an algebraically-closed field K can be used to easily model combinatorial problems. In this way, a combinatorial problem is feasible (e.g., a graph is 3colorable, hamiltonian, etc.) if and only if a related system of polynomial equations has a solution over K. In ..." Cited by 2 (2 self) Add to MetaCart Systems of polynomial equations over an algebraically-closed field K can be used to easily model combinatorial problems. In this way, a combinatorial problem is feasible (e.g., a graph is 3colorable, hamiltonian, etc.) if and only if a related system of polynomial equations has a solution over K. In this paper we investigate an algorithm aimed at proving combinatorial infeasibility based on the low degree of Hilbert’s Nullstellensatz certificates for polynomial systems arising in combinatorics and large-scale linear algebra computations over K. We report on experiments based on the problem of proving the non-3-colorability of graphs. We successfully solved graph problem instances having thousands of nodes and tens of thousands of edges. 1 "... The purpose of this note is to survey a methodology to solve systems of polynomial equations and inequalities. The techniques we discuss use the algebra of multivariate polynomials with coefficients over a field to create large-scale linear algebra or semidefinite programming relaxations of many kin ..." Cited by 2 (1 self) Add to MetaCart The purpose of this note is to survey a methodology to solve systems of polynomial equations and inequalities. The techniques we discuss use the algebra of multivariate polynomials with coefficients over a field to create large-scale linear algebra or semidefinite programming relaxations of many kinds of feasibility or optimization questions. We are particularly interested in problems arising in combinatorial optimization. , 2009 "... Systems of polynomial equations with coefficients over a field K can be used to concisely model combinatorial problems. In this way, a combinatorial problem is feasible (e.g., a graph is 3-colorable, hamiltonian, etc.) if and only if a related system of polynomial equations has a solution over the a ..." Add to MetaCart Systems of polynomial equations with coefficients over a field K can be used to concisely model combinatorial problems. In this way, a combinatorial problem is feasible (e.g., a graph is 3-colorable, hamiltonian, etc.) if and only if a related system of polynomial equations has a solution over the algebraic closure of the field K. In this paper, we investigate an algorithm aimed at proving combinatorial infeasibility based on the observed low degree of Hilbert’s Nullstellensatz certificates for polynomial systems arising in combinatorics, and based on fast large-scale linearalgebra computations over K. We also describe several mathematical ideas for optimizing our algorithm, such as using alternative forms of the Nullstellensatz for computation, adding carefully constructed polynomials to our system, branching and exploiting symmetry. We report on experiments based on the problem of proving the non-3-colorability of graphs. We successfully solved graph instances with almost two thousand nodes and tens of thousands of edges. "... Often in mathematics, a theoretical investigation leads to a system of polynomial equations. Generically, such systems are difficult to solve. In applications, however, the equations come equipped with additional structure that can be exploited. It is crucially important, therefore, to develop techn ..." Add to MetaCart Often in mathematics, a theoretical investigation leads to a system of polynomial equations. Generically, such systems are difficult to solve. In applications, however, the equations come equipped with additional structure that can be exploited. It is crucially important, therefore, to develop techniques for studying structured polynomial systems. I work on a wide range of such problems that arise from other areas of mathematics and the physical sciences. The intellectual merit of this research is two-fold: on the one hand, I am advancing the theoretical understanding of fundamental mathematical objects; and on the other, I am developing algorithms for performing computations with them. I have collaborated with 13 researchers, many of whom are near the beginning of their careers. In several cases, I have taken a leadership role with these younger people. These interactions also unite groups in different fields towards common goals. Numerical algorithms from semidefinite programming have become useful in many applications. A guiding open problem is to remove the need for approximations in these methods, while preserving their efficiency. I am working on a solution to this problem, building on recent success. A specific application is the Bessis-Moussa-Villani trace conjecture "... I work on a wide range of problems that arise from other areas of mathematics and the physical sciences. Currently, I am focused on using mathematical and computational tools to solve basic problems in theoretical neuroscience, and in this regard, I have begun collaborations with scientists at the R ..." Add to MetaCart I work on a wide range of problems that arise from other areas of mathematics and the physical sciences. Currently, I am focused on using mathematical and computational tools to solve basic problems in theoretical neuroscience, and in this regard, I have begun collaborations with scientists at the Redwood Center for Theoretical Neuroscience and mathematicians at U.C. Berkeley. I am also interested in theoretical questions involving semidefinite programming, optimization, and computational algebra. The following is a description of several interrelated lines of research in which I will actively participate in the coming years. The first three sections contain very brief discussions of topics related to theoretical neuroscience that I have only begun exploring in recent months. The final sections describe more theoretical studies that I have been investigating in recent years and therefore contain more detailed descriptions. 1. Sparse coding and compressed sensing Sparse coding refers to the process of representing a real vector input (such as an image) as a sparse linear combination of an overcomplete set of vectors (called a sparse basis). Here, overcomplete refers to the fact that there are many more vectors in the sparse basis "... Often in mathematics, a theoretical investigation leads to a system of polynomial equations. Generically, such systems are difficult to solve. In applications, however, the equations come equipped with additional structure that can be exploited. It is crucially important, therefore, to develop techn ..." Add to MetaCart Often in mathematics, a theoretical investigation leads to a system of polynomial equations. Generically, such systems are difficult to solve. In applications, however, the equations come equipped with additional structure that can be exploited. It is crucially important, therefore, to develop techniques for studying structured polynomial systems. Hillar proposes to work on a wide range of problems that arise from other areas of mathematics and from the physical sciences. The intellectual merit of this research is two-fold: on the one hand, Hillar is advancing the theoretical understanding of fundamental mathematical objects; and on the other, he is developing algorithms for performing computations with them. Hillar has collaborated with 13 researchers, many of whom are near the beginning of their careers. In several cases, he has taken a leadership role with these younger people. These interactions broadly impact mathematics by uniting groups in different fields towards common goals as well as by preparing the next generation of mathematicians. Numerical algorithms from semidefinite programming have become useful in many applications. A guiding open problem is to remove the need for approximations in these methods, while preserving their efficiency. Hillar proposes to solve this problem, building "... Systems of polynomial equations are commonly used to model combinatorial problems such as independent set, graph coloring, Hamiltonian path, and others. We formulate the dominating set problem as a system of polynomial equations in two different ways: first, as a single, high-degree polynomial, and ..." Add to MetaCart Systems of polynomial equations are commonly used to model combinatorial problems such as independent set, graph coloring, Hamiltonian path, and others. We formulate the dominating set problem as a system of polynomial equations in two different ways: first, as a single, high-degree polynomial, and second as a collection of polynomials based on the complements of domination-critical graphs. We then provide a sufficient criterion for demonstrating that a particular ideal representation is already the universal Gröbner bases of an ideal, and show that the second representation of the dominating set ideal in terms of domination-critical graphs is the universal Gröbner basis for that ideal. We also present the first algebraic formulation of Vizing’s conjecture, and discuss the theoretical and computational ramifications to this conjecture when using either of the two dominating set representations described above. Keywords: dominating sets, Vizing’s conjecture, universal Gröbner bases 1
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=5256079","timestamp":"2014-04-18T02:08:48Z","content_type":null,"content_length":"36530","record_id":"<urn:uuid:88b8195e-09e0-4b0b-8e12-66fc9ce39a1a>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00253-ip-10-147-4-33.ec2.internal.warc.gz"}
Sample Problems • Assume f is defined and twice differentiable on the whole real line. Around a minimum of the function f, is f concave up or concave down? • Assume f is defined and twice differentiable on the whole real line. Around a maximum of the function f, is f concave up or concave down? A minimum of f will usually occur at the bottom of a right-side up bowl: Having a right-side up bowl means f is concave up here. A maximum of f will usually occur at the top of an upside-down bowl: Having an upside-down bowl means f is concave down here. The Second Derivative Test says • If f is concave up around a critical point, that critical point is a minimum. • If f is concave down around a critical point, that critical point is a maximum. This is true because if f is concave up around a critical point, f looks like this: Such a critical point must be a minimum. On the other hand, if f is concave down around a critical point, then f looks like this: Such a critical point must be a maximum. Be Careful: If f " is zero at a critical point, we can't use the Second Derivative Test, because we don't know the concavity of f around the critical point. Be Careful: There's sometimes confusion about this test because people think a concave up function should correspond to a maximum. This is why pictures are useful. If we remember what a concave up function looks like, we'll be fine. There's a good question that most people have right about now: if you're not told which to use, how do you know whether to use the first derivative test or the second derivative test? The good news is that it often doesn't matter. When it's possible to use both the first derivative test and the second derivative test, they will give the same answer. The other good news is that you can usually do whichever test is easier. Sometimes finding the second derivative is not fun, like with the function The first derivative is and while we could find the second derivative, it's not pretty and we don't want to bother. In this case, it probably makes more sense to plug in a couple of numbers and see what the sign of the first derivative is doing. Sometimes the second derivative test doesn't work at all (if f " is 0 at the critical point), in which case we need to use the first derivative test. On the other hand, sometimes you can see that the second derivative is really nice. Take the function f (x) = x^2 + 4x + 1. The first derivative is f '(x) = 2x + 4 and the second derivative is f "(x) = 2, which is always positive. Therefore f is always concave up, so any critical point needs to be a minimum. The second derivative test for this one is a piece of cake. Mmm, cake. The bad news is that, as with the rest of math, we do need to practice. The more functions we stare at, the better we will become at deciding whether to use the first derivative test or the second derivative test to classify a function's extreme points. Don't worry; there are plenty of practice problems. Let f (x) = xe^x. If possible, use the second derivative test to determine if each critical point is a maximum, a minimum, or neither. Let f (x) = -x^3 + 3x^2 - 3x. If possible, use the second derivative test to determine if each critical point is a minimum, maximum, or neither. Let f (x) = 5 - x^2. If possible, use the second derivative test to determine if each critical point is a minimum, maximum, or neither. For the function, use the second derivative test (if possible) to determine if each critical point is a minimum, maximum, or neither. If the second derivative test can't be used, say so. For the function, use the second derivative test (if possible) to determine if each critical point is a minimum, maximum, or neither. If the second derivative test can't be used, say so. f (x) = e^xsin x for 0 < x < 2π For the function, use the second derivative test (if possible) to determine if each critical point is a minimum, maximum, or neither. If the second derivative test can't be used, say so. f (x) = x^3 – 2x^2 + x For the function, use the second derivative test (if possible) to determine if each critical point is a minimum, maximum, or neither. If the second derivative test can't be used, say so. For the function, use the second derivative test (if possible) to determine if each critical point is a minimum, maximum, or neither. If the second derivative test can't be used, say so. Classify the extreme points of the function, using either the first or second derivative test. Explain why you chose to use the test you did. f (x) = x^4 - 32x Classify the extreme points of the function, using either the first or second derivative test. Explain why you chose to use the test you did. f (x) = (x - 1)^9 Classify the extreme points of the function, using either the first or second derivative test. Explain why you chose to use the test you did. f (x) = (x - 1)^9 Classify the extreme points of the function, using either the first or second derivative test. Explain why you chose to use the test you did. f (x) = e^x^2 - 4x Classify the extreme points of the function, using either the first or second derivative test. Explain why you chose to use the test you did.
{"url":"http://www.shmoop.com/second-derivatives/second-derivative-test-help.html","timestamp":"2014-04-16T20:21:17Z","content_type":null,"content_length":"62518","record_id":"<urn:uuid:011cda72-37dd-487e-9071-21af4a0eb325>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00650-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the perimeter of a 40 acre square in feet? The system of imperial units or the imperial system (also known as British Imperial) is the system of units first defined in the British Weights and Measures Act of 1824, which was later refined and reduced. The system came into official use across the British Empire. By the late 20th century, most nations of the former empire had officially adopted the metric system as their main system of measurement, but some Imperial units are still used in the United Kingdom and Canada. United States customary units are a system of measurements commonly used in the United States. The U.S. customary system developed from English units which were in use in the British Empire before American independence. Consequently most U.S. units are virtually identical to the British imperial units. However, the British system was overhauled in 1824, changing the definitions of some units used there, so several differences exist between the two systems. The majority of U.S. customary units were redefined in terms of the meter and the kilogram with the Mendenhall Order of 1893, and in practice, for many years before. These definitions were refined by the international yard and pound agreement of 1959. The U.S. primarily uses customary units in its commercial activities, while science, medicine, government, and many sectors of industry use metric units. The SI metric system, or International System of Units is preferred for many uses by NIST Real estate is "Property consisting of land and the buildings on it, along with its natural resources such as crops, minerals, or water; immovable property of this nature; an interest vested in this; (also) an item of real property; (more generally) buildings or housing in general. Also: the business of real estate; the profession of buying, selling, or renting land, buildings or It is a legal term used in jurisdictions such as the United States, United Kingdom, Canada, Australia, and New Zealand. Hospitality is the relationship between the guest and the host, or the act or practice of being hospitable. This includes the reception and entertainment of guests, visitors, or strangers. Related Websites:
{"url":"http://answerparty.com/question/answer/what-is-the-perimeter-of-a-40-acre-square-in-feet","timestamp":"2014-04-16T15:59:48Z","content_type":null,"content_length":"26163","record_id":"<urn:uuid:d311d7b5-d10c-4532-baf1-a78b630a9c71>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
3D Game Engine, Get absolute transform up vote 0 down vote favorite I have a parent scene node in my engine, and a child. child's transform(position, rotation and scale) is relative to the parent. Now I want to know the child's absolute transform. I mean the child's relative transform to the main coordinates. Any ideas? Edit: The problem is that I don't store matrices in a scene node. I only have 3 vectors. Position, Rotation and Scale. 3d game-engine rotational-matrices 3 Multiply all your transform matrixes? (parents, ..., sub-parents, children) – loki2302 Dec 8 '11 at 11:10 1 A function that converts pos/rot/scale to a matrix is a necessity in any game engine. Storing/caching said matrix is often a good idea for performance reasons. – Macke Dec 8 '11 at 11:22 @Macke: Then how do I convert a vector to a matrix or convert a matrix to a vector? – Kia.celever Dec 8 '11 at 11:28 @Kia.celever: See en.wikipedia.org/wiki/Transformation_matrix (or just f-in google it, every 3d graphics engine ever made has code for the same) – Macke Dec 8 '11 at 19:33 A lot of engines don't even provide their own code, because every graphics math library around has matrix creation functions. For example, D3DX has an entire family of functions to build various matrices. – ssube Dec 9 '11 at 1:44 add comment 1 Answer active oldest votes You need to walk down the tree and multiply each matrix along the way, from the scene root to the final object. The resulting matrix will be the absolute transforms. up vote 6 down vote accepted add comment Not the answer you're looking for? Browse other questions tagged 3d game-engine rotational-matrices or ask your own question.
{"url":"http://stackoverflow.com/questions/8429976/3d-game-engine-get-absolute-transform/8430047","timestamp":"2014-04-18T01:15:57Z","content_type":null,"content_length":"67968","record_id":"<urn:uuid:1c54daf2-91ea-42ad-8b03-ddc765c64f9b>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Graphs in numarray? Perry Greenfield perry at stsci.edu Wed Apr 17 12:10:32 CDT 2002 Hi Magnus, On Behalf Of Magnus Lie Hetland > I'm looking at various ways of implementing graphs in Python (beyond > simple dict-based stuff -- more performance is needed). kjbuckets > looks like a nice alternative, as does the Boost Graph Library (not > sure how easy it is to use with Boost.Python) but if numarray is to > become a part of the standard library, it could be beneficial to use > that... > For dense graphs, it makes sense to use an adjacency matrix directly > in numarray, I should think. (I haven't implemented many graph > algorithms with ufuncs yet, but it seems doable...) For sparse graphs > I guess some sort of sparse array implementation would be useful, > although the archives indicate that creating such a thing isn't a core > part of the numarray project. First of all, it may make sense, but I should say a few words about what scale sizes make sense. Currently numarray is implemented mostly in Python (excepting the very low level, very simple C functions that do the computational and indexing loops. This means it currently has a pretty sizable overhead to set up an array operation (I'm guessing an order of magnitude slower than Numeric). Once set up, it generally is pretty fast. So it is pretty good for very large data sets. Very lousy for very small ones. We haven't measured efficiency lately (we are deferring optimization until we have all the major functionality present first), but I wouldn't be at all surprised to find that the set up time can be equal to the time to actually process ~10,000-20,000 elements (i.e., the time spent per element for a 10K array is roughly half that for much larger arrays. So if you are working with much smaller arrays than 10K, you won't see total execution time decrease much (it was already spending half its time in setup, which doesn't change). We would like to reduce this size threshhold in the future, either by optimizing the Python code, or moving some of it into C. This optimization wouldn't be for at least a couple more months; we have more urgent features to deal with. I doubt that we will ever surpass the current Numeric in its performance on small arrays (though who knows, perhaps we can come close). > What do you think -- is it reasonable to use numarray for graph > algorithms? Perhaps an additional module with standard graph > algorithms would be interesting? (I'm sure I could contribute some if > there is any interest...) Before I go further, I need to find out if the preceeding has made you gasp in horror or if the timescale is too slow for you to accept. (This particular issue also makes me wonder if numarray would ever be a suitable substitute for the existing array module). What size graphs are you most concerned about as far as speed goes? > And -- is there any chance of getting sparse matrices in numarray? Since talk is cheap, yes :-). But I doubt it would be in the "core" and some thought would have to be given to how best to represent them. In one sense, since the underlying storage is different than numarray assumes for all its arrays, sparse arrays don't really share the same underlying C machinery very well. While it certainly would be possible to devise a class with the same interface as numarray objects, the implementation may have to be completely different. On the other hand, since numarray has much better support for index arrays, i.e., an array of indices that may be used to index another array of values, index array(s), value array pair may itself serve as a storage model for sparse arrays. One still needs to implement ufuncs and other functions (including simple things like indexing) using different machinery. It is something that would be nice to have, but I can't say when we would get around to it and don't want to raise hopes about how quickly it would appear. More information about the Numpy-discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2002-April/001342.html","timestamp":"2014-04-18T16:36:19Z","content_type":null,"content_length":"6533","record_id":"<urn:uuid:6dae4214-a623-43d7-b46b-3e63ef8375ce>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: perpendicular distance from point B(0,5) to the line x+3y-5=0 • one year ago • one year ago Best Response You've already chosen the best response. @virtus Are you here? Best Response You've already chosen the best response. Best Response You've already chosen the best response. @virtus There maybe a direct formula for this, but I don't remember that. I'll start from the basics. Do you have time for this? Best Response You've already chosen the best response. sure, thanks ash2326 Best Response You've already chosen the best response. Can I give the formula?? Best Response You've already chosen the best response. Thanks virtus, let's begin. I recommend that you understand the problem, you'll remember it always. What do you want? Best Response You've already chosen the best response. Go ahead @ash2326 .. Best Response You've already chosen the best response. |dw:1341645333105:dw| Let AB be our line and C is the point for which we need to find the perpendicular distance. Do you understand the diagram? Best Response You've already chosen the best response. Best Response You've already chosen the best response. You can see that the perpendicular line from the point C is intersecting the line AB at a point. Let it be D If we could find the coordinates of D, then we can easily find the distance. do you have any ideas what should we do? Best Response You've already chosen the best response. Best Response You've already chosen the best response. Best Response You've already chosen the best response. find the Point Of Intersection Best Response You've already chosen the best response. Ha ha ha ha.. Best Response You've already chosen the best response. Yeah:) @virtus For that we need to find the equation of the line CD, do you have any idea how to do that? Best Response You've already chosen the best response. well you know what point C is and the gradient of line CD would be -1/ gradient of line AB Best Response You've already chosen the best response. oh wait do we know what C is ? Best Response You've already chosen the best response. Awesome:D Could you do that? We have AB as \(x+3y-5=0\) Yeah we know C as (0, 5) Best Response You've already chosen the best response. Best Response You've already chosen the best response. Oops, You made a mistake I can see that perpendicular line's slope is same as AB, it's to be -1/ slope AB Check again :) Best Response You've already chosen the best response. oh silly me Best Response You've already chosen the best response. No problem, :D find it again? Best Response You've already chosen the best response. y-5 =3(x-0) y=3x +5 3x-y+5=0 Best Response You've already chosen the best response. Correct:D Now find the intersection of the line (3x-y+5=0) and (x+3y-5=0) this will give us D Best Response You've already chosen the best response. D (-1,2) Best Response You've already chosen the best response. Great work:D now find the distance between (0, 5) and (-1, 2) using distance formula. Best Response You've already chosen the best response. That will be the perpendicular distance:D Best Response You've already chosen the best response. ngawwwwwww! I SEE!!!! so that's how you do it ;) oh btw i got square root 10 Best Response You've already chosen the best response. Yeah, that's correct:D Great work btw:D Best Response You've already chosen the best response. thanks so much for your invaluble help @ash2326 :D Best Response You've already chosen the best response. @virtus you did all the work. I just guided you:D You are a good student:D Best Response You've already chosen the best response. you can also use the formula from analytic geometry that the distance from a given point to a given line in general form \(\large Ax+By+C=0 \) is given by Distance = \(\large \frac{Ax_0+By_0+C}{\ pm \sqrt{A^2+B^2}} \) where \(\ (x_0, y_0) \) are the coordinates of the given point. Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4ff7e073e4b01c7be8ca95a1","timestamp":"2014-04-19T20:00:57Z","content_type":null,"content_length":"111196","record_id":"<urn:uuid:9f075201-cd77-4cb6-b9e5-e446cfa02602>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
existence of such a one-to-one transformation May 31st 2012, 08:40 PM #1 Jul 2009 existence of such a one-to-one transformation This question comes from the proof of Neyman's factorization theorem as excerpted below from Robert V. Hogg, Joseph W. McKean, Allen T. Craig, "Introduction to Mathematical Statistics", 6th In the proof, a one-to-one transformation is used which is indicated by the red line. But why does such a one-to-one transformation surely exists? Although this theorem belongs to mathematical statistic, the question itself, I think, belongs to algebra, so I post it here. Thank you for any help! Follow Math Help Forum on Facebook and Google+
{"url":"http://mathhelpforum.com/advanced-algebra/199535-existence-such-one-one-transformation.html","timestamp":"2014-04-21T02:39:47Z","content_type":null,"content_length":"29247","record_id":"<urn:uuid:4badfa3e-9afc-4289-b7f3-932247f8d509>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00169-ip-10-147-4-33.ec2.internal.warc.gz"}
the first resource for mathematics Streamline diffusion methods for the incompressible Euler and Navier- Stokes equations. (English) Zbl 0609.76020 The authors extend the streamline diffusion method, which is a finite element method for convection-dominated convection-diffusion problems, to the time-dependent two-dimensional Navier-Stokes equations for an incompressible Newtonian flow in the case of high Reynolds number and also the limit case with zero viscosity, the Euler equations. The method for the Euler equations is based on using the stream function- vorticity formulation of the Euler equations. Two methods are considered for the Navier-Stokes equation: one method using a velocity-pressure formulation, and one method using a velocity-pressure-vorticity formulation. 76D05 Navier-Stokes equations (fluid dynamics) 65N30 Finite elements, Rayleigh-Ritz and Galerkin methods, finite methods (BVP of PDE) 35Q30 Stokes and Navier-Stokes equations
{"url":"http://zbmath.org/?q=an:0609.76020","timestamp":"2014-04-16T16:32:29Z","content_type":null,"content_length":"21855","record_id":"<urn:uuid:f428b0fd-4723-482d-9647-b15118a3d66c>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00426-ip-10-147-4-33.ec2.internal.warc.gz"}
Using NBA Statistics for Box and Whisker Plots In this lesson, students use information from NBA statistics to make and compare box and whisker plots. The data provided in the lesson come from the NBA, but you could apply the lesson to data from the WNBA or any other sports teams or leagues for which player statistics are available. Students will make 3 box and whisker plots for sets of data about basketball players. They will make 1 box and whisker plot for the players’ weights, and 2 box and whisker plots for height. One will include the tallest player, and one will not. The effects of changing one piece of the data will be addressed. Students can work in groups or pairs throughout this activity, but make sure that they all record their own information on their own activity sheet. Background Knowledge Students should be familiar with interpreting and constructing a box and whisker plot. Use the first example on the activity sheet, in which teh weights of teh players are analyzed as a warm up or a whole-class activity. The concepts of minimum, maximum, median, upper quartile and lower quartile may need to be reviewed. Students may find it helpful to use graphing calculators to display the box and whisker plots, if they are available. Instructions for graphing on the TI-83/84 are available here Gathering Data Students gather data to complete 2 tables, but make 3 box and whisker plots on the activity sheet. Box and Whisker Activity Sheet Explain to students that, for this lesson, we will only gather data for the team members who have numbers. Professional teams have practice players who don't have numbers, but we won't be using them for statistics in this lesson in order to keep the sample size down. • Internet Option: Tell students that they can look up the roster of the Houston Rockets at the official page, www.nba.com/rockets/index_main.html Have students record the names, weight, and height of the players who have numbers. • Non-Internet Option: Give students a copy of the Houston Rockets roster. Have them record the data for the players who have a number. For each of the numbered players on the Houston Rockets, write down their name and weight on the activity sheet. Find the minimum, maximum, lower quartile, upper quartile and median for the numbers. Show students how to construct a box and whisker plot from the data. If you are using TI-83/84 these instructions can help. The box and whisker plot students generate should resemble this: Next, students should gather data on the height of the numbered players. The heights are given in feet and inches and need to be converted to inches. The conversion formula is the number of feet times 12 plus the number of inches. Check that students know how to do this by asking them to convert 6’8”, 5’6”, and 7’3” into inches. Write an example on the board for studenst to use as a reference. This will help ensure that they focus on the constructing and analying of a box and whisker plot rather than on converting the player’s height. Ask students to record the height of each player in inches. Ask students to check their answers with a partner, and to check with you when they think they are finished. Monitor that students are recording the heights properly. Consider keeping a list of converted heights handy to give it to or read to any student who is struggling. Analyzing the Data Check the box and whisker plots that the students have made. The first height graph (Question 3 on the Activity Sheet) should include all of the "numbered" players. Make sure students record the minimum, maximum, lower quartile, upper quartile, and median. Before students move on to Question 4, ask them to compare their first plot with their neighbors’ plots to see if they agree on what the plot should look like. Have a plot ready to show if there is unresolved disagreement. This will allow you to be sure that all students have constructed the plot properly. The aim here is to compare 2 plots, so accuracy is important. Without accurate plots, no analysis can occur. The second box and whisker plot of heights (Question 4) excludes the height of Yao Ming (the tallest player). Again, make sure students record the minimum, maximum, lower quartile, upper quartile, and median. Students are to compare the 2 height plots and then write about what changed and what stayed the same. They need to identify which statistics changed and explain why some of the statistics changed while others did not. If a group finishes early, ask them to predict what happens to the box and whisker plot for players’ weights when Yao Ming is removed. Ask them if they think they will have the same observations as they did for the players' heights. Have them check their predictions by constructing the additional plot. When students have finished, lead a whole-group discussion to read and read and answer Questions 5 to 8 on the Activity Sheet. Have students explain the changes they observed. Record and display their specific reasons to help the class critique the reasoning of others. Talk about any misconceptions and emphasize why changes occurred or did not occur. You may need to add your own reasons to the list if students are not coming up with valid reasons on their own. Ask the whole class to write down what they think might happen to the mean height when Yao Ming is removed from the data. After students have predicted the changes, ask them to calculate the mean height with and without Yao Ming. Students should find that the median doesn't change, but the mean changes drastically when Yao Ming is excluded. 1. Ask students to use the Houston Rockets data to make weight box and whisker plots with and without Yao Ming. Then ask the same questions as used in the activity. 2. Make 2 box and whisker plots for the Denver Nuggets. In this case they could do one that includes Chucky Atkins (the shortest player on the team)and one that does not. Then ask the same questions as used in the activity. 3. Make 2 height box and whisker plots for an NBA team of the students’ choice. Answer the same questions as used in the activity. 1. Students could use another team’s roster and eliminate the tallest or shortest player as suggested in the Assessment Options section. 2. Ask students to use the Rockets data again to make a new plot, but this time eliminate player(s) with the median height. What differences do they observe between the plots? 3. If Internet access is available, students could research to determine the shortest player in the NBA, and then find that player’s roster. Questions for Students Use these questions to compare the height plots. 1. What happened to the medians? Explain why. 2. What happened to the maximums? Explain why. 3. What happened to the first and third quartiles? Explain why. 4. What happened to the mean? Explain why. 5. How does the plot change if the shortest player is removed? 6. Suppose the height of a player near the middle of the ordered list is removed instead of Yao Ming. How will the statistics change? 7. What effect does Yao Ming have on the range and the mode? 8. Suppose the heights of Yao Ming and just 4 other numbered players are used to make a box and whisker plot. What effect does removing Yao Ming from the data have on the plot? Teacher Reflection • How effective was it to have students compare their plots with each other to determine if there was agreement on the shape of the plot? What can you do to make this strategy more effective so students don’t rely on you as the authority? • What student misconceptions did you anticipate, and how did you address them? • What advantages, if any, are there in using a graphing calculator with this lesson? • How might you use a lesson like this be teach students about other types of graphical representations? • Did you find it necessary to make adjustments while teaching the lesson? If so, what adjustments did you make? Were they effective? Learning Objectives In this lesson, students will: • Collect data on the height of the Houston Rockets' players • Create box and whisker plots • Compare and analyze different box and whisker plots Common Core State Standards – Mathematics Grade 6, Stats & Probability • CCSS.Math.Content.6.SP.B.4 Display numerical data in plots on a number line, including dot plots, histograms, and box plots.
{"url":"http://illuminations.nctm.org/Lesson.aspx?id=2643","timestamp":"2014-04-19T04:19:50Z","content_type":null,"content_length":"76578","record_id":"<urn:uuid:470eae5d-e635-434e-9756-4d6a1d556cb7>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00654-ip-10-147-4-33.ec2.internal.warc.gz"}
Growth problem? Any help would be greatly appreciated! Hello, camjenson! Quote: A colony of people grows at a rate directly proportional to the size of the population. The colony triples every two hours. 1. (a) Write differential equation that describes the colony's growth. . . (b) Find the general solution. $(a)\;\frac{dP}{dt} \:=\:k\!\cdot\! P$ (b) We have: . $\frac{dP}{P} \:=\:k\,dt$ Integrate: . $\ln P \:=\:kt + c \quad\Rightarrow\quad P \:=\:e^ {kt+c} \:=\:e^{kt}\!\cdot\! e^c} \:=\:e^{kt}\cdot C$ Hence: . $P \:=\:Ce^{kt}$ When $t = 0,\;P = P_o$, the initial population. So we have: . $P_o \:=\:Ce^{0} \quad\Rightarrow\quad C \,=\,P_o$ Therefore, the equation is: . $P \;=\;P_oe^{kt}$ Quote: 2. What's the value of the constant of proportionality? The population triples every two hours. When $t = 2,\;P = 3P_o$ We have: . $P_oe^{2k} \:=\:3P_o \quad\Rightarrow\quad e^{2k} \,=\,3 \quad\Rightarrow\quad 2k \,=\,\ln3$ . . . . . . . . $k \:=\:\tfrac{1}{2}\ln 3 \quad\Rightarrow\quad k \:=\:\ln\left(3^{\frac{1}{2}}\right) \quad\ Rightarrow\quad \boxed{k \:=\:\ln\left(\sqrt{3}\right)}$ Quote: 3. At t = 9 hours, the population is 800 million, what is the initial population? The equation is: . $P \;=\;P_o\,e^{\frac{1}{2}\ln(3)\ cdot t} \;=\;P_o\left(e^{\ln 3}\right)^{\frac{1}{2}t}$ And we have: . $P \;=\;P_o\!\cdot\!3^{\frac{1}{2}t}$ When $t = 9,\:P = 800,\!000,\!000.$ . . $P_o\!\cdot\!3^{4.5} \:=\:800,\!000,\!000$ . . $P_o \;=\;\frac{800,\!000,\!000}{3^{4.5}} \;\approx\; 5,\!702,\!225$
{"url":"http://mathhelpforum.com/calculus/207299-growth-problem-any-help-would-greatly-appreciated-print.html","timestamp":"2014-04-19T18:28:20Z","content_type":null,"content_length":"9199","record_id":"<urn:uuid:63985a3f-54e4-4390-acb3-71de3a6260d5>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Check the hypotheses of Rolle’s Theorem and the Mean Value Theorem and find a value of c that makes the appropriate conclusion true. Illustrate the conclusion with a graph. f(x) = x^2 + 1, [2,-2] • one year ago • one year ago Best Response You've already chosen the best response. Well, to satisfy rolle's theorem, you must satisfy the 3 conditions (you can find them in your book - if not, I can write them down for you if you don't have access to a book). Now to find the value of "c" using mean value theorem, you must start by finding the derivative of the function I'll leave you to find it's derivative as it is quite simple for this function. Basically, it must follow the first two conditions of rolle's theorem to apply the formula Then, plug it in to the formula: \[f'(c) = \frac{ f(b)-f(a) }{ b-a }\] on the given interval, which for your case it's -2<x <2 Now, set your derivative equal to that. and solve for "c" pick the value which matches the solution in the given interval. if the value falls outside of the given interval then the solution is Best Response You've already chosen the best response. so find the derivative of x^2 + 1? Best Response You've already chosen the best response. Yes. Find the derivative. Use the formula f'(c) = to solve for "c". Best Response You've already chosen the best response. okay got you thanks Best Response You've already chosen the best response. You don't need to check first two conditions since you know that your functions IS in fact continuous on the given interval and differentiable. So with that being said, best of luck. Best Response You've already chosen the best response. @zepdrix can u help? Best Response You've already chosen the best response. i did it but i think i got it wrong Best Response You've already chosen the best response. f'(c) = f(-2) - f(2)/ -2 - 2 so f(-2) -2^2 + 1 = -3, f(2) 2^2 + 1 = 5. So the slope will be 2 because -3 - 5/-2-2 = -8/-4 which is 2. So To c : f'(x) = 2x, f'(c) = 2c = 2 2c = 2 so c will equal one (1) but when i checked (a< c <b ) (2, 1 , -2) it doesn't make sense because 1 is not less than -2 :/ Best Response You've already chosen the best response. @abb0t @zepdrix @experimentX ? Best Response You've already chosen the best response. pick any two points ... on two sides of 0, you see that it satisfies the Rolle's theorem. Best Response You've already chosen the best response. Did i go wrong when i found f'x? and find the f prime of c? Best Response You've already chosen the best response. equate it to zero ... and you get x=0 Best Response You've already chosen the best response. your graph is symmetric f(-2) - f(2) should be zero ... probably you made mistake somewhere. Best Response You've already chosen the best response. okay so c = to 0 right? i made a mistake in putting -2 and 2 ? Best Response You've already chosen the best response. @zepdrix are u there? Best Response You've already chosen the best response. @wio can u help Best Response You've already chosen the best response. okay so its 0? Best Response You've already chosen the best response. yes yes it is .. Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/513a5d99e4b029b0182aab15","timestamp":"2014-04-18T14:11:37Z","content_type":null,"content_length":"71484","record_id":"<urn:uuid:b460b62a-2c12-4420-8545-a29394b94e39>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
Pages: 1 2 Post reply My lesson review. Need help checking if answers are correct and getting the right answers 1. If a hexagon has a side of 3 units, what is the area of the hexagon? 3^2 x 6 / 4 x tan(180/6) 54 / 4 x tan(180/6) 54 / 2.3094 Answer: 23.38 2. If a hexagon has an area of 100 units, what is the length of one side? 100= 3√3 / 2 * s² | s = side s^2 = 100 / 3√3 / 2 s^2 = 100 / 2.598076 s^2 = 38.490021 Answer: s = 6.204032 3. If a hexagon has a radius (center to point of angle) of 6, what is the side of the hexagon?**need help 4. If a hexagon has a radius (center to point of angle) of 6, what is the area of the hexagon? Finding the area of ONE triangle: r^2 sqrt3 /4 6^2 sqrt3 /4 36 x sqrt3 /4 36 x .433021 a = 6 x 15.588756 Answer: 93.53 5. If a hexagon is resting on a flat side, and has a total height of 18, what is the length of each side of the hexagon?**need help 6. If a hexagon is resting on a flat side, and has a total height of 18, what is the area of the hexagon?**need help 7. Problem solver (worth 4 points): Come up with a way to find the area and volume of a football. Include in your answer a way to acquire any necessary measurements without cutting or otherwise destroying the football. Also include all necessary formulas to implement your idea. (You don't need to find actual numbers, just outline the method in step by step detail--think of all the measurements you'll need to acquire and how you'll get them.) **i have an idea for this but would like to post it after finishing up #1 - #6** Last edited by demha (2013-09-24 06:31:50) "The thing about quotes on the Internet is you cannot confirm their validity" ~Abraham Lincoln Re: Review demha wrote: My lesson review. Need help checking if answers are correct and getting the right answers 1. If a hexagon has a side of 3 units, what is the area of the hexagon? 3^2 x 6 / 4 x tan(180/6) 54 / 4 x tan(180/6) 54 / 2.3094 Answer: 23.38 2. If a hexagon has an area of 100 units, what is the length of one side? 100= 3√3 / 2 * s² | s = side s^2 = 100 / 3√3 / 2 s^2 = 100 / 2.598076 s^2 = 38.490021 Answer: s = 6.204032 3. If a hexagon has a radius (center to point of angle) of 6, what is the side of the hexagon?**need help Assuming it is a regular hexagon, the radius is equal to the side of the hexagon. 4. If a hexagon has a radius (center to point of angle) of 6, what is the area of the hexagon? A = 4(PI)r^2 A = 4(PI)6^2 A = 4(PI)36 A = 144(PI) Answer: 144(PI) or 452.389 See previous question. 5. If a hexagon is resting on a flat side, and has a total height of 18, what is the length of each side of the hexagon?**need help Split the hexagon into 6 equilateral triangles. You should notice that the height of the hexagon is twice the height of a triangle. 6. If a hexagon is resting on a flat side, and has a total height of 18, what is the area of the hexagon?**need help See previous question. 7. Problem solver (worth 4 points): Come up with a way to find the area and volume of a football. Include in your answer a way to acquire any necessary measurements without cutting or otherwise destroying the football. Also include all necessary formulas to implement your idea. (You don't need to find actual numbers, just outline the method in step by step detail--think of all the measurements you'll need to acquire and how you'll get them.) **i have an idea for this but would like to post it after finishing up #1 - #6** I would have to think about this one, but, as you said, we'll leave it for after the questions above are solved. Last edited by anonimnystefy (2013-09-24 06:37:24) The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Review Hi demha; In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Review 3. If a hexagon has a radius (center to point of angle) of 6, what is the side of the hexagon? Answer: 6 If the radius is 6, so are the sides. NOTE: i have edited my first post for #4 but it seems as if you quoted my text BEFORE I edited them. See my new answer: 4. If a hexagon has a radius (center to point of angle) of 6, what is the area of the hexagon? Finding the area of ONE triangle: r^2 sqrt3 /4 6^2 sqrt3 /4 36 x sqrt3 /4 36 x .433021 a = 6 x 15.588756 Answer: 93.53 "The thing about quotes on the Internet is you cannot confirm their validity" ~Abraham Lincoln Re: Review Hi demha Thise answers are correct! This means that only Q5 and Q6 are left. Do you want to try it with the hint or do you want me to help a bit more? The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Review No I think I got it. 5. If a hexagon is resting on a flat side, and has a total height of 18, what is the length of each side of the hexagon? A = 9^2 x 6 * tan x (180/6) A = 280.59 (answer for #6) Now I have the area, I will solve for S = side of the hexagon: 3 (√3 /2) x S^2 = 280.59 S^2 = 280.59 x 2 / 3 x √3 S^2 = 561.18 / 5.196152 S^2 = 107.99 Final Answer: S = 10.39 6. If a hexagon is resting on a flat side, and has a total height of 18, what is the area of the hexagon? A = 9^2 x 6 * tan x (180/6) Final Answer: A = 280.59 I'll be doing #7 tomorrow since it will take me a little research and time. Right now I need to get some shut eye. Thank you for helping me out so far! "The thing about quotes on the Internet is you cannot confirm their validity" ~Abraham Lincoln Re: Review Hi demha Those are the correct naswers! And no problem! Glad to be able to help. Just one thing: an easier way to get the side of the hexagon from the height is to set it up like this: Last edited by anonimnystefy (2013-09-24 11:48:20) The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Review 7. Problem solver (worth 4 points): Come up with a way to find the area and volume of a football. Include in your answer a way to acquire any necessary measurements without cutting or otherwise destroying the football. Also include all necessary formulas to implement your idea. (You don't need to find actual numbers, just outline the method in step by step detail--think of all the measurements you'll need to acquire and how you'll get them.) I need to find the area and volume of a football without destroying the ball. To do this, I would first start with measuring the ball from top to bottom. I would mark a line exactly on the middle of the ball. Once I have done that, I'll use paper to cast a mold on exactly half of the ball. I will then take it off to have a half football mold which would look like a cone. With this cone, I would find the measurements required to fit into these two equations: T = (PI)rs + (PI)r^2 - for the cone area V = (1/3)(PI)(r^2)h - for the cone volume One I have gotten the answer for each equation, I will simply just x2 both of them to find the final answer of what the volume and area of the football is. "The thing about quotes on the Internet is you cannot confirm their validity" ~Abraham Lincoln Re: Review hi demha What sort of football are we talking about? A soccer football is roughly a sphere and there are formulas for both the surface area and the volume. An American football is roughly a prolate spheroid and, again, there are formulas for this. (My third picture below is a prolate spheroid.) I wonder what is expected for 4 marks ? There is also a way you can get the volume very accurately with just a large bucket and a measuring jug. Once you have this, you could work back to get the measurements needed for the surface area. Your cone area method includes the area of the base. When you join two cones back together to get the ball you don't need this value. And your 'write-up' must include an explanation of what r, s and h are on your paper version. You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei Re: Review Yes, I believe they are talking about an American Football, not a Soccer Ball. For the first equation: T = (PI)rs + (PI)r^2 | solving for area r = radius s = slant hight For the second equation: V = (1/3)(PI)(r^2)h | solving for volume r = radius h - height Would my method be considered a pretty accurate method? Would YOU accept this method? "The thing about quotes on the Internet is you cannot confirm their validity" ~Abraham Lincoln Re: Review hi demha For the first equation: T = (PI)rs + (PI)r^2 | solving for area The term in red is the area of the base of the cone. When you double your answer you'll end up counting this area (twice) as part of the outside of the ball. r = radius s = slant hight For the second equation: V = (1/3)(PI)(r^2)h | solving for volume r = radius h - height Would my method be considered a pretty accurate method? Would YOU accept this method? The cone has straight slant sides whereas the ball is curved so there's some inaccuracy there. I would be happy if you discussed why your answer is only approximate as part of your analysis. That way you get credit for realising where the inaccuracies lie. I doubt anyone could get the exact answer from a formula because the ball is pumped up and that makes it virtually impossible to say exactly what shape it is. That's why I said the prolate spheroid is also only a rough shape for the ball. Another thing you could do is to find an answer that is certain to be too big, and another that is certain to be too small, and then you know the real answer is somewhere between them. At least that makes any inaccuracy clear. You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei Re: Review I am starting to get confused as to what methods to use to get the area and volume of the football. Now that I see it, the Prolate Spheroid does resemble the football best. I have never done measurements based on this shape and therefore don't have enough knowledge on how to get the answers (although they might not be accurate, but close). I have found a picture online and came up with another idea. Look at the picture below and see how it is cut up into 4 triangles similarly to what you could do with a perfect circle. This is my idea on how to get the answers. I would trace the football in a piece of paper and similar to the image, I would draw two lines forming width and height. I would then use measurements finding the slant height, height (pink line as seen on image) and width(green line as seen on image) of the football. Am I going the right way with this so far? Kind of tough for me to figure out what to do. I would appreciate some help understanding what to do and how to do it. Last edited by demha (2013-09-25 02:18:06) "The thing about quotes on the Internet is you cannot confirm their validity" ~Abraham Lincoln Re: Review Did you see my hint in post #3? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Review Ah I see, that is for the volume, yes? I see it as: V f = 4/3(PI) x ab^2 Not sure what the f is, but a = area and b = base, correct? "The thing about quotes on the Internet is you cannot confirm their validity" ~Abraham Lincoln Re: Review a is half of the distance between the 'ends'. This is called the major axis. b is half of the length of the minor axis. (how fat it is in the middle) There is a device called a pair of calipers that would allow you assess these two lengths. You put the points at the end of the legs on the ball and then carefully move the calipers across to a The formula for the surface area is much more complicated. You can find it here: Fortunately, these is a site that will do the calculation for you: A spheroid is a special case of an ellipsoid with b = c. For your course, I think it would be reasonable to say you would use the on-line calculator. You have spent quite a bit of time researching ways to carry out the task. I suggest you write up all of them, saying which you think would be easiest to do and which would be most accurate. That way you ought to get top marks. You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei Re: Review Oh alright, I see the images on the Wikipedia. For the Surface Area. For the Volume. Two questions: First, what does the letters 'e' and 'c' stand for? Second, so after finding the major axis (a) and the minor axes (b), I can do the equation? "The thing about quotes on the Internet is you cannot confirm their validity" ~Abraham Lincoln Re: Review Not sure what the f is, but a = area and b = base, correct? Sorry, mathematicians love to confuse people with subscripts and then superscripts. When my brother tried to teach me Tensor Calculus with all those sub and superscripts, I quit math and ran for public office. That is read V sub f, f stands for football so is just a label or name. It means volume of the football. a and b are exactly what is in your drawing and mine, they do not stand for area and base. They are axes. But this is all covered in Bob's In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Review Ok, let me do #7 again. I don't need to find EXACT answers, just find what I think may be the best method to find the area and volume of a football. I need to get the area and volume of the football. The shape that resembles the ball most would be the Prolate Spheroid. I have two equations that can give me the volume and area of the spheroid. They may not be entirely accurate but they can get me close. To get the area, I first must do: e^2 = 1 - a^2/b^2 I need to solve for e. a = major axis and b = minor axis and c = b. To find the two axis, I will need to use a tool called caliper. With the "legs" of this tool, I can get the size of the axis which I will then set beside a ruler to find the measurements. Now once I have finished the equation above, I will then use this final equation which will give me the true answer (or at least the most PSA = 2(PI)a^2 (1 + c/ae[sin^-1] e) PSA means Prolate Spheroid Area Now that I have my self the area, I will now get the volume using this following equation: V = (4[PI]/3)a^2 x c OR V = 4.19 x a^2 x c Now do you think this is ok? If not, what am I missing here? Also, do you think I should send in my previous method with this one too? Last edited by demha (2013-09-25 05:37:08) "The thing about quotes on the Internet is you cannot confirm their validity" ~Abraham Lincoln Re: Review Yes, I would include first the cone method, then say it may lack accuracy because .... (give reason here) .... and go on to the spheroid method. For this you might as well replace all the 'c' instances with 'b' as it looks like you don't know whether to use c or b. Why might this still not be entirely accurate? After all that you ought to get full marks (well I would give you full marks so they're very rotten bounders if they don't) You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei Re: Review Alright, I sent in the lesson and now I am waiting for the reply which may take a little time. While that is being checked, I did another lesson and got 17 out of 20. Here are the three I got wrong along with the answers I put: I need to put these points on a graph to answer the questions. 1. Draw a Cartesian Coordinate system for yourself, and then determine whether these sets of points form triangles - I said F Set 1: (0,0), (1,1), (0,1) Set 2: (-1, 0), (1,1), (3,2) Set 3: (0,0), (5,0), (3,5) A set 1 line segment, set 2 Triangle, set 3 line segment B set 1 Triangle, set 2 Triangle, set 3 line segment C set 1 Triangle, set 2 line segment, set 3 Triangle D set 1 line segment, set 2 line segment, set 3 line segment E set 1 Triangle, set 2 line segment, set 3 line segment F set 1 line segment, set 2 Triangle, set 3 Triangle 2. Now do the same with this list, and describe the shape. - I said E (previous question asks the same thing and I said it looks like a rhombus, which was right, this one is telling me to describe it [in other words, describe what a rhombus looks like] but I'm not exactly sure which triangle would best describe it) (0,-1), (0,3), (1,1), (-1,1) A A square B Two acute angles that meet at the acute angles CA triangle D Two right triangles that meet at the right angles E A rhombus F Two obtuse triangles that meet at the hypotenuse 3. (-1, 0), (2, 0), (2, 4), and (-1, 4) - I said B (this question is asking what shape do these points make. I said square but I guess this is more of a RECTANGLE so I would say A) A a rectangle B a square Ca parallelogram D a rhombus E a triangle F a line segment "The thing about quotes on the Internet is you cannot confirm their validity" ~Abraham Lincoln Re: Review hi demha 1. Draw a Cartesian Coordinate system for yourself, and then determine whether these sets of points form triangles - I said F Set 1: (0,0), (1,1), (0,1) Set 2: (-1, 0), (1,1), (3,2) Set 3: (0,0), (5,0), (3,5) A set 1 line segment, set 2 Triangle, set 3 line segment B set 1 Triangle, set 2 Triangle, set 3 line segment C set 1 Triangle, set 2 line segment, set 3 Triangle D set 1 line segment, set 2 line segment, set 3 line segment E set 1 Triangle, set 2 line segment, set 3 line segment F set 1 line segment, set 2 Triangle, set 3 Triangle I could do a diagram, but I know you like to work these out for yourself. When I did this one, I accidentally put one point in the wrong place (swapped the x and y round) and that alters the question completely. So my advice is, do a new graph and check your points very carefully. 2. Now do the same with this list, and describe the shape. - I said E (previous question asks the same thing and I said it looks like a rhombus, which was right, this one is telling me to describe it [in other words, describe what a rhombus looks like] but I'm not exactly sure which triangle would best describe it) (0,-1), (0,3), (1,1), (-1,1) A A square B Two acute angles that meet at the acute angles CA triangle D Two right triangles that meet at the right angles E A rhombus F Two obtuse triangles that meet at the hypotenuse I can see why you chose a rhombus for this. But I think they intend you to join the points together in the order given. Try this and I think you'll see that another answer is then visible. 3. (-1, 0), (2, 0), (2, 4), and (-1, 4) - I said B (this question is asking what shape do these points make. I said square but I guess this is more of a RECTANGLE so I would say A) A a rectangle B a square Ca parallelogram D a rhombus E a triangle F a line segment For a square all the sides must be equal. Your revised answer, A, is correct. You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei Re: Review Hi Bob The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Review Hi Anonimnystefy, It was really tempting to click on that spoiler, but I have resisted... for now. Hi Bob, I did #1 again. First Set looks like a small triangle. Second Set looks like a line segement. Third Set looks like triangle. My new answer would be C. "The thing about quotes on the Internet is you cannot confirm their validity" ~Abraham Lincoln Re: Review demha wrote: My new answer would be C. Yes, that's what I think too. Stefy wrote: hidden answer Yes, I do. And thank you for the implied compliment. You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei Re: Review 2. Now do the same with this list, and describe the shape. - I said E (previous question asks the same thing and I said it looks like a rhombus, which was right, this one is telling me to describe it [in other words, describe what a rhombus looks like] but I'm not exactly sure which triangle would best describe it) (0,-1), (0,3), (1,1), (-1,1) A A square B Two acute angles that meet at the acute angles CA triangle D Two right triangles that meet at the right angles E A rhombus F Two obtuse triangles that meet at the hypotenuse I would say for this one - F By the way, my teacher replied for the previous math lesson and got 10.000! "The thing about quotes on the Internet is you cannot confirm their validity" ~Abraham Lincoln Post reply Pages: 1 2
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=285307","timestamp":"2014-04-20T08:52:15Z","content_type":null,"content_length":"59051","record_id":"<urn:uuid:a487c783-c6ea-41a1-8bd9-9276e3401259>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
Are the math questions on the GMAT extremely difficult and complex? The "math" section of the GMAT (Graduate Management Admissions Test) is officially known as the Quantitative section. It consists of 37 multiple-choice questions covering both Problem Solving and Data Sufficiency. Problem Solving questions are mostly word problems. They test your ability to solve mathematical problems involving arithmetic, algebra, and geometry and word problems by using problem-solving insight, logic, and applications of basic skills. The basic skills necessary to do well on this section include high school arithmetic, algebra, and intuitive geometry — no formal trigonometry or calculus is necessary. The exam tests these skills as well as logical insight into problem-solving situations. Occasionally, questions refer to a graph, chart, or table, so you should understand and know how to derive information from them. Data Sufficiency questions don't necessarily require you to calculate a specific mathematical answer. You must decide if the data given in the statements is sufficient to answer the question. This section tests your ability to analyze a problem; to recognize relevant or irrelevant information in determining the solution of that problem; and to determine when you have sufficient information to solve that problem. You use the data given in the statements, plus your knowledge of mathematics and everyday facts (such as the number of days in July or the meaning of counterclockwise). Correctly answering these questions requires competence in high school arithmetic, algebra, and intuitive geometry. You also need mathematical insight and problem-solving skills. No advanced mathematics is required.
{"url":"http://www.cliffsnotes.com/cliffsnotes/test-prep/are-the-math-questions-on-the-gmat-extremely-difficult-and-complex","timestamp":"2014-04-18T08:37:31Z","content_type":null,"content_length":"84609","record_id":"<urn:uuid:9b72e054-6b9e-43c0-aa85-983e6e5851ee>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
Varsity Numbers: Six Missing Points Most Recent FO Features Want to see what good middle linebacker play looks like? Watch this Alabama prospect work. Most Recent Extra Points 19 Sep 2008 Varsity Numbers: Six Missing Points by Bill Connelly (Ed. Note: A few of the numbers below have been slightly changed due to mistakes in the original editing of this article.) Some people like being able to take a car apart piece by piece and put it back together, knowing where every part goes and accounting for everything (aside from a couple leftover screws or something). I'm not one of those people. I don't like living up (or down) to stereotypes, but there's way too much geek in me for that. And not to generalize or anything, but if you're reading this site on a daily basis, odds are decent that there's too much geek in you too. But what about taking a football game apart and putting it back together? Awesome, right? It can be done on the computer, you don't get black gunk on your hands, you can do it while watching TV ... win-win situation. The thought behind my EqPts measure from last week (and therefore the PPP and S&P measures as well) is only one part of scoring points. It's the most important part by all means, but there are other factors involved -- namely, turnovers and special teams (and luck, but we're not measuring that yet -- consider that the leftover screws). Is it possible to assign a point value to every play -- even "special event" plays like kicks and turnovers -- and piece together the score of a game? Let's find out. We're going to explore the point values of turnovers, special teams, and penalties, but a couple of numbers should be noted right up front. • Actual Points Scored Per Game in 2007: 55.34 • EqPts Per Game: 49.31 So before we go delving into these other categories, it should be noted that we're pretty close already. Do turnovers, special teams and penalties account for those missing six points? In my last column, I referenced a method FO used to assign a point value to turnovers. I also mentioned that, as soon as I got rolling with my own data entry, I stopped looking at what others in the football stat world had done because I wanted to see what I could come up with on my own. Well, what I came up with turned out to be pretty damn similar. Again, the only difference is that the point values I ascribe to a play are based on the likelihood that a team is going to score on a particular drive; FO's work focused on where the next points were going to come from, on that drive or another one. In just about any football game, you'll see a reference to turnover margin, or maybe points off of turnovers. But it doesn't take in-depth thinking to realize that not all turnovers are created equal. If a running back fumbles on his opponent's 1-yard line, that's a huge turnover because his team had a high level of expected points, and he threw them away. And if he fumbles on his own 1, that's also huge because it hands his opponents a high level of expected points. And if he fumbles on his opponent's 1, and it's returned for a touchdown, that's doubly huge -- it cost his team quite a few points and handed his opponents a touchdown. But in turnover margin, all three of those fumbles count the same as if some backup quarterback fumbled at midfield on the last play of a 49-7 game. It seems clear that, as FO has covered in the past, counting the significance of two values -- the team's field position when the turnover happened, and the opponent's resulting field position -- gives you a much better view of a turnover's true costliness. And that's what we're going to try to do. Let's look at two turnovers: • Turnover 1: Team A is about to score. Their running back carries the ball inside the 1, then fumbles. The ball bounces into the arms of a safety, who carries it 100 yards for a touchdown. • Turnover 2: Team A's quarterback fumbles at his 45-yard line, and Team B recovers. Team B then immediately goes three-and-out and punts the ball back to Team A. Using my numbers, Turnover 1 was worth 12.62 points (5.62 points for being at the opponent's 1, 7.00 points for being returned for a touchdown). Turnover 2 was worth 4.26 points (1.92 points lost/ prevented, plus 2.34 points given/taken). Is that not a much more accurate read of which turnover truly impacted the result of the game and which did not? So looking at these point readings can give us a much more accurate feel for teams' "Turnovers = Turnaround" potential in 2008*. Certain teams like Hawaii, Kansas, and Middle Tennessee benefited greatly from turnovers (the Turnover Points Margin solidifies that even further than Turnover Margin) and will almost certainly be due a turnaround in 2008. (Then again, Middle Tennessee just beat Maryland, so what do I really know?) * Pretty sure Phil Steele has copyrighted "Turnovers = Turnaround" at this point, so I should probably credit him just to be on the safe side. Also, through all of these numbers, realize this: I also count botched punts/field goals as turnovers, so my Turnover Margin figures will likely be different than the official NCAA stats. One other thing to remember about turnover numbers is that the net gain is 0. Turnovers produce points for one team and against another. Special Teams So now it's time to establish point values for special teams. Leaving PATs out of it for now, there are three major special teams categories (and a fourth minor one): Field goals, punts, kickoffs, and (here's the minor one) free kicks. Let's attack them one at a time. Field Goals Figuring out what to do about field goals was by far the easiest of these categories. I sorted field goals by distance in five-yard increments (18 to 22 yards, 23 to 27, 28 to 32, etc.), looked at the percentage made in each group, and multiplied the percentage by three (the value of a successful field goal) to determine the expected number of points from each kick. Here's what I found: Expected points by field goal distance FG Range (yards) Average percentage Expected points 18 to 22 yards 91.4% 2.74 23 to 27 yards 88.1% 2.64 28 to 32 yards 80.3% 2.41 33 to 37 yards 69.4% 2.08 38 to 42 yards 67.1% 2.01 43 to 47 yards 58.1% 1.74 48 to 52 yards 45.6% 1.37 53 to 57 yards 35.0% 1.05 58-plus yards 20.0% 0.60 So with that, we can treat every field goal like an addition or loss of points. For instance, if you miss a 25-yard field goal, it's a loss of 2.64 points. If you make it, it's worth 0.36 points. That may not seem like a lot, but you have to remember that the team has been adding (and possibly subtracting) points all the way up the field. To get to the opponent's 8-yard line, they've probably earned at least somewhere in the neighborhood of 2-3 EqPts, so the 0.36 points seems a lot more reasonable in that regard. The field goal idea above was something of a no-brainer for me, but for punts, kickoffs, and free kicks, I had to toss around a few different ideas. Here's what I did (and this applies roughly to all • Take the receiving team's point value for the line of scrimmage of the punt. For example, you're punting from your 20-yard line. If your opponent had the ball on your 20-yard line, it would be worth 3.898 points. • Take the point value of where the ball ended up. In the above example, let's say the ball was punted 40 yards (to your opponent's 40) and returned 10 (to the 50). The point value of the 50 is 2.095 points. • Subtract the value of the end point (2.095) from the value of the starting point (3.898). That punt was worth 1.803 points. Got it? So the higher the point total, the better it is for the kicking team. The lower, the better for the receiving team. It's like net punting, only more useful and more confusing. For simplicity's sake, I measured these exactly the same way as I did punts. You kick off from the 30, so that's the first point value in consideration. The second is, naturally, where the ball ends up. I played around with the idea of figuring out the average point value of each kick (for kickoffs that was 1.46) and comparing teams' averages to that (so that about half the teams would be positive, half negative). However, that leads you to the same order of teams, just with different values, so in the end it just became an extra, meaningless step. Free Kicks This was a minor category. Out of more than 141,000 plays in 2007, there were 54 free kicks. They make a difference ... but not really. Very few teams were involved in more than one free kick in 2007. They're measured exactly the same way as kickoffs, only they're from the 20 instead of the 30, but nobody's "per game" totals are going to be much of anything. Special Teams Average Part of the reason I've done all these "points" measures is for predictive purposes, by all means, but I have another motive: I just love ranking things. And I thought that a "special teams points per game" type of measure would be great rankings fodder. However, there's a problem with that: Teams that score a lot are penalized in "per game" rankings because, well, they also kick off a lot. Per-game numbers will serve the purpose of "putting the car together," but I had to find a different idea for ranking special teams units. I did this by adding together the "higher is better" numbers, then subtracting the "lower is better" numbers. So we get something like this: Special Teams Avg. = Kickoff Return Avg. + Punt Av.g + (FG Avg. * 2) - Kickoff Avg. - Punt Return Avg. (I multiplied field goal average by two so that field goals would carry the same weight as kickoffs and punts.) So that leads to averages from No. 1 San Diego State (1.69) to No. 120 Duke (-2.95). That's right, San Diego State had the best special teams unit in the country last year. If only every play were based on special teams. With Special Teams Points Per Game, however, you get a much wider spread. The No. 1 team in the country in per-game terms was Florida International (+8.94), simply because they returned a ton of kickoffs. Next up were San Diego State (+8.49), Syracuse (+7.62), Idaho (+7.32), and Eastern Michigan (+6.98). Worst? Kansas (-9.77 PPG), Ohio State (-9.42), Hawaii (-7.18), West Virginia (-6.46), and Boise State (-6.27). This one's easy. We've got two Penalty Points numbers: Offensive Penalty Points and Defensive Penalty Points. Both numbers are based on obvious concepts: Offensive Penalty Points = EqPts gained from your opponents' defensive penalties – EqPts lost from your own offensive penalties. Defensive Penalty Points are exactly the opposite. On a per-game basis, penalty margins ranged from Kansas (+4.37 per game), San Jose State (+3.87), and UConn (+3.76) at the high end to Florida International (-6.29), NC State (-5.66), and Idaho (-5.17) at the low end. Add it all up, and what have we got? So here's the coolest part. I took the car apart, not knowing what would happen when I attempted to reassemble it, and here's what I got: Average Points Per Game of Various Events Average Turnover Points Per Game* 2.15 Average Penalty Points Per Game* 2.15 Average Special Teams Points Per Game* -0.39 Sum 3.91 Average EqPts Per Game 49.31 Total Projected Points Per Game 53.22 * As a reminder, these are based off of margins. That’s why the numbers are just a bit over or under zero. On average, there are about 2.15 more Offensive Turnover Points per game than Defensive Turnover Points (remember, that number is based off of starting and ending field position); similarly, there are about 2.15 more Offensive Penalty Points per game than Defensive Penalty Points. Meanwhile, Special Teams points trended slightly toward the defensive side of the ledger. Not bad at all. We can account for 53.22 of 55.34 points per game. Only a couple screws here and there are missing. But how are they distributed? Do individual teams' per-game Projected Points averages resemble their actual points? Yes and no. Some teams match up unbelievably well between their actual points and projected points. Navy averaged 39.31 points per game in 2007. Their projected total? 39.27. Texas A&M: 27.92 vs 28.00. Washington: 29.23 vs 29.07. But teams at the extreme ends of the scale saw bigger differences. West Virginia averaged 39.62 points per game but only put up a projected total of 29.70. Kansas' 42.77-point offense only saw 35.86 projected points. And on the low end, Syracuse managed only 16.42 points compared to 21.52 projected points. UNLV's numbers were just as different -- 17.25 points vs. 22.92 projected points. So I'll wrap this up with a couple of questions: 1) What do you think causes the variance at the ends of the scale? 2) What should be done about it? Is it as simple as applying an exponential multiplier, making the high numbers higher and the low numbers lower? If this question can be answered reasonably and accurately, then the world is our statistical oyster. We can look at the specific points in the game most directly tied to wins and losses. We can come up with a reasonable way to account for the massive difference in talent from team to team in college (something that's obviously not as much of a problem in the NFL). We can look at college football in an entirely new way. Which is a lot more fun to me than working on a car. A couple of responses to last week's comments: "Also, I wonder about the fact (if I'm understanding this right) that yards in your own territory are less valuable than yards gained elsewhere on the field. It would seem to me that the ability to get out of the shadow of your own endzone can be especially valuable." There's definitely something to this, though I think some of it comes into play with punting and some of the special teams numbers. If you move the ball from your 1 to your 20, then uncork a 45-yard net punt, that will account for some EqPts that basically serve as a "points prevented" figure. "To make comparisons, you need to adjust for defense. Sort of like the difference between OPS and OPS+, only even more so." Remember last week's S&P measure? I used the OPS+ as a jumping-off point for my S&P+ idea, which I will discuss next week. No "park factors" involved as in baseball, but it is indeed an attempt to place everybody on an even playing field. Hawaii probably did not, indeed, have the No. 3 offense in the country last year, but they did have the No. 3 offensive stats, which is all we've been able to discuss so far. S&P+ will take a stab at the rankings. "Do these numbers mean those offenses are 'good at winning college football games' or 'good at exploiting superiority in college football games?'" A concept we'll look at in a couple weeks (after we've exhausted the '+' concept) is Win Correlations, which simply draws correlations between specific statistical categories and wins/losses, both for college football as a whole and for specific teams. It's a lot of fun -- I based my college football previews (scroll down for conference posts) off it. 12 comments, Last at 29 Dec 2008, 12:36pm by dogstar30 by FourteenDays :: Fri, 09/19/2008 - 3:14pm Isn't 55.33-49.14 equal to 6.19, or is one of those numbers mistyped? by Bill Connelly :: Fri, 09/19/2008 - 3:37pm Misprint. The numbers should be fixed now. Thanks for the good eye. by Dales :: Fri, 09/19/2008 - 3:22pm by Anonymous (not verified) :: Fri, 09/19/2008 - 3:57pm Good statistics are either explanatory or predictive -- they either show why something happened, or predict what will happen in the future. These stats are neither. One of the most important aspects of football analysis is that explanatory stats must take context into account. If you fumble on the 1, and the opponent runs it in for a touchdown, and you're up by 45 with 12 seconds left, the value is ZERO. One thing FO has discovered is that fumbling is a skill, and recovering fumbles is random. So predictive stats must treat all fumbles the same -- someone who fumbles on the 50 yard line is just as likely to do it again as someone who fumbles on either 3 yard line, and just as likely to do it again no matter who recovered (even though that might produce big swings in the actual game). So an explanatory fumble stat is contextual, while a predictive fumble stat is linear with number of fumbles/number of fumbles caused. The fumble stat given here is neither. What's the third category of stats (other than explanatory and predictive)? "Lies"? "Damned lies"? "Red herrings"? Well, the unfortunate point is that the slope being different between actual points per game and EqPts/game is a bit worrying - especially because I'd figure that in general, teams would have higher EqPts/game because there's almost always points left on the table at the end of each half - because each half ends, which you're not taking into consideration. Driving from the 1 yard line to the 50 yard line with 10 seconds left on the clock won't increase your likelihood of scoring on that drive nearly as much as driving from the 1 yard line to the 50 yard line with 10 minutes left. So I'd figure most teams would gain more EqPts/game. So, that's one limitation still - no consideration of clock or end of game. But the reason that EqPts is falling behind true points, I think, is a lot simpler. I don't think you're handling punts right at all. Consider, for instance, a completely inept team. They gain no yardage on their own possession. They always punt the ball, and that punt is always returned for a touchdown. They also never return the ball, and always start on their 20. Each punt return is worth -3.102 points to the punting team (3.898-7, presumedly), and presumedly 3.102 points to the receiving team. Except the receiving team always scores 7 points, so very rapidly, the EqPts will fall behind. A lot. Yes, it's a bit of a contrived example, but I think it's still informative. I think punts would end up a really, really negative play if they were done correctly in this framework: you're sacrificing your own field position, and handing field position to your opponent. So if you're on your 50, the punt immediately gives up 2.095 points. Now, imagine they return that punt all the way to their own 20. You've now handed them the ball such that their point value is 3.898. So that's a change of 5.993 points (5.993 for the receiving team, -5.993 for the punting team). Nearly a touchdown! The point value, as calculated by you for that would be -1.803 for the punting team, and 1.803 for the receiving team. Am I calculating this right? If so, I think the "5.993" point value for that punt return is more indicative. It was probably an 80 yard punt return, and the team went from being in a not-so-terrible position to score (the 50 yard line), to handing fantastic position to their opponent. Anyway, comments appreciated. I could easily be missing something. You know, now that I think about it, a much, much simpler way to put it would've been "what you should be doing is considering punts as turnovers, because that's what they are." The calculation for a turnover that's in the article is exactly the same as what I was saying. by Anonymous (not verified) :: Sat, 09/20/2008 - 10:55am If the punts are mishandled in the model, you will get the kind of discrepency you r observing. Good teams punt less often and are punted tomre often and vise versa by David Lewin (not verified) :: Sat, 09/20/2008 - 11:09am Right now you are not evaluating field goals the same way that you are evaluating the other aspects of special teams. In all other aspects of special teams you are valuing them not with respect to average performance (which is what you do with field goals) but with respect to zero level performance. I don't recommend that you evaluate anything with respect to zero level performance, but the equivalent way to do things for field goals is to just see how many more points the field goal produced than just giving the ball to the other team at the line of scrimmage (which is what you do for by Joseph :: Sat, 09/20/2008 - 4:11pm Here's my opinion on the variance: 1. In college football, you have a lot of uneven matchups (at least on paper--how many times have you seen 14+ point spreads in college? They are EXTREMELY rare in the NFL). In these uneven matchups, you prob. (IMO) have a lot of "big play" TD's. Now, I'm not for sure because I don't know the details of your formula, but my guess is that Big State U scoring 7 TD's against Little Directional State would NOT earn as many EqPts as true points because several big plays, maybe a return TD, etc. 2. On the other end of the scale, the reverse is true--esp. in a blowout, no matter who it involves (and, from your graph, it appears to involve the worst teams--those who don't score many points--and thus lose). Blown-out team will make some EqPts by making a few first downs, but esp. because of failed 4th downs in trying to come back, gets 0 real points. Missing short field goals prob. adds to this "lower end" variance also. 3. Might the other 3 "missing" points be from OVERTIME possessions, as both teams start on the other's 25 yd line? Not sure, but I bet a 25 yd run/pass TD on 1st down wouldn't generate 6/7 EqPts but would on the scoreboard. by bradluen :: Sun, 09/21/2008 - 8:02pm What I *think* you want is a measure of expected points that adds up **exactly** to the number of points scored each game. I'm assuming your system is trying to be more explanatory than predictive. I don't think it'd require much fiddling of your work to get a beta version of this. First, you'd have to choose either "points on particular drive" or "next points", and stick with it - no mixing and matching. Let's say for the sake of argument you choose points on a particular You have a starting expected number of points that's a function of field position. Each play (or penalty), you recalculate that expected number based on the new field position (and, if you really want to do it properly, on down number) (and if you really, really want to do it properly, on game situation - score and time remaining). The difference between the new expectation and the old expectation is the equivalent point value of that play. This is the hard part; if I'm reading you correctly, you've done this already. Continue doing this until the drive ends. If the drive ends in a touchdown, the final expectation is 6.9 something (or whatever, depending on how you decide to handle conversions). If there's a successful field goal, the final expectation is 3. (In that case, the expectation's exact: you know exactly how many points were scored on that drive, so there's no probability distribution.) If the defense scores points, the final expectation is negative. Otherwise, the drive ends in zero points (punt/missed FG/TO that the defense doesn't immediately score from, or end of half), the final expectation is zero. Then a new drive starts and you calculate a new starting expectation for the other team. (If you want to consider special teams and such, you look at the swing in expectations.) Then you'll be able to split up USC's 35 points against Ohio State between (i) each starting field position, (ii) each USC play, and (iii) Rey Maualuga. Lots of things you could do once you reached this stage, like divide up value by player or put retrospective win probabilities on each game, but that's getting ahead. Now, as a first order solution, you simply attribute the swing in expectations caused by a change/end of possession to the kickoff or punt or TO or end-of-half or whatever. Basically do what you're doing for turnovers for all changes of possession. This means some points would be attributed to categories like "change in expected points caused by unsuccessful 4th downs", but these categories may be the most interesting: who wouldn't want to know how "change in expected points on successful 4th down" compares to "change in expected points on unsuccessful 4th down" over a season? Naturally, if this isn't what you want to do, do something else. by bradluen :: Sun, 09/21/2008 - 11:22pm To illustrate what I'm proposing, let's have a look at LSU's opening drive at Auburn. Even if Bill doesn't want to do what I'm suggesting, hopefully this may clarify a few points (if only for me). Approximate expected points on a particular drive as a linear function of field position from 0 to 5 (with 2 bonus points if you get over the goal line). Start of game: Auburn kicks off to LSU 37: LSU expects to get 1.85 points on this drive. You can divide that between the kicker, the returner and the coin toss if you like, but set that aside for LSU takes five plays to get to their 47: LSU ends the drive with 2.35 expected points, so the offense gets credit for 0.5. This can be divided up by play. LSU punts to the Auburn 1: Auburn starts with 0.05 expected points. So there's a loss of 2.35 points by LSU, plus a gain of 0.05 points by Auburn. That's a swing of 2.4 points. But! LSU had really lost those 2.35 expected points *before* they punted: because they failed to complete a third down, and so could no longer expect to score *any* points on that drive, Les Miles insanity notwithstanding. (Also ignoring punt return TDs.) So a system that divides up expected points by play, if it aspires to be less terrible than my linear system above, pretty much has to take down into account. Expected points on drive are a linear function of field position, from 0 to 6 on 1st down, 0 to 5 on 2nd down, 0 to 4 on 3rd down. On 4th down, let's say it's 0 in your own half, more than 0 in your opponent's. So: Start of game: Auburn kicks off to LSU 37: LSU expects to get 2.22 points from this drive. Rush to LSU 43: Expected pts = 2.15, loss of 0.07 (unfair, because my approximation can't tell the difference between 2nd and 10 and 2nd and 4). Pass to Auburn 48, first down: EP = 3.12, gain of 0.97. Rush to 50: EP = 2.5, loss of 0.62. Sack to LSU 47: EP = 1.88, loss of 0.62. Incomplete pass, forced to punt: EP = 0, loss of 1.88. Total contribution to expected points = 2.22 - 0.07 + 0.97 - 0.62 - 0.62 - 1.88 = 0, exactly the number of points scored on the drive. Punt to Auburn 1: Auburn starts with 0.06 expected points. Again, the points LSU expected to score on their drive have already been lost. If you want to know how much to attribute to net punting, you can find "expected points after this punt" minus "average expected points of all punts from your own 47". And so on. Anyway, yeah, next points scored is a better measure. by dogstar30 (not verified) :: Mon, 12/29/2008 - 12:36pm Wouldn't it be better to treat the starting point for kickoffs as the receiving team's 40-yard line (where they would get the ball, by default, if the kick went out of bounds, which is a strategic alternative for the kicking team), or, alternatively, at the average return yard-line (so that kickoffs would have a net effect of zero)?
{"url":"http://www.footballoutsiders.com/varsity-numbers/2008/varsity-numbers-six-missing-points","timestamp":"2014-04-18T01:22:57Z","content_type":null,"content_length":"64724","record_id":"<urn:uuid:c7cd6a8f-a3c7-4a46-b9fa-a0d267d9f2e0>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00337-ip-10-147-4-33.ec2.internal.warc.gz"}
A stone is dropped vertically from the top of a tower of height 40 m. At the same time a gun is aimed directly at... - Homework Help - eNotes.com A stone is dropped vertically from the top of a tower of height 40 m. At the same time a gun is aimed directly at the stone from the ground at a horizontal distance 30 m from the base of the tower and fired. If the bullet from the gun is to hit the stone before it reaches the ground, what is the minimum velocity of the bullet must be, approximately. The stone drops down due to gravity, while the bullet travels due to its firing burst. The greater the velocity of the bullet the earlier (and higher up) it hits the stone. Lowest and latest, it can hit the stone at the moment the latter touches the ground. Time taken by the stone to hit the ground can be obtained from the laws of motion. `40 = 0*t+1/2 *9.81*t^2` `rArr t = 2.855686 s` If the stone is to be hit at the latest instance (which is the lowest point too), the bullet has to travel at least 30 m by this time (assuming the trajectory of the bullet to be linear in short Therefore the required minimum velocity of the bullet = 30/2.855686 = 10.51 m/s. Join to answer this question Join a community of thousands of dedicated teachers and students. Join eNotes
{"url":"http://www.enotes.com/homework-help/stone-dropped-vertically-from-top-tower-height-40-440936","timestamp":"2014-04-21T12:36:02Z","content_type":null,"content_length":"25775","record_id":"<urn:uuid:8207f831-020d-48c7-87f1-bf55f7e3a164>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
Decomposition theorem and blow-ups up vote 4 down vote favorite Yet another question of the form 'How to apply the decomposition theorem?' The example that I am considering ought to have a simple answer, but I'm getting confused and I would appreciate if someone could point out where I'm going astray. The confusing point can be stated briefly, at the end of observation 3. But I'll give an explanation of what I do understand, hoping that this will be helpful to other people and make clear what I am missing. Let $Y$ be a quasi-projective 3-fold with a unique singular point $0 \in Y$ and suppose that the blow-up at $0$ is a resolution $p: X \rightarrow Y$ and the exceptional divisor $p^{-1}(0)=S$ is a smooth projective surface.The goal is to understand the summands of $Rp_{\ast}IC_{X} \simeq \bigoplus_{i} {}^{\mathfrak{p}}\mathcal{H}^{i}(Rp_{\ast}IC_{X})[-i]$, where the decomposition is into perverse cohomology sheaves given by the decomposition theorem. 1. By base change, the fact that $p$ is an isomorphism over the open set $U=Y\setminus 0$ implies that $Rp_{\ast} IC_{X}$ restricted to $U$ is just $IC_{U}$, so ${}^{\mathfrak{p}}\mathcal{H}^{0}(Rp_ {\ast}IC_{X})\simeq IC_{Y} \oplus E$ for some sky-scraper $E$ at $0$. Further, the other perverse cohomology sheaves ${}^{\mathfrak{p}}\mathcal{H}^{i}(Rp_{\ast}IC_{X})[-i]$ must be supported at $0$ and so consist of shifted sky-scrapers. Thus we just have to understand the stalk of $Rp_{\ast} IC_{X}$ at $0$. 2. By base change and the fact that $p^{-1}(0)=S$ a smooth projective surface, we have that the stalk of $Rp_{\ast} IC_{X}$ at $0$ is $H^{0}(S,\mathbb{Q})[3]\oplus H^{1}(S,\mathbb{Q})[2]\oplus H^{2} (S,\mathbb{Q})[1]\oplus H^{3}(S,\mathbb{Q})\oplus H^{4}(S,\mathbb{Q})[-1]$. Since $IC_{Y}$ is concentrated in degrees $-3,-2,-1$ with respect to the standard t-structure, $E \simeq H^{3}(S,\ mathbb{Q})\otimes \mathbb{Q}_{0}$ and ${}^{\mathfrak{p}}\mathcal{H}^{1}(Rp_{\ast}IC_{X})[-1] \simeq H^{4}(S,\mathbb{Q})\otimes \mathbb{Q}_{0}[-1]$. Further, there are no higher perverse cohomology sheaves, for degree reasons. 1. By Verdier duality and self-duality of $Rp_{\ast}IC_{X}$, the only other perverse cohomology sheaf in the decomposition is ${}^{\mathfrak{p}}\mathcal{H}^{-1}(Rp_{\ast}IC_{X})[1]$, which must be dual to ${}^{\mathfrak{p}}\mathcal{H}^{1}(Rp_{\ast}IC_{X})[-1] \simeq H^{4}(S,\mathbb{Q})\otimes \mathbb{Q}_{0}[-1]$. I would think that it should look like $H^{0}(S,\mathbb{Q})\otimes \mathbb{Q}_{0}[1]$, but then the degree in which $H^{0}(S)$ sits is off by $-2$ from what appears in observation 2. The only sky-scraper sitting in degree $-1$ is $H^{2}(S)$, but that isn't dual to $H^{4}(S)$ in general. Presumably I am having a problem with applying Verdier duality, but I don't see where the problem lies. Any comments are very welcome. add comment 2 Answers active oldest votes In this example we have $p : X \to Y$ and we may assume, wlog, that $X$ is isomorphic to the total space of the normal bundle to the surface, and $p$ is the contraction of the zero Then, by the Deligne construction, $IC(Y) = \tau_{\le -1} j_* \mathbb{Q}[3]$, where $j : Y^0 \hookrightarrow Y$ is the inclusion of the smooth locus (which is isomorphic to $X^0$ the complement of the zero section in $X$). In order to work this out, we can use the Leray-Hirsch spectral sequence $E_2^{p,q} = H^p(S) \otimes H^q(\mathbb{C}^*) \Rightarrow H^{p+q}(X^0)$ this converges at $E_3$ and we get that the degree 0, 1 and 2 parts of the cohomology of $X^0$ is given by the primitive classes in $H^i(S)$ for $i = 0, 1, 2$. Note that this is everything in degrees 0 and 1, but in degree two the primitive classes form a codimension one subspace $P_2 \subset H^2(S)$. up vote 3 down vote accepted The Deligne construction above, gives us that $IC(Y)_0 = H^0(S)[3] \oplus H^1(S)[2] \oplus P_2[1]$. (This is a general fact: whenever you take a cone over a smooth projective variety, the stalk of the intersection cohomology complex at 0 is given by the primitive classes with respect to the ample bundle used to embed the variety. This follows by exactly the same arguments given above.) Then the decomposition theorem gives $p_* \mathbb{Q} = \mathbb{Q}_0[1] \oplus ( IC(Y) \oplus H^3(S) ) \oplus H^4(S)[-1]$. EDIT: fixed typos pointed out by Chris. Thanks! This was the second example that I've tried to work out and I hadn't realized that the primitive cohomology would be relevant, but now it makes sense. – Chris Brav Mar 15 '10 at 23:18 Just for the record, I think that there are a few minor typos in Geordie's answer. First, we should have $IC(Y)_{0}=H^{0}(S)[3]\oplus H^{1}(S)[2]\oplus P_{2}[1]$ and second, $p_{*} \mathbb{Q}=\mathbb{Q}_{0}[1]\oplus (IC(Y)\oplus H^{3}(S))\oplus H^{4}(S)[-1]$. Also, I think $j: X^{0} \hookrightarrow X$ is meant to be $j: X^{0} \hookrightarrow Y$. – Chris Brav Mar 16 '10 at 1:52 Thanks! I fixed everything you pointed out. – Geordie Williamson Mar 16 '10 at 7:40 add comment I'm pretty sure your intuition is just leading you astray here. The dual to H^4 in the stalk really is H^2. I think the pairing is just pullback your classes to the blowup and intersect them there. The kernel of this is the top degree of the IC sheaf, and the one bit left over is the negative perverse cohomology. Though, this is mostly irresponsible speculation based on my up vote 0 intuition... down vote Thank you for the reply, but I don't follow. Pullback classes of what? Let me elaborate on why I'm surprised that the dual should be H^2. If I have the closed embedding of a point $i: 0 \ rightarrow Y$, then $i_{\ast}=i_{!}$ and so $D_{Y}i_{*} \simeq i_{!}D_{0} s\imeq i_{*}D_{0}$, where $D$ is the Verdier duality functor. This means that if I want to compute the Verdier dual of $i_{*}H^{4}(S)[-1]$, then I can first compute the dual of $H^{4}(S)[-1]$ on the point to get $H^{0}(S)[1]$ (by usual Poincare duality) and then pushforward. What am I missing? – Chris Brav Mar 15 '10 at 21:31 add comment Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/18301/decomposition-theorem-and-blow-ups?sort=votes","timestamp":"2014-04-21T05:02:30Z","content_type":null,"content_length":"60944","record_id":"<urn:uuid:aaeee428-49b8-4578-8e4a-254f88c8b4ba>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00544-ip-10-147-4-33.ec2.internal.warc.gz"}
Mining Maximal Cliques from a Large Graph using MapReduce: Tackling Highly Uneven Subproblem Sizes Svendsen, Michael and Tirthapura, Srikanta (2012) Mining Maximal Cliques from a Large Graph using MapReduce: Tackling Highly Uneven Subproblem Sizes. Publisher UNSPECIFIED. Full text available as: PDF - Requires Adobe Acrobat Reader or other PDF viewer. We consider Maximal Clique Enumeration (MCE) on a large graph. A maximal clique is perhaps the most fundamental dense substructure within a graph, and MCE is an important tool to discover densely connected subgraphs in a graph, with numerous applications in data mining on web graphs, social networks, and biological networks. While MCE is well studied in the sequential case, relatively little is known about eective methods for parallel MCE. We present a new parallel algorithm for MCE, Parallel Enumeration of Cliques using Ordering, PECO. Unlike previous works, which require a post-processing step to remove duplicate and non-maximal cliques, PECO enumerates only maximal cliques with no duplicates. The key technical ingredient is a total ordering of the vertices of the graph which is used to achieve a load balanced distribution of work, and to eliminate redundant work among processors. An important feature of PECO is that the total work of parallel enumeration is equal to the work of a variant of a popular sequential algorithm. We have implemented PECO on the MapReduce framework, and our experiments on a large cloud computing testbed show that the algorithm is able to enumerate maximal cliques in a variety of large real-world graphs of millions of vertices and tens of millions of maximal cliques, and scales well with an increasing number of reducers. A comparison of ordering strategies shows that an ordering based on vertex degree performs the best, improving load balance and reducing total work when compared to the other strategies. Archive Staff Only: edit this record
{"url":"http://archives.ece.iastate.edu/archive/00000623/","timestamp":"2014-04-16T10:12:46Z","content_type":null,"content_length":"9421","record_id":"<urn:uuid:a59d35b7-2e1b-4183-a02d-d32b0eea3ed6>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00266-ip-10-147-4-33.ec2.internal.warc.gz"}
Reply to comment Submitted by Anonymous on October 11, 2010. By financial mathematics you can work anywhere in the field of financial devision as financial mathematician. The cost it depend on the institution where you wanna studying, but to become an real financial mathematician, you must study towards BSC in Applied Mathematics and is 3yrs at any university in RSA, then continue towards Honours in Applied Mathematics to become an professional financial mathematician.By Mr PP Diale from Tzaneen Lephepane.
{"url":"http://plus.maths.org/content/comment/reply/2369/1779","timestamp":"2014-04-18T16:22:35Z","content_type":null,"content_length":"20227","record_id":"<urn:uuid:c329bfa2-02d0-4770-903b-40c2b74cfdd9>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Data format AFTER import into Mathematica from Excel.xls worksheet Replies: 1 Last Post: Nov 7, 2013 12:21 AM Re: Data format AFTER import into Mathematica from Excel.xls worksheet Posted: Nov 7, 2013 12:21 AM Very clear explanation, Alexei. Thank you so much for your effort. Nicholas Kormanik -----Original Message----- From: Alexei Boulbitch [mailto:Alexei.Boulbitch@iee.lu] Sent: Wednesday, November 06, 2013 2:14 AM Cc: nkormanik@gmail.com Subject: Re: Data format AFTER import into Mathematica from Excel.xls I have a large Excel worksheet (200 columns, 4,000 rows). Every column has a unique and carefully chosen title/label. I'm attempting to use it within Mathematica to create a large number of contour plots (1,000) -- using three columns at a time, for x, y, z. The import, I suppose, occurs. At least it appears to have. Here's the question: -- Is Mathematica using my column titles, so that I can refer to them in commands subsequent to import? -- If not, is there any format I can put my data into to begin with so that Mathematica will readily recognize my named columns? I certainly wanted to be getting on with the contour plots (SmoothDensityHistogram), but am having trouble getting my data into the required form. Any help appreciated. Nicholas Kormanik nkormanik at gmail.com Hi, Nicholas, The answer to your question is - yes. Mathematica imports the Excel file as it is with all its numeric and textual elements. In the result you get a nested list. I just prepared a small rectangular Excel table for the example purposes with the first line consisting of the column headings: "Time", " StartTemperature", "Temperature", "Density", " Number". The rest elements are something else including numerical data. I just took a part of my old table with results. That is, I guess, what you need, but much smaller. I cannot, unfortunately show the excel file file here, since the MathGroup only accepts plain text messages. Now, my notebook is in the same directory. This imports the Excel file called "example.xls": lst = Import[NotebookDirectory[] <> "example.xls"] // First The result is the list, and it is shown here: {{"Time", " StartTemperature", "Temperature", "Density", " Number"}, {"18:21:35", "40,000", "39,995", "---", "1"}, {"11:07:13", "40,000", "39,999", "1,006553", "2"}, {"11:08:44", "40,000", "40,000", "1,006638", "3"}, {"11:09:02", "40,000", "39,999", "1,006659", "4"}, {"11:44:21", "40,000", "39,999", "1,007191", "1"}, {"11:45:51", "40,000", "39,999", "1,007164", "2"}, {"11:52:34", "Out of range", "Out of range", "Out of range", If there is a large list to import there are several possibility to check, if you have, indeed, imported it or not: a) Just check this: If you get anything over 0 you have, indeed, imported something. b)Then check this: lst // Last {"Time", " StartTemperature", "Temperature", "Density", " Number"} {"11:52:34", "Out of range", "Out of range", "Out of range", "1"} You will see the first and the last elements of the list. If, say, the first element is too long or nested or both, check this: lst[[1, 1]] And so on, until you get the vision of what did you import. c) For very long lists one can always use the short form: Here it is senseless, though, since this list is already short. OK, so we have got the list, its first line being the line is the column headings. You can address it as follows: {"Time", " Set Temp.", "Temperature", "Density", " Number"} You might wish to somehow format it. You may do it as follows: MapAt[Style[#, 16, Bold, Blue] &, lst, {1}] This returns the same list but the headings line will be formatted according to your wish. Try it. To view it in a form of a table you may use, say, Grid, or TableForm. But this is already another subject to be discussed separately, if needed. Now, to address the individual columns by their names I can offer a simple function as follows: getColumn[lst_List, columnName_String] := Module[{pos}, pos = Position[Transpose[lst], columnName][[1, 1]]; Its first argument is the list in question, and the second one is the string - the name of the column: getColumn[lst, "Temperature"] {"Temperature", "39,995", "39,999", "40,000", "39,999", "39,999", \ "39,999", "Out of range"} Have fun, Alexei Alexei BOULBITCH, Dr., habil. IEE S.A. ZAE Weiergewan, 11, rue Edmond Reuter, L-5326 Contern, LUXEMBOURG Office phone : +352-2454-2566 Office fax: +352-2454-3566 mobile phone: +49 151 52 40 66 44 e-mail: alexei.boulbitch@iee.lu
{"url":"http://mathforum.org/kb/message.jspa?messageID=9318702","timestamp":"2014-04-21T15:15:34Z","content_type":null,"content_length":"18859","record_id":"<urn:uuid:248c5039-a874-4d6e-a0df-90812b061b04>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00494-ip-10-147-4-33.ec2.internal.warc.gz"}
Decentralized Detection in Censoring Sensor Networks under Correlated Observations The majority of optimal rules derived for different decentralized detection application scenarios are based on an assumption that the sensors' observations are statistically independent. Deriving the optimal decision rule in the canonical decentralized setting with correlated observations was shown to be complicated even for the simple case of two sensors. We introduce an alternative suboptimal rule to deal with correlated observations in decentralized detection with censoring sensors using a modified generalized likelihood ratio test (mGLRT). In the censoring scheme, sensors either send or do not send their complete observations to the fusion center. Using ML estimation to estimate the censored values, the decentralized problem is converted to a centralized problem. Our simulation results indicate that, when sensor observations are correlated, the mGLRT gives considerably better performance in terms of probability of detection than does the optimal decision rule derived for uncorrelated observations. 1. Introduction The theory of signal detection and estimation is used in a wide variety of target-detection applications. Different formulations for the target detection problem have been suggested depending on the cost of communication. The classical detection theory considered the canonical detection problem in which a decision is made based on observations present in one central location, that is, centralized detection. The classical detection theory suits applications in which the complete observations are available at one location for decision making, for example, detection of the presence of a target using a radar. Decentralized detection addresses the issue of detection in sensor networks. A typical decentralized detection system with fusion is shown in Figure 1, in which a fusion-center produces an estimate of the state of nature based on the data sent by geographically dispersed sensor nodes. Some preliminary processing of data is carried out at each sensor node. For example, in the canonical decentralized detection problem, each sensor node quantizes the likelihood ratio of its observation before sending it [1]. The performance of decentralized detection systems is suboptimal compared to the centralized systems because the decision maker (fusion-center) does not receive a sufficient statistic. Nonetheless, factors such as communication bandwidth and limited energy motivate the use of decentralized detection systems. Moreover, in systems with a large number of sensor nodes, uncompressed information could overwhelm the fusion-center. Figure 1. Decentralized detection system with fusion center. In battery-powered sensor networks, the most valuable resource is the limited energy available to each sensor. In a scenario where the absence of the target (null hypothesis) is much more likely than its presence (target hypothesis), an alternative formulation of the decentralized detection problem would allow the sensors to not communicate all the time to the fusion center, and thereby conserve energy. In such a case, the sensors are said to censor their observations by not sending observations that fall within a certain criterion. Rago et al. [2] considered a censoring scheme in which sensors either send or do not send some real-valued function of their observation to a fusion center based on a communication-rate constraint. The work in [2] shows that in the censoring scenario with independent sensors' observations, it is optimal (in both the Bayesian and the Neyman-Pearson (NP) sense) for the sensors to not send their likelihood ratios if they fall in a particular interval and that the optimal censoring region is a single interval of the likelihood ratio. In the canonical decentralized detection problem introduced in [1], an optimal decision rule should come up with the optimal quantizer at each sensor and the optimal decision rule at the fusion center. To obtain the censoring thresholds and the optimum decision rule in the censoring formulation, joint optimization is required across the sensors and the fusion center. Appadwedula et al. [3, 4] provided a useful simplification of the censoring interval 3, 4] that setting the lower threshold of the censoring interval to 0 for any false-alarm constraints less than or equal to Most of the work done in the area of deriving optimal rules was based on an assumption that the observations at each sensor node are conditionally independent. Although such an assumption reduces the complexity of analysis noticeably, many wireless sensor network applications experience correlated observations, for example, in target-detection problems in which the sensors are close to each other and are prone to the same noise sources. The effect of correlation on the performance of a decentralized detection system has been explored in the literature. Some of this work was done in [5–9]. The results obtained are often not easy to implement because the observation space cannot be divided into two contiguous portions. Willett et al. [8], for example, derive the optimal sensor rule for a two-sensor system with correlated noise. Their findings show that as the correlation between the sensors' observations increases, the optimal detection scheme cannot use single-interval decision regions at both sensors. In fact, the optimal scheme, if present, would be that either one sensor uses a single-interval decision region and the other sensor's observation is ignored, or both sensors use non-single-interval decision regions and neither sensor is ignored. Furthermore, it was shown in [9] that finding the optimal decision rule at the fusion center requires complete knowledge of the observation statistics. In Section 2, we introduce the problem of detection under correlated observations. We propose a modified generalized likelihood ratio test (mGLRT) as a test implemented at a fusion-center receiving correlated observations. We then show how the mGLRT could be applied in censoring sensor networks. In Section 3, we evaluate the performance of the mGLRT through simulations. We consider two examples, a two-sensor network, and an eight-sensor network. We then find when the mGLRT has a performance similar to that of the optimal centralized test. In Section 4 we consider an example with real data. 2. Correlated Observations and the mGLRT In wireless sensor networks used for detection applications, sensors communicate to detect a certain phenomenon. As mentioned earlier, the sensor observations are sent to a fusion center for decision making. The observations received at the fusion center may have a certain degree of correlation depending on the nature of the communication links between the sensors and the fusion center and the sources of signal and noise. Correlated noise is encountered in different communication scenarios such as wireless communication in fading channels. In the following discussion, we consider the problem of detection using censoring sensors under correlated noise. We introduce the problem by considering a simple sensor network consisting of two sensors communicating to a fusion center. Consider a decentralized detection system consisting of two sensors sending their censored observations to a fusion center in which the final decision is made. The optimal fusion rule in the NP sense would be one that solves the following optimization problem: Finding the optimal solution requires joint optimization for the censoring and the fusion center decision thresholds even with the assumption of independence across sensor observations. With the independence assumption removed, finding the optimal decision rule becomes intractable for the original decentralized detection problem [1, 6–8]. For a two-sensor system, censoring converts the simple binary hypothesis problem to a composite hypothesis problem in which we get four different sensor-output combinations under each hypothesis. These combinations are The optimal NP test in such a case is a uniformly most powerful (UMP) test. However, the UMP test does not exist in this case because the likelihood ratio between the pair of observations depends on which one of the four different sensor-output combinations was received at the fusion center. When the UMP test fails, a common practice in solving composite hypothesis problems is to apply a generalized likelihood ratio test (GLRT) which is known to give a performance close to the optimal rule in a wide range of detection problems when the optimal rule is hard to find [10]. The GLRT uses the following likelihood ratio test (LRT) for the case of two sensors: If none of the sensors' observations are censored, then the optimal NP test in the centralized case is applied. However, if observations get censored, then the ML estimate of each censored observation under both hypotheses is used in the LRT. For a composite hypothesis-testing problem, the generalized likelihood ratio test uses the probability density function (pdf) of the state of nature with the highest posterior probability based on the observations under the two hypotheses in the final likelihood ratio. We use the mGLRT to estimate the values of the censored observations under both hypotheses. The use of mGLRT in the censoring sensors scenario will be illustrated for a two-sensor network. The generalization to an N-sensor network will then be considered. The following assumptions are made throughout the discussion. (i)The sensors censor their observations; that is, they either send their complete observations to a fusion center or send nothing. (ii)The sensors communicate with the fusion center through an ideal noise-free channel. (iii)The noise accompanied with the sensors' observations is Gaussian. Therefore, the notions of statistical correlation and dependence will be used interchangeably [11]. (iv)The fusion center knows the statistics of the sensors' observations. (v)The covariance matrix of the sensors' observations is positive definite. (vi)If all sensors' observations are censored, then decide 2.1. A Two-Sensor Network To demonstrate how the mGLRT can be applied to a censoring sensor network, we begin with a simple two-sensor network. Consider the problem of detecting a mean-shift in Gaussian noise using a censoring two-sensor network. In detection-theory terminology, the problem can be expressed as is assumed to be positive definite and To limit our attention to finding the fusion-center decision rule, the communication-rate constraints are assumed to be identical across the sensors, and so 12] for the statistically independent observations case. When one of the sensors sends its observation to the fusion center and the other does not, the censored observation could be estimated based on the correlation information. For example, if sensor 1 censors while sensor 2 sends, the fusion center could estimate the value of The likelihood ratio of the censored and uncensored observations at the fusion center is which can be expressed as a function of which can easily be simplified to So, by calculating the inverse of the covariance matrix and estimating the mean of the observations, 2.2. Applying mGLRT to an N-Sensor System In the previous discussion, we considered applying the mGLRT to a system consisting of two sensors communicating with a fusion center. The mGLRT can be easily generalized to N sensors using the same mathematical formulation. Consider a system with N sensors that detect the presence of a target under correlated Gaussian noise. Let The fusion center will have Problem 1 (under We have such that Problem 2 (under We have such that The censoring interval of the 2] state that choosing the censoring interval to be a single interval does not reduce the probability of detection. The choice of uninformative, and therefore informative; therefore, the censoring intervals should be chosen differently [13]. The optimization problems under consideration are convex because the equations to be maximized are quadratic. Also, the set over which the optimization is carried out is convex because the censoring intervals are continuous in The solution to Problem 1 is and to Problem 2 is After estimating the censored observations under the two hypotheses, the values of the received observations 3. Performance Evaluation of the mGLRT through Simulations Analytically comparing mGLRT to the optimal independent test is difficult. Therefore, in this section, we compare the proposed mGLRT to the optimal independent test through simulations. Two sensor-network scenarios will be considered, a two-sensor network and a network consisting of eight sensors. The results are obtained using Monte Carlo (MC) simulations. Table 1 shows the required number of trials for each figure. The second column shows the number of points per MC trial. At each point, a value is obtained on the graph. Table 1. Simulation details for each figure. Decentralized detection has a suboptimal performance when compared to centralized detection, because in centralized detection all the data are available at one location for decision making; therefore, the performance of the centralized detector is an upper bound on the decentralized detection performance. Consequently, when evaluating a decentralized detection scheme, our comparison criteria will be based on the difference between the centralized detection performance and the performance of the scheme under consideration. We assume the following about the fusion center in the centralized detection: (i)complete observations are available at the fusion center with no censoring or quantization. (ii)the fusion center has complete knowledge of the correlation structure between the observations. 3.1. Two-Sensor Network In the following simulations, the performance of the mGLRT is compared to the optimal independent test for a two-sensor network that communicates to a fusion center. The sensors receive Gaussian observations that have a certain correlation structure. As discussed earlier, the mGLRT test exploits the correlation information with a slight increase in computations compared to the optimal independent test derived in [3, 4]. Therefore, we expect the mGLRT to have a better performance compared to the optimal independent test when the observations are significantly correlated. The results of our simulations match the expectation. Figure 2 shows the performance of the mGLRT compared to the optimal independent test in the case of correlated observations for different values of signal amplitudes. The figure shows that there is a slight degradation in performance when using the mGLRT in a censoring environment compared to our performance benchmark. This degradation in performance is sacrificed to save energy in the sensors via censoring. It is worth noting that for low probabilities of false alarm, there is essentially no loss in performance due to using censoring with the mGLRT. Figure 2. ROC curve of the mGLRT and the optimal independent test for the two-sensor Gaussian example with 75, and different signal amplitudes ( The performance of the mGLRT compared to the optimal independent test should vary depending on the degree of correlation between the sensor observations. The reason is that the fusion center using the mGLRT uses the correlation information and thus gets better performance, unlike the optimal independent test which assumes that the sensor observations are independent. To examine the effect of correlation, we define a correlation index Figure 3 shows that as the sensor observations become more correlated with unequal variance (i.e., the diagonal elements in the covariance matrix are not equal), the mGLRT performs considerably better than the optimal independent test and does not differ much from the ideal centralized detector. The optimal independent test has a better performance than the mGLRT when Figure 3. Performance of the mGLRT compared to the optimal independent test as the correlation index changes at As seen in Figure 3, the structure of the correlation matrix plays a role in the performance of the mGLRT compared to the optimal independent test. Figure 4 shows that as the diagonal elements of the correlation matrix (variance of each sensor's observation) get close to each other, the performance of the mGLRT degrades until it matches the optimal independent test when the variance of the sensor observations is equal. However, as the diagonal elements of the covariance matrix stray from each other, the mGLRT performs better. Figure 4. Performance of the mGLRT compared to the optimal independent test and the centralized detection with complete observations at Based on the observations from Figures 3 and 4, we conclude that when the sensor observations are highly correlated with unequal variances the mGLRT performs much better than the optimal independent test. When the variances are equal and the sensor observations are highly correlated, either both sensors get the same reading, which is equivalent to a one-sensor observation, or both sensors do not get any. Therefore, the performance of the mGLRT degrades since the fusion-center receives less information. Also, the optimal independent test appears to be optimal for correlated observations when the correlation matrix is symmetric. Since the optimal independent rule partitions the observation space into two continuous parts via the censoring threshold, the result obtained here agrees with Corollary1 in [14] which states that for a two-sensor system with The performance loss due to using the optimal independent test is higher compared to the mGLRT as the correlation increases. However, when the sensors' observations are of equal variance and all observations are highly correlated, the performance of the optimal independent test matches that of the mGLRT. The reader is referred to [13] for a more thorough comparison between the mGLRT and the optimal independent test. 3.2. Eight-Sensor Network In the following simulations, the performance of the mGLRT is compared to the optimal independent test for an eight-sensor network that communicates to a fusion center with the following assumptions. (i)The sensors are placed as shown in Figure 5. (ii)The amplitude of the signal emitted by the source to be detected decreases as (iii)The observations of the sensors contained in the circle and the square are highly correlated among their peers in the same group; whereas the sensors in the circle are weakly correlated with the sensors in the square. Figure 5. Locations of the eight senors and the source. Figure 6 shows the performance of the mGLRT compared to the optimal independent test and our performance benchmark for the system described above. The figure shows that the mGLRT performs better even when the sensors' observations are of equal variance. Moreover, the ideal centralized detection performs slightly better than the mGLRT at low false-alarm probabilities. Because of censoring, the mGLRT and the optimal independent test could not get to the point where 3]. Figure 6. ROC curve of the mGLRT, and optimal independent test for the eight-sensor example with The communication rate constraint for each sensor is The performance of the mGLRT compared to the optimal independent and centralized tests depends on the communication-rate constraint. As the communication-rate constraint increases, the mGLRT should get closer in performance to the centralized test because they both have similar structure. Figure 7 shows a comparison between the mGLRT and the optimal independent test as the communication-rate constraint changes. The optimal independent test does not take advantage from increasing the communication rate constraint beyond Figure 7. Performance of the mGLRT compared to the optimal independent test as the communication-rate constraint varies with 4. A Real-Life Application The performance analysis carried out for the detection tests so far was based on pure simulations. We will now consider a real-world detection problem from the work in [15], where the presence of a frog is to be detected using sound signals collected from an array of microphones placed as shown in Figure 8. For our censoring setup, the sensors (microphone and a transmitter) are assumed to have a communication-rate constraint of 0.33; that is, the sensors will communicate at most one-third of the time when the frog is not active. The signals collected from the microphones are shown in Figure 9. The spikes represent the call of a frog. Figure 9. Signals (amplitude versus time) collected from the 15 microphones with a duration of 30 seconds for each signal. To apply the mGLRT, we assume that the noise is Gaussian, 9, 10 shows the performance of the mGLRT compared to the optimal centralized test. The performance of the mGLRT is almost similar to that of the optimal centralized test while saving energy through censoring. On the other hand, using the optimal independent test results in a degradation in performance especially for low false alarm Figure 10. ROC curve of the mGLRT with a 2/3 censoring probability and the optimal independent test for the frog detection problem. 5. Conclusion A modified GLRT provides a simple and well-performing method for decentralized detection with censoring sensors in correlated noise. Our proposed mGLRT generally performs better than the optimal independent test when the sensor observations are significantly correlated. In our analysis, we showed where the mGLRT could be used while preserving performance and saving energy. The mGLRT uses the knowledge of correlation between sensor observations to estimate the censored values at the fusion center. Depending on the degree of correlation between the sensors' observations and the variance of the noise affecting the sensors, the performance of the mGLRT varies when compared to the optimal independent test. The mGLRT out-performs the optimal independent test the most when the sensors' observations are highly correlated and the variances of the sensors' observations are not equal. Interestingly, when the sensors' observations are of equal variance and all observations are highly correlated, the performance of the optimal independent test matches that of the mGLRT. Allowing sensors to censor in the case when the null hypothesis is more likely to occur prolongs the lifetime of battery-powered sensors by saving energy while sacrificing some of the system's performance. Our simulations showed that for a wide range of values for the correlation indices and variance ratios, the degradation in performance when using the mGLRT with censoring is very low. Moreover, as the variance of the observations increases, the mGLRT gets closer in performance to the optimal centralized test. In a scenario where the fusion center could estimate the covariance matrix of the sensors' observations, the fusion center could choose between either using the optimal independent test or resorting to the mGLRT. If the observations are statistically independent, the fusion center could save on computations by using the optimal independent test. However, if the mGLRT is used all the time, a slight degradation in performance will be experienced in the following two cases. (i)The sensors' observations are independent. (ii)The sensors' observations have equal variance and the correlation among them is equal. This work was partially supported by the Gigascale Systems Research Center and the Multiscale Systems Center, funded under the Focus Center Research Program (FCRP), a Semiconductor Research Corporation program. Sign up to receive new article alerts from EURASIP Journal on Advances in Signal Processing
{"url":"http://asp.eurasipjournals.com/content/2010/1/838921","timestamp":"2014-04-17T15:53:15Z","content_type":null,"content_length":"106546","record_id":"<urn:uuid:0baafde1-5779-4232-af2c-2d2d08e9f1f8>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00143-ip-10-147-4-33.ec2.internal.warc.gz"}
News from the world of maths: Pandora's 3D box Wednesday, November 25, 2009 An amateur fractal programmer has discovered a new 3D version of the Mandelbrot set. Daniel White's new creation is based on similar mathematics as the original 2D Mandelbrot set, but its infinite intricacy extends into all three dimensions, revealing fractal worlds of amazing complexity and beauty at every level of magnification. Labels: Latest news posted by Plus @ 9:45 AM
{"url":"http://plus.maths.org/content/os/blog/2009/11/pandoras-3d-box","timestamp":"2014-04-21T12:28:49Z","content_type":null,"content_length":"24808","record_id":"<urn:uuid:f090746f-7fc5-4434-aec4-c5be6de6a432>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00296-ip-10-147-4-33.ec2.internal.warc.gz"}
idempotent semiring idempotent semiring Idempotent semirings Recall that a semiring is a set $R$ equipped with two binary operations, denoted $+$, and $\cdot$ and called addition and multiplication, satisfying the ring (or rng) axioms except that there may not be a zero nor a negative. An idempotent semiring (also known as a dioid) is one in which addition is idempotent: $x + x = x$, for all $x\in R$. The term dioid is sometimes used as an alternative name for idempotent semirings. • Any quantale is an idempotent semiring, or dioid, under join and multiplication. • The powerset of the set of languages over a given alphabet forms an idempotent semiring in which $L + L' = L \cup L'$ and multiplication is given by concatentation. • The tropical algebra and the max-plus algebra are idempotent semirings.
{"url":"http://ncatlab.org/nlab/show/idempotent+semiring","timestamp":"2014-04-21T14:49:13Z","content_type":null,"content_length":"14351","record_id":"<urn:uuid:24fe1fc6-0d83-4422-aceb-d0a12b10e2c5>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00041-ip-10-147-4-33.ec2.internal.warc.gz"}
Thorie de la Speculation, Annales de lEcole Normale Suprieure Results 1 - 10 of 26 , 1997 "... This paper presents the multifractal model of asset returns (“MMAR”), based upon ..." - EEG.” InInternational Conference on Neural Information Processing (ICONIP’96 , 1996 "... Abstract—A paradigm of statistical mechanics of financial markets (SMFM) is fit to multivariate financial markets using Adaptive Simulated Annealing (ASA), a global optimization algorithm, to perform maximum likelihood fits of Lagrangians defined by path integrals of multivariate conditional probabi ..." Cited by 16 (16 self) Add to MetaCart Abstract—A paradigm of statistical mechanics of financial markets (SMFM) is fit to multivariate financial markets using Adaptive Simulated Annealing (ASA), a global optimization algorithm, to perform maximum likelihood fits of Lagrangians defined by path integrals of multivariate conditional probabilities. Canonical momenta are thereby derived and used as technical indicators in a recursive ASA optimization process to tune trading rules. These trading rules are then used on out-of-sample data, to demonstrate that they can profit from the SMFM model, to illustrate that these markets are likely not efficient. This methodology can be extended to other systems, e.g., electroencephalography. This approach to complex systems emphasizes the utility of blending an intuitive and powerful mathematical-physics formalism to generate indicators which are used by AI-type rule-based models of management. 1. - PHYSICA A , 2000 "... The Black-Scholes theory of option pricing has been considered for many years as an important but very approximate zeroth-order description of actual market behavior. We generalize the functional form of the diffusion of these systems and also consider multi-factor models including stochastic volati ..." Cited by 12 (10 self) Add to MetaCart The Black-Scholes theory of option pricing has been considered for many years as an important but very approximate zeroth-order description of actual market behavior. We generalize the functional form of the diffusion of these systems and also consider multi-factor models including stochastic volatility. Daily Eurodollar futures prices and implied volatilities are fit to determine exponents of functional behavior of diffusions using methods of global optimization, Adaptive Simulated Annealing (ASA), to generate tight fits across moving time windows of Eurodollar contracts. These short-time fitted distributions are then developed into long-time distributions using a robust non-Monte Carlo path-integral algorithm, PATHINT, to generate prices and derivatives commonly used by option traders. - J MATHL COMPUTER MODELLING , 1998 "... A modern calculus of multivariate nonlinear multiplicative Gaussian-Markovian systems provides models of many complex systems faithful to their nature, e.g., by not prematurely applying quasi-linear approximations for the sole purpose of easing analysis. To handle these complex algebraic construc ..." Cited by 11 (11 self) Add to MetaCart A modern calculus of multivariate nonlinear multiplicative Gaussian-Markovian systems provides models of many complex systems faithful to their nature, e.g., by not prematurely applying quasi-linear approximations for the sole purpose of easing analysis. To handle these complex algebraic constructs, sophisticated numerical tools have been developed, e.g., methods of adaptive simulated annealing (ASA) global optimization and of path integration (PATHINT). In-depth application to three quite different complex systems have yielded some insights into the benefits to be obtained by application of these algorithms and tools, in statistical mechanical descriptions of neocortex (short-term memory and electroencephalography), financial markets (interest-rate and trading models), and combat analysis (baselining simulations to exercise data). "... Working papers are in draft form. This working paper is distributed for purposes of comment and discussion only. It ..." , 2001 "... We describe an end-to-end real-time S&P futures trading system. Inner-shell stochastic nonlinear dynamic models are developed, and Canonical Momenta Indicators (CMI) are derived from a fitted Lagrangian used by outer-shell trading models dependent on these indicators. Recursive and adaptive optimiza ..." Cited by 7 (6 self) Add to MetaCart We describe an end-to-end real-time S&P futures trading system. Inner-shell stochastic nonlinear dynamic models are developed, and Canonical Momenta Indicators (CMI) are derived from a fitted Lagrangian used by outer-shell trading models dependent on these indicators. Recursive and adaptive optimization using Adaptive Simulated Annealing (ASA) is used for fitting parameters shared across these shells of dynamic and trading models. , 1999 "... The price of financial assets are, since [1], considered to be described by a (discrete or continuous) time sequence of random variables, i.e a stochastic process. Sharp scaling exponents or unifractal behavior of such processes has been reported in several works [2] [3] [4] [5] [6]. In this letter ..." Cited by 6 (0 self) Add to MetaCart The price of financial assets are, since [1], considered to be described by a (discrete or continuous) time sequence of random variables, i.e a stochastic process. Sharp scaling exponents or unifractal behavior of such processes has been reported in several works [2] [3] [4] [5] [6]. In this letter we investigate the question of scaling transformation of price processes by establishing a new connexion between nonlinear group theoretical methods and multifractal methods developed in mathematical physics. Using two sets of financial chronological time series, we show that the scaling transformation is a non-linear group action on the moments of the price increments. Its linear part has a spectral decomposition that puts in evidence a multifractal behavior of the price increments. - J. Stoch. Anal. Appl , 2007 "... In this article we develop an explicit formula for pricing European options when the underlying stock price follows a non-linear stochastic differential delay equation (sdde). We believe that the proposed model is sufficiently flexible to fit real market data, and is yet simple enough to allow for a ..." Cited by 5 (0 self) Add to MetaCart In this article we develop an explicit formula for pricing European options when the underlying stock price follows a non-linear stochastic differential delay equation (sdde). We believe that the proposed model is sufficiently flexible to fit real market data, and is yet simple enough to allow for a closed-form representation of the option price. Furthermore, the model maintains the no-arbitrage property and the completeness of the market. The derivation of the option-pricing formula is based on an equivalent martingale measure. 1
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=115191","timestamp":"2014-04-20T13:01:54Z","content_type":null,"content_length":"33626","record_id":"<urn:uuid:e3507519-086a-4316-a4e8-e1efcb1203b1>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
Lester, PA Math Tutor Find a Lester, PA Math Tutor ...My Bachelor's degree is in Agricultural-Biology, which is a more experiential, hands-on biology degree. My Master's degree is in Plant Genetics, which is a more statistical analysis of a specific biological component. English is my first language and one of my hobbies is creative writing. 20 Subjects: including statistics, algebra 1, algebra 2, biology ...I am a middle school mathematics teacher in Gloucester County, New Jersey. I have been teaching Math for ten years now after many years in the business world. I spent the first 3 years of my career teaching high school math in my current district. 10 Subjects: including algebra 1, linear algebra, geometry, prealgebra ...I have a degree in engineering and math. My approach towards tutoring is simple. I perceive tutoring as not just an opportunity to furnish knowledge, but also a learning opportunity for me. 23 Subjects: including algebra 2, physics, statistics, SAT math ...I am also available for SAT preparation. I can come to your home, or we can meet at a mutually convenient location. I am currently on a leave of absence from a high-school math position in southern Maryland while my wife finishes her master's degree, so my available hours are very flexible! 8 Subjects: including calculus, algebra 1, algebra 2, geometry My name is Sarah and I've been teaching 7th-12th grade English for five years. For nine years I have also been a tutor for all grade levels in math, reading, writing and study skills. I enjoy tutoring elementary students just as much as middle and high school students. 24 Subjects: including prealgebra, trigonometry, reading, geometry Related Lester, PA Tutors Lester, PA Accounting Tutors Lester, PA ACT Tutors Lester, PA Algebra Tutors Lester, PA Algebra 2 Tutors Lester, PA Calculus Tutors Lester, PA Geometry Tutors Lester, PA Math Tutors Lester, PA Prealgebra Tutors Lester, PA Precalculus Tutors Lester, PA SAT Tutors Lester, PA SAT Math Tutors Lester, PA Science Tutors Lester, PA Statistics Tutors Lester, PA Trigonometry Tutors Nearby Cities With Math Tutor Billingsport, NJ Math Tutors Black Horse, PA Math Tutors Carroll Park, PA Math Tutors Center City, PA Math Tutors Drexelbrook, PA Math Tutors Elwyn, PA Math Tutors Essington Math Tutors Feltonville, PA Math Tutors Garden City, PA Math Tutors Linwood, PA Math Tutors Moylan, PA Math Tutors Passyunk, PA Math Tutors Penn Ctr, PA Math Tutors Tinicum, PA Math Tutors Verga, NJ Math Tutors
{"url":"http://www.purplemath.com/lester_pa_math_tutors.php","timestamp":"2014-04-20T02:05:42Z","content_type":null,"content_length":"23689","record_id":"<urn:uuid:fb644141-f19a-453c-9d93-028e167123cc>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00440-ip-10-147-4-33.ec2.internal.warc.gz"}
Notice That S1 And S2 Cannot Be Closed Or Opened ... | Chegg.com Notice that S1 and S2 cannot be closed or opened at the sametime. What would happen if both S1 and S2 were open? What if both S1 and S2 were closed? (Which situation is badfor battery B1, and why?) If S1 and S2 are both open, nocurrent flows, but if S1 and S2 are both closed, this creates ashort circuit. There is essentially no resistance in the pathconnecting the two terminals of the battery, so as much current aspossible flows from the battery (quickly discharging it). As you switch S1 and S2 together (one open and one closed), whathappens to the lightbulbs? Why? (That is, what is thedifference between the two circuits?) Picture 1: Picture 2: Picture 3: Picture 4: With S1 closed and S2 open, now close S3. Whathappens? What is the voltage across each bulb? Whydoesn't it change? You have added a battery to the circuit,but nothing happened. Why? Now close S1 and openS2. What happens? Why?
{"url":"http://www.chegg.com/homework-help/questions-and-answers/notice-s1-s2-cannot-closed-opened-sametime-would-happen-s1-s2-open-s1-s2-closed-situation--q206399","timestamp":"2014-04-24T15:22:38Z","content_type":null,"content_length":"24337","record_id":"<urn:uuid:d3064855-bda0-480e-847e-75766e3ba091>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00019-ip-10-147-4-33.ec2.internal.warc.gz"}
Transitive sets. October 27th 2012, 12:52 PM Also sprach Zarathustra Transitive sets. Hello, I need a help with the following: 1. Let $A$ be a transitive set, prove that $A∪{A}$ is also transitive. 2. Show that for every natural $n$ there is a transitive set with $n$ elements. Thank you all! October 27th 2012, 01:07 PM Re: Transitive sets. The term "transitive set" refers to sets of sets: The set, A, is said to be transitive if and only if whenever $x\in A$ and $y\in x$ then $y\in A$. (Frankly, I had to look that up!) Now, if $x\in A\cup\{A\}$, then either $x\in A$ or x= A. Let $y\in x$ and consider those two cases. As for 2, recall Von Neumann's definition of the natural numbers: 0 is the empty set, 1 is the set whose only member is the 0, 2 is the set whose only members are 0 and 1, etc.
{"url":"http://mathhelpforum.com/discrete-math/206194-transitive-sets-print.html","timestamp":"2014-04-20T16:26:23Z","content_type":null,"content_length":"5008","record_id":"<urn:uuid:35d9eee2-2da5-4f34-b007-f0cc98d6e6f6>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00505-ip-10-147-4-33.ec2.internal.warc.gz"}
M+A = A+A implies M = A? January 16th 2013, 07:51 AM M+A = A+A implies M = A? Let $A$ be a commutative unital ring and $M$ an $A$-module. Suppose that $M\oplus A \cong A\oplus A$. Is it true that necessary $M\cong A$? January 16th 2013, 11:19 AM Re: M+A = A+A implies M = A? The answer is affirmative and contained in these beautiful notes (click) by Brian Conrad. A proof an be find in Weibel's K-Book (click). January 16th 2013, 01:08 PM Re: M+A = A+A implies M = A? It doesn't really require a very complicated proof. In any ring, not just "commutative" or "unary" (in fact, since there is no mention of multiplication, you don't even need a ring), we have additive inverses. Adding the additive inverse of A to both sides immediately results in M= A. January 16th 2013, 01:17 PM Re: M+A = A+A implies M = A? It doesn't really require a very complicated proof. In any ring, not just "commutative" or "unary" (in fact, since there is no mention of multiplication, you don't even need a ring), we have additive inverses. Adding the additive inverse of A to both sides immediately results in M= A. I really think you misunderstood the question in the OP. Maybe I had to specify that with $-\oplus -$ I mean the (bi)product (in the abelian category) of $A$-module, where $A$ is supposed to be commutative and unital in order to easily satisfy the IBP.
{"url":"http://mathhelpforum.com/advanced-algebra/211414-m-implies-m-print.html","timestamp":"2014-04-17T13:31:00Z","content_type":null,"content_length":"6964","record_id":"<urn:uuid:88d5bf93-dae0-47bc-a98c-8ace711425fe>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
Higher Derivatives Question March 21st 2011, 01:15 AM #1 Junior Member Feb 2010 Higher Derivatives Question Hi all, Here goes the question: Given that $y=xsin3x+cos3x$, show that $x^2\frac{d^2y}{dx^2}+2y+4x^2y=0$. I am quite comfortable in deriving the normal and higher derivatives(*Just to make sure I am on the right track, is $\frac{dy}{dx}$=sin3x-3sin3x+3xcos3x?) and am more concerned about the 'showing' part. Hopefully someone can guide me along. Another one: Given that $xy=sinx$, prove that $\frac{d^2y}{dx^2}+\frac{2}{x}\frac{dy}{dx}+y=0$. It seems like a typical implicit diff. question other than the higher derivatives part. I haven't really learn how to derive higher derivatives using implicit diff. Any help is appreciated. Thanks in advance! When you have found $\displaystyle \frac{dy}{dx}$ and $\displaystyle \frac{d^2y}{dx^2}$, substitute them and $\displaystyle y$ into the LHS of your equation. Show that it simplifies to the RHS. Cheers. Your reply was short and concise but manage to set my thinking straight. Now I am proud that I am finally able to attempt the question. Thanks again. March 21st 2011, 01:19 AM #2 March 21st 2011, 01:53 AM #3 Junior Member Feb 2010
{"url":"http://mathhelpforum.com/calculus/175221-higher-derivatives-question.html","timestamp":"2014-04-17T10:24:33Z","content_type":null,"content_length":"37305","record_id":"<urn:uuid:d4a9da52-0baa-4fcf-9541-b307744d2fc1>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
Graph theoretical analysis of complex networks in the brain Since the discovery of small-world and scale-free networks the study of complex systems from a network perspective has taken an enormous flight. In recent years many important properties of complex networks have been delineated. In particular, significant progress has been made in understanding the relationship between the structural properties of networks and the nature of dynamics taking place on these networks. For instance, the 'synchronizability' of complex networks of coupled oscillators can be determined by graph spectral analysis. These developments in the theory of complex networks have inspired new applications in the field of neuroscience. Graph analysis has been used in the study of models of neural networks, anatomical connectivity, and functional connectivity based upon fMRI, EEG and MEG. These studies suggest that the human brain can be modelled as a complex network, and may have a small-world structure both at the level of anatomical as well as functional connectivity. This small-world structure is hypothesized to reflect an optimal situation associated with rapid synchronization and information transfer, minimal wiring costs, as well as a balance between local processing and global integration. The topological structure of functional networks is probably restrained by genetic and anatomical factors, but can be modified during tasks. There is also increasing evidence that various types of brain disease such as Alzheimer's disease, schizophrenia, brain tumours and epilepsy may be associated with deviations of the functional network topology from the optimal small-world pattern. 1. Background The human brain is considered to be the most complex object in the universe. Attempts to understand its intricate wiring patterns and the way these give rise to normal and disturbed brain function is one of the most challenging areas in modern science[1]. In particular, the relationship between neurophysiological processes on the one hand, and consciousness and higher brain functions such as attention, perception, memory, language and problem solving on the other hand, remains an enigma to this day. In the last decades of the 20^th century significant progress has been made in neuroscience with an essentially reductionistic, molecular biologic research programme [2]. The Nobel prize in physiology or medicine awarded to Eric Kandel in 2000 for discovering the molecular mechanisms of memory in the snale aplysia signifies the importance of this work. However, despite the impressive increase of knowledge in neuroscience in terms of molecular and genetic mechanisms, progress in true understanding has been disappointing, and few theories are available that attempt to explain higher level brain processes. For this reason there has been increased interest to search for other approaches to study brain processes and their relation to consciousness and higher brain functions [3]. One strategy has been to conceive the brain as a complex dynamical system and to search for new approaches in other fields of science that are also devoted to the study of complex systems. In recent years considerable progress has been made in the study of general complex systems, consisting of large numbers of weakly interacting elements. Three research areas in physics and mathematics have proven to be particularly valuable in the study of complex systems: (i) nonlinear dynamics and related areas such as synergetics; (ii) statistical physics which deals with universal phenomena at phase transitions and scaling behaviour, and (iii) the modern theory of networks, which is derived from graph theory [4]. Nonlinear dynamics has been applied to the study of the brain since 1985, and has become a very active research field in itself [5,6]. Application of nonlinear dynamics to neuroscience has lead to the introduction of new concepts such as attractors, control parameters and bifurcations as well as to the development of a whole range of new analytical tools to extract nonlinear properties from time series of brain activity. This has resulted for instance in new ways to model epileptic seizures as well as methods to detect and perhaps even predict the occurrence of seizures [7-9]. Recently, the focus in studies of nonlinear brain dynamics has shifted from trying to detect chaotic dynamics to studying nonlinear interactions between brain areas [10,11]. The study of critical phenomena and scaling behaviour in brain dynamics has also been very fruitful. Several studies have shown that time series of brain activity demonstrate scaling with characteristic exponents, suggesting critical dynamics near a phase transition [12-15]. The modern theory of networks, which originated with the discovery of small-world networks and scale-free networks at the close of the last millennium is the most recently developed approach to complex systems [16,17]. The study of complex networks has attracted a large amount of attention in the last few years, and has resulted in applications in such various fields as the study of metabolic systems, airport networks and the brain [18-22]. The aim of the present review is to discuss recent applications of network theory to neuroscience. After a brief historical introduction we summarize the basic properties and types of networks, and some important results on the relation between network properties and processes on these networks, in particular synchronization phenomena. Subsequently we will discuss applications to neuroscience under three headings: (i) modelling of neural dynamics on complex networks; (ii) graph theoretical analysis of neuroanatomical networks; (iii) applications of graph analysis to studies of functional connectivity with functional magnetic resonance imaging (fMRI), electroencephalography (EEG) and magnetoencephalography (MEG). 2. Historical overview The modern theory of networks has its roots in mathematics as well as in sociology. In 1736 the famous mathematician Leonard Euler (1707–1783) solved the problem of 'the bridges of Konigsberg'. This problem involved the question whether it is possible to make a walk crossing exactly one time each of the seven bridges connecting the two islands in the river Pregel and its shores. Euler proved that this is not possible by representing the problem as an abstract network: a "graph". This is often considered the first proof in graph theory. Since then graph theory has become an important field within mathematics, and the only available tool to handle network properties theoretically. An important step forward occurred when random graphs were discovered [23,24]. In random graphs connections between the network nodes are present with a likelihood p. Many important theorems have been proven for random graphs. In particular it has been shown that properties of the graphs often undergo a sudden transition ('phase transition') as a function of increasing p. However, despite the success of classical graph theory, it was not a very good or useful theory for real networks encountered in nature. One empirically observed phenomenon that could not be explained by classical theory was the fact the 'distances' in sparsely and mainly locally connected networks were often much smaller than expected. This phenomenon was probably first observed by the Hungarian writer Frigyes Karinthy in a short story called 'Chains' [25]. In this story he speculates that in the modern world the 'distance' between any two persons is unlikely to be more than five persons. As it turned out, this was a remarkable foresight of an important fact about certain classes of networks. The first person to study this phenomenon more scientifically was Stanley Milgram (1933–1984). He was interested in quantifying distances in social networks. In one experiment he sent letters to randomly chosen subjects in the USA. They were informed that the letter should go to a certain person in Boston. However, the subjects were only allowed to send the letter to another person they knew well, and who might possibly be a little closer to the target in Boston. As it turned out, many letters did reach the target person, and on average each letter was sent only 5.5 times. This experiment could count as the first empirical proof of the 'small-world' phenomenon, also referred to as 'six degrees of separation' [26]. The 'small-world' phenomenon was later confirmed in other experiments (for instance: the letter experiment was repeated with e-mail) but for a long time no satisfactory explanation was available. This situation changed suddenly in 1998 with the publication of a paper in Nature by Duncan Watts and Steven Strogatz [16]. In this paper the authors proposed a very simple model of a one-dimensional network on a ring. Initially each node ('vertex') in the network is only connected to its k nearest neighbours (k/2 on each side). K is called the degree of the network. Next, with a likelihood p, connections ('edges') are chosen at random and connected to another vertex, also chosen randomly. With increasing p more and more edges become randomly re-connected and finally for p = 1 all connections are random. Thus, this simple model allows to investigate the whole range from regular to random networks, including an intermediate range. The intermediate range proved to be crucial to the solution of the problem. To show this, the authors introduced two measures: the clustering coefficient C, which is the likelihood that neighbours of a vertex will also be connected, and the path length L which is the average of the shortest distance between pairs of vertices counted in number of edges. Watts and Strogatz showed that regular networks have a high C but also a very high L. In contrast, random networks have a low C and a low L. So, neither regular nor random networks explain the small-world phenomenon. However, when p is only slightly higher than 0 (few edges randomly rewired) the path length L drops sharply, while the clustering coefficient hardly changes. Thus networks with a small fraction of randomly rewired connections combine both high clustering and a small path length, and this is exactly the small-world phenomenon to be explained. These networks were called 'small-world' networks by the authors, who showed that such networks could be found in the nervous system of C. elegans, a social network of actors and the network of power plants in the United States. Also, they showed that a small-world architecture might facilitate the spread of infection or information in networks. A second major discovery was made a year later by Barabasi and Albert [17]. They proposed a model for the growth of a network where the likelihood that a newly added edge will connect to a vertex depends upon the degree of this vertex. Thus, vertices that have a high degree (large number of edges) are more likely to get even more edges. This is the network equivalent of 'the rich getting richer'. Networks generated in this way are characterised by a degree distribution which can be described by a power law: P(k) = k^-1/a. In the case of the Barabasi Albert model the exponent is exactly 3. Networks with a power law degree distribution are called scale-free. It has been shown that many real networks in nature such as for instance the Internet, the World Wide Web, collaboration networks of scientists and networks of airports are likely to be scale-free [27,28]. Scale-free networks have many interesting properties such as an extremely short path length, which will be discussed in the section below. The discovery of small-world networks in 1998 and of scale-free networks in 1999 was noted by scientists in many different fields, and set off a large body of theoretical and experimental research that is growing to this day. In retrospect these discoveries can be considered to be the starting point of the modern theory of networks. The field is so new that there are only few textbooks yet [28 ,29]. Fortunately there are several excellent reviews that give an overview of the current state of network theory [27,30-33]. A collection of key papers can be found in Newman et al. [34]. 3. Basics of modern network theory 3.1 Definition of graphs and graph measures A graph is an abstract representation of a network. It consists of a set of vertices (or nodes) and a set of edges (or connections) (Fig. 1). The presence of an edge between two vertices indicates the presence of some kind of interaction or connection between the vertices (the interpretation depends upon what is being modelled with the graph). The adjacency matrix A contains the information about the connectivity structure of the graph. When an edge exists between two vertices i and j the corresponding entry of the adjacency matrix is: A[i,j ]= 1; otherwise A[i,j ]= 0. The number of edges connecting to ('incident on') a vertex is called the degree k of this vertex. The likelihood P(k) that a randomly chosen vertex will have degree k is given by the degree distribution: it is a plot of P(k) as a function of k. The degree distribution can have different forms: Gaussian, binomial, Poisson, exponential or power law. The degree distribution is an important determinant of network properties. Figure 1. Representation of a network as a graph. In the case of an unweighted graph (left panel) black dots represent the nodes or vertices, and the lines connecting the dots the connections or edges. The shortest path between vertices A and B consists of three edges, indicted by the striped lines. The clustering coefficient of a vertex is the likelihood that its neighbours are connected. For vertex C, with neighbours B and D, the clustering coefficient is 1. When weights are assigned to the edges, the graph is weighted (right panel). Here the weights of the edges are indicated by the thickness of the lines. With respect to the edges several further distinctions can be made. Graphs can be undirected, when information can flow in both directions along edges connecting vertices, or directed, when information can only flow in one direction. In directed graphs each vertex may have different numbers of ingoing and outgoing edges; correspondingly there are separate in degree and out degree distributions for such graphs. Graphs which contain vertices connected by more than one edge are called multigraphs. Graphs in which edges either exist or do not exist, and in which all edges have the same significance are called unweighted graphs. When weights are assigned to each of the edges the corresponding graph is called a weighted graph (right panel in Fig. 1). Weights can be used to indicate the strength or effectiveness of connections, or the distance between vertices; negative weights can also be used. Two measures are frequently used to characterize the local and global structure of unweighted graphs [16,27,33]. These are the clustering coefficient C and the characteristic path length L. The clustering coefficient C[i ]of a vertex i with degree k[i ]is usually defined as the ratio of the number of existing edges (e[i]) between neighbours of i, and the maximum possible number of edges between neighbours if i. A vertex is called a neighbour of i when it is connected to it by an edge. The formula for C[i ]is: A slightly different definition can be found in Newman (Newman, 2003). The clustering coefficient C ranges between 0 and 1. Usually C[i ]is averaged over all vertices to obtain a mean C of the graph. The clustering coefficient is an index of local structure, and has been interpreted as a measure of resilience to random error (if vertex i is lost, its neighbours remain still connected). Another important measure is the characteristic path length. In the case of an unweighted graph the path length or distance d[i,j ]between two vertices i and j is the minimal number of edges that have to be travelled to go from i to j. This is also called the geodesic path between i and j. The characteristic path length L of a graph is the mean of the path lengths between all possible pairs of vertices: The characteristic path length is a global characteristic; it indicates how well integrated a graph is, and how easy it is to transport information or other entities in the network. A measure related to the path length is the diameter of a graph: this is the length (in number of edges) of the longest geodesic in a graph. The degree distribution, clustering coefficient and path length are the core measures of graphs. On the basis of these measures four different types of graphs can be distinguished: (i) ordered or lattice like; (ii) small-world; (iii) random and (iv) scale-free (Fig. 2, 3). A further subdivision is described in Amaral et al. [36]. In an ordered network, each vertex is connected to its k nearest neighbours. What 'nearest' means depends upon the dimension in which the network is modelled. In most cases, one or two dimensional networks are considered. Ordered or lattice like networks have a high C and a large L. For the one-dimensional model of Watts and Strogatz the theoretical values of C and L are 3/4 and N/2K. A small world network can be thought of as an ordered network where a small fraction of the edges (given by the rewiring probability p) has been randomly rewired. Such a network has a C close to that of an ordered network, but a very small path length close to that of a random network. However, analytical solutions of C and L as a function of p are not known [27]. In a random network, all edges are randomly assigned to vertex pairs (or: edges exist with a certain likelihood). In a random network, C is very small (K/N) and L is very short: ln(N)/ln(K). Finally, a scale-free network is a network with a power law degree distribution. Such a network could be generated by a growth process characterized by preferential attachment (Barabasi and Albert, 1999). However, other growth models for scale-free networks have been proposed [27,33]. We should stress that neither lattice like, small-world or random networks are scale-free. Scale-free networks can have very small path lengths of the order of lnln(N), but the clustering coefficient may also be smaller than that of small-world networks [36]. Figure 2. Three basic network types in the model of Watts and Strogatz. The leftmost graph is a ring of 16 vertices (N = 16), where each vertex is connected to four neighbours (k = 4). This is an ordered graph which has a high clustering coefficient C and a long pathlength L. By choosing an edge at random, and reconnecting it to a randomly chosen vertex, graphs with increasingly random structure can be generated for increasing rewiring probability p. In the case of p = 1, the graph becomes completely random, and has a low clustering coefficient and a short pathlength. For small values of p so-called small-world networks arise, which combine the high clustering coefficient of ordered networks with the short pathlength of random networks. Figure 3. Scale-free graphs are characterized by a scale-free degree distribution P(k). In scale-free graphs, different vertices have very different degrees, and typically a few vertices with extremely high degrees (so-called 'hubs') are present. In the schematic example shown here the white (k = 9) and the striped (k = 7) vertices are examples of hubs. In addition to clustering coefficients, pathlengths and degree distributions other measures have been introduced to characterize properties of interest. Milo et al. introduced the concept of network motifs [37,38]. A motif is a simple subgraph consisting of a small number of vertices connected in a specific way. Triangles are a simple type of motif. To some extent, the clustering coefficient is an index of a specific type of motif, namely the triangle. Alternatively, one could view motif analysis as a kind of generalization of the clustering coefficient. Another measure is the degree correlation. This is an index of whether the degree of a vertex is influenced by the degree of another vertex to which it connects. The average degree k[nn ]of the neighbours of a node with degree k is given by: Graphs with a positive degree correlation are called assortative; in the case of a negative degree correlation a graph is called disassortative. Degree correlations can be quantified by computing the Pearson correlation coefficient of the degrees of pairs of vertices connected by an edge. Interestingly, most social networks tend to be assortative, while most technological and biological networks tend to be disassortative (table 3.1 in [27].). An index of the relative importance of a vertex or edge is the 'betweenness'. This is the number of shortest paths that run through an edge or vertex. The beweenness of a node b[i ]is defined as: This is the ratio of all shortest paths between j and k that run through i (n[j,k](i)), divided by all shortests paths between j and k (n[j,k]). This measure also reflects the consequences of the loss of a particular edge or vertex. Another recently described measure is the transversal time for random walks on small-world networks [39]. Characterization of overlapping communities in complex networks has recently been described by Palla et al. [40]. Most graph measures have only been defined for the simplest case of unweighted graphs. However in many cases weighted graphs may represent more accurate models of real networks. Several authors have discussed the analysis of weighted graphs [41-47]. To characterize such networks one could convert them to unweighted graphs, for instance by setting all edges with a weight above a certain threshold to 1, and the others to 0. Although this approach works and has been used in EEG and MEG studies, it has several disadvantages: (i) much of the information available in the weights is not used; (ii) when the threshold is too high some vertices may become disconnected from the graph which poses problem with the computation of C and L; (iii) the choice of the threshold remains arbitrary. Latora and Marchiori have proposed a framework to address some of these problems [41,42,48]. They consider weighted networks and define the efficiency of the path between two vertices as the inverse of the shortest distance between the vertices (note that in weighted graphs the shortest path is not necessarily the path with the smallest number of edges). In the case where a path does not exist, the length is considered to be infinite, and the efficiency is zero. The average over all pair wise efficiencies is the Global efficiency E[glob ]of the graph: The Local efficiency is the mean of the efficiencies of all subgraphs G[i ]of neighbours of each of the vertices of the graph. The average local efficiency E[loc ]is given by: The approach based upon efficiencies is attractive since it takes into account the full information contained in the graph weights, and provides an elegant solution to handle disconnected vertices. Efficiency has been used to show that scale-free networks are very resistant to random errors, but quite sensitive to targeted attacks [49]. By taking the harmonic mean of the inverse of the efficiencies a weighted path length can be defined, which is a bit closer to the original path length (formula 3.2 in [27].). Slightly modified the formula is: Apart from the Local efficiency, two other definitions of the clustering coefficient have been proposed for weighted networks. In one definition only the weights of the edges connecting the neighbours of a vertex are taken into account, while the edges connecting this vertex to its neighbours are all given a weight of 1 [46]. It is also possible to define a weighted clustering coefficient, that takes into account both the weights between the reference vertex and its neighbours, as well as the weights of the edges between the neighbours [47]. In the last study an approach to the analysis of motifs in weighted graphs was also proposed. Finally we briefly mention a measure of the 'synchronizability' of a graph. This measure is based upon a so-called linear stability analysis. A detailed description can be found in [33]. Briefly, the spectrum of eigenvalues from the graph laplacian L is determined. This matrix L is the difference between the diagonal matrix of node degrees and the adjacency matrix A. The eigenvalues are ordered from the largest to the smallest, where λ[1 ]= 0. The ratio R = λ[N]/λ[2 ]of the largest and one but smallest eigenvalue is a measure of the synchronizability of the graph. This approach has been used for unweighted as well as weighted networks, and will be referred to in the studies discussed in the following section. 3.2 Dynamic processes on graphs One of the most interesting and active research areas in modern network theory is the study of structure function relationships, in particular the relation between topological network characteristics and synchronization dynamics on these networks [50]. The importance of the small-world structure for the spread of infectious disease was already addressed in the original Watss and Strogatz paper [ 16]. Barahona and Pecora used linear stability analysis and the master stability function (MSF) to study the synchronizability of networks with complex topology [51]. They showed that networks with a small-world topology may synchronize more easily than deterministic or fully random graphs, although the presence of small-world properties did not guarantee that the network will be synchronizable. Hong et al. studied the synchronization dynamics of a small-world network of coupled oscillators as a function of rewiring probability p [52]. They found that phase and frequency synchronization arise even for small values of p. The phase transition was of the mean field type, like in the Kuramoto model. For values of p > 0.5 the small-world model synchronized as rapidly as a fully random A surprising phenomenon, later referred to as 'the paradox of heterogeneity' was discovered by Nishikawa et al. [53]. Using linear stability analysis and the ratio λ[N]/λ[2 ](largest divided by second smallest eigenvalue of the graph Laplacian matrix) as an index of synchronizability, they showed that (unweighted, undirected) networks with a more homogenous degree distribution synchronize more easily than networks with a more heterogeneous degree distribution, even when the latter network type has a shorter average path length. This observation implied that the previous idea that synchronizability was directly related to path length had to be rejected. The authors explain the paradoxical effect of heterogeneous degree distributions on synchronizability by the 'overload' of the few highly connected nodes in the network. Another factor with a somewhat unexpected influence on network synchronization is the existence of delays between the coupled dynamic units. Atay and Jost showed in a model of coupled logistic maps that networks with scale-free or random topology could still synchronize despite the delays, whereas lattice like and small-world networks did not synchronize well [54]. However, and this was somewhat unexpected, in some cases where the un delayed network did not synchronize, synchronization did occur when delays were introduced. We should add however that it is not clear to what extent these results obtained with discrete maps can be extrapolated to more general systems of coupled oscillators. In a later paper Atay and Biyikoglu studied systematically the effect of a broad range of graph operations (Cartesion product, join, coalescense, adding/deleting links) on network synchronizability [55]. Especially interesting results were obtained in the case of adding links to networks. First, in some cases adding links between two networks was shown to increase the synchronizability of the individual networks while decreasing the synchronizability of the combined network. Also, adding links to a single network could result in smaller path lengths but at the same time decreased synchronizability. Of course this is reminiscent of the findings of Nishikawa et al. [53]. although the authors claim that the degree distribution of a network in general does not determine its synchronizability. Removing links from networks can also be used to study community structures in networks [43]. Taking the original result of [53]. to the extreme, one would expect that a network with a maximally homogeneous structure would show the highest level of synchronizability. Donetti et al. (2005) described an algorithm to generate such 'hyper homogeneous' networks which they baptized 'entangled networks' [56]. Entangled networks were shown to be optimal not only in terms of synchronizability, but also with respect to resilience against attacks and error. However, the authors state that a full topological understanding of entangled networks has not yet been reached. Synchronizability of scale-free networks of limit cycle oscillators was studied in detail using linear stability analysis by Lee [57]. He found a critical coupling strength for scale-free networks that was smaller than for small-world or random networks. The nature of the synchronization transition depended upon the scaling exponent, and showed a different behaviour for the range 2< exponent<3 as compared to exponent > 3. A relationship between the scaling exponent of the degree distribution and pattern formation in scale-free networks has also been reported by Zou and Lipowsky [58]. An important breakthrough with respect to the 'paradox of heterogeneity' was achieved by Motter et al. [59]. They considered directed, weighted networks, where the weights of the edges were based (with a parameter β) upon the degrees of the nodes. They showed that in the case of weighted, directed networks as opposed to unweighted undirected networks a heterogeneous degree distribution could be associated with an optimal level of synchronizability. The most optimal results, both in terms of synchronizability as well as 'wiring cost' were obtained for β = 1. In contrast for β = 0 (the unweighted case) the results of [53]. were reproduced. The authors also suggested that for large sufficiently random networks the synchronizability is mainly determined by the mean degree, and not by the degree distribution or system size. Taking this approach one step further, Chavez et al. showed that network synchronizability could be enhanced even more by basing the network weights upon the 'load' (fraction of shortest paths using a particular link: see 'betweenness' b[i ]defined in section 3.1) of the links [60,61]. Chavez et al. showed that, in the case of weighted networks, scale-free networks have the highest synchronizability, followed (in order of decreasing synchronizability) by random, small-world and regular/lattice network [60,61]. For small-world networks, synchronizability was shown to increase with the probability of rewiring. Numerical analysis showed that these results obtained with linear stability analysis might hold as well for systems of non-identical oscillators. In particular, the eigenvalue ratio λ[N]/λ[2 ]could be a useful indicator of synchronizability even for these networks. Zhou et al. also studied the synchronizability of weighted random networks with a large minimum degree (k[min ]>> 1) [62]. They showed that the synchronizability was mainly determined by the average degree and the heterogeneity of the node's intensity. Intensity is the sum of the strengths of all inputs of a node, and reflects the degree as well as the link weights. Synchronizability was enhanced when the heterogeneity of the nodes intensities was reduced. In a subsequent study Zhou and Kurths investigated whether optimal weights for synchronizability could emerge in adaptive networks [63]. They showed that this was indeed the case in scale-free networks of coupled chaotic oscillators, and that the final weights were negatively correlated with the node's degrees. The adapation process enhanced network synchronizability by reducing the heterogeneity of node intensities. Van den Berg and van Leeuwen also studied the adaptation process and showed that sparsely connected random graphs above a certain size always give rise to a small-world network [64]. In a later study these authors showed that under the influence of an adaptive rewiring procedure a network of randomly connected Hindmarsh-Rose neurons will evolve towards a small-world architecture with complex dynamics [65]. This result was obtained irrespective of the initial dynamics of the network (irregular firing or bursting behaviour). Synchronization in a complex network of coupled oscillators was studied from a different perspective by Arenas et al. [66]. They showed that a relation exists between the complex, hierarchical structure in the connectivity matrix on the one hand, and different time scales of the synchronization dynamics on the other hand. More specifically, for short time scales the nodes are disconnected, while for longer time scales the nodes become synchronized in groups according to their topological structure. This study underscores once more the importance of structure function relationships in complex networks. Related results were obtained by Zemanova et al. and Zhou et al. [67,68]. 4. Applications to neuroscience 4.1 Dynamics of simulated neural networks From the previous sections is has become clear that a major research focus in modern network studies is the relation between network topology on the one hand, and dynamics on networks on the other hand. This problem is of major interest for neuroscience, and an important question is to what extent the results obtained with models of general types of oscillators are relevant for networks of neuron-like elements as well. Lago-Fernandez et al. were the first to study this question in a network of non-identical Hodgkin and Huxley neurons coupled by excitatory synapses [69]. They studied the influence of three basic types of network architecture on coherent oscillations of the network neurons. Random networks displayed a fast system response, but were unable to produce coherent oscillations. Networks with regular topology showed coherent oscillations, but no fast signal processing. Small-world networks showed both a fast system response as well as coherent oscillations, suggesting that this type of architecture could be optimal for information processing in neural networks. The influence of complex connectivity on neuronal circuit dynamics was also studied by Roxin et al. [70]. They considered a small-world network of excitable, leaky integrate-and-fire neurons. For low values of p (the likelihood of random rewiring) a localized transient stimulus resulted either in self sustained persistent (mostly periodic) activity or a brief transient response. For high values of p, the network displayed exceedingly long transients and disordered patterns. The probability of failure (a stimulus not resulting in sustained activity) showed a phase transition over a small range of values of p. The authors concluded that this 'bi-stability' of the network might represent a mechanisms for short term memory. Masuda and Aihara showed that in a model of 400 coupled leaky integrate-and-fire neurons small p values gave rise to travelling waves or clustered states, intermediate values to rapid communications between remote neurons and global synchrony, and high p to asynchronous firing [71]. They also showed that network dynamics can be influenced by the degree distribution. With so-called 'balanced rewiring' (same degree for all vertices) the optimal p for synchronization vanished. Increasing p replaced precise local with rough global synchrony. Synchronization of neurons in networks is important for normal functioning, in particular information processing, but may also reflect abnormal dynamics related to epilepsy. Three modelling studies have addressed this issue specifically. Netoff et al. started from the observation that in a hippocampal slice model of epilepsy the CA3 regions shows short bursts of activity whereas the CA1 regions shows seizure like activity lasting for seconds [72]. To explain these observations they constructed models (small-world networks with N = 3000; k = 30 for CA1 and k = 90 for CA3) of various types of neurons (Poisson spike train, leaky integrate-and-fire, stochastic Hodgkin and Huxley). For increasing values of the rewiring probability, the models displayed first normal behaviour, then seizure like transients and finally continuous bursting. Increasing the strength of the synapses had a similar effect as increasing p. For the CA3 model (with higher k) the transition from seizures to bursting occurred for a lower value of p compared to the CA1 model. These findings suggest that the bursting behaviour of the CA3 region may represent a dynamical state beyond seizures. This is an important suggestion since similar bursting-like phenomena have also been observed in the scalp recorded EEGs of neurological patients, and their epileptic significance is still poorly understood [73 Percha et al. started with the observation that in medial temporal lobe epilepsy, epileptogenesis is characterized by structural network remodelling and aberrant axonal sprouting [75]. To study the influence of modified network topology on seizure threshold they considered a two-dimensional model of 12 by 12 Hindmarsh-Rose neurons. For increasing values of p they found a phase transition between a state of local to a state of global coherence; the transition occurred at p = 0.3. At the phase transition point the duration of globally coherent states displayed a power law scaling, consistent with type III intermittency. The authors speculated that neural networks may develop towards a critical regime between local and global synchronization; seizures would result if pathology pushes the system beyond this critical state. A similar concept can be found in two other studies [5,75]. The influence of temporal lobe architecture on seizures was also studied by Dyhrfjeld-Johnsen et al. [76]. They studied a computational model of rat dentate gyrus with 1 billion neurons, and no more than three synapses between any two neurons, suggestive of a small-world architecture. They showed that loss of long distance hilar cells had only little influence on global network connectivity as long as a few of these long distance connections were preserved. Also, local axonal sprouting of granular cells resulted in increased local connectivity. Simulations of the dynamics of this model showed that network hyperexcitability was preserved despite the loss of hilar cells. To explain the dynamics of cultured neural networks French and Gruenstein investigated two-dimensional excitatory small-world networks with bursting integrate-and-fire neurons with regular spiking (RS) and intrinsic bursting (IB) [77]. The model showed spontaneous activity, similar to cultured networks. Traces of membrane potential and cytoplasmatic calcium matched those of experimental data. For even low values of rewiring probability the values for the speed of propagation in the model were within the range of the physiological model. For higher p and more long distance connections wave speed increased. Recently it has been shown that real neural networks cultured in vitro in multi electrode arrays (MEAs) display functional connectivity patterns with small-world characteristics [78 Higher values of p are known to be associated with shorted path lengths in the Watts and Strogatz small-world model. That pathlength is an important predictor of network performance, as has been shown recently [79]. These authors investigated a two-dimensional lattice of coupled van der Pol-FitzHugh-Nagumo oscillators and considered as measures of network performance: activity and synchronization. They found that network performance was mainly determined by the network average path length: the shorter the path length, the better the performance. Local properties such as the clustering coefficient turned out to be less important. The studies discussed above considered networks of excitatory elements only. Shin and Kim studied a network of 1000 coupled FitzHugh-Nagumo (FHN) neurons with fixed inhibitory coupling strength and an excitatory coupling strength that changed with firing [80]. Starting from random initial coupling strengths, this network self-organized to both the small-world and the scale-free network regime by synaptic re-organization and by the spike timing dependent synaptic plasticity (STDP). The optimal balance between excitation and inhibition proved to be crucial, as has been observed in other studies [81]. Paula et al. studied small-world and scale-free models of 2048 sparsely coupled (k = 4) McCulloch and Pitts neurons [82]. In the case of regular topology the model showed non-periodic activity, whereas random topology resulted in periodic dynamics, where the duration of the periods depended on the square root of network size. The transition between aperiodic and periodic dynamics as a function of p was suggestive of a phase transition. Two other studies provide a link with the topic of anatomical connectivity that will be discussed in more detail in the next section. Zhou et al. and Zemanova et al. investigated the correlations between network topology and functional organization of complex brain networks [67,68]. They modelled the cortical network of the cat with 53 areas and 830 connections as a weighted small-world network. Each node or area in the network was modelled as a sub network of excitable FitzHugh-Nagumo neurons (N = 200; k = 12, SWN topology with p = 0.3; 25% inhibitory neurons; 5% of the neurons receive excitatory connections form other areas). The control parameter was the coupling strength g. For weak coupling the model showed non-trivial organization related to the underlying network topology, that is correlation patterns between time series of activity were closely related to the underlying anatomical connectivity. These results are in agreement with those of Arenas et al. described above [66]. In a recent modelling study a close correspondence between functional and anatomical connectivity was confirmed when the functional connectivity was determined for long time scales [83]. So far, only few studies have studied the relevance of network structure for memory processes in simulated neural networks. Two behaviors of such networks are relevant for memory: auto-associative retrieval and self-sustained localized states ('bumps'). Anishchenko and Treves showed that the auto-associative retrieval requires networks with a structure close to random, while the self-sustained localized states were only found in networks with a very ordered structure [84]. Coincidence of both behaviours in a small-world regime could not be demonstrated in networks with realistic as opposed to simple binary neurons. 4.2 Neuroanatomical networks 4.2.1 Real networks Interestingly, the seminal paper of Watts and Strogatz was also the first example of an application of graph theory to a neuroscientific question [16]. Watts and Strogatz studied the anatomical connectivity of the nervous system of C. elegans, which is the sole example of a completely mapped neural network. This neural network could be represented by a graph with N = 282 and k = 14. Neurons were considered to be connected if they shared a synapse or a gap junction. Analysis of this graph yielded a path length L = 2.65 (random network: 2.25) and a clustering coefficient C = 0.28 (random network = 0.05). This represents the first evidence of small-world architecture of a real nervous system. That similar conclusions can be drawn for nervous systems of vertebrates and primates, was shown by Hilgetag et al. [85]. They studied compilations of corticocortical connection data from macaque (visual, somatosensory, whole cortex) and cat, and analyzed these data with optimal set analysis, non-parametric cluster analysis, and graph theoretical measures. All approaches showed a hierarchical organization with densely locally connected clusters of areas. Generally, path lengths were slightly larger than those of random networks, while clustering coefficients were twice as large as those of random networks: macaque visual: L = 1.69 (random 1.65) C = 0.594 (random 0.321); macaque somatosensory: L = 1.77 (random 1.72) C = 0.569 (random 0.312); macaque whole cortex: L = 2.18 (random 1.95) C = 0.49 (random 0.159); cat whole cortex: L = 1.79 (random 1.67); C = 0.602 (random 0.302). The authors concluded that cortical connectivity possesses attributes of 'small-world' networks. This raises the question whether the small-world pattern of anatomical connectivity determines the patterns of functional connectivity. Stephan et al. studied data from 19 papers on the spread of (epileptiform) activity after strychnine-induced dysinhibition in macaque cortex in vivo [86]. Graph analysis of functional connectivity networks gave the following results L = 2.1730 (random: 2.1500); C = 0.3830 (random: 0.0149). This represents the first proof of a small-world pattern in functional connectivity data, and suggests a relation between anatomical and functional connectivity patterns. While the study of Stephan et al. was based upon data from the literature, Kotter and Sommer modelled the propagation of epileptiform activity in a large scale model of the cortex of the cat and compared the results with randomly connected networks [87]. They concluded that association fibres and their connections strengths were useful predictors of global topographic activation patterns in the cerebral cortex and that a global structure – function relationship could be demonstrated. Sporns and Zwi studied data sets of macaque visual and whole cortex, and cat cortex, comparing the results to both lattice and random networks, where the in and out degrees of all vertices were preserved [88]. They computed scaled values of L and C (that is: L and C related to L and C of random networks) and looked for cycles. For all three networks the scaled C was close to that of a lattice network, while the scaled L was close to random networks. They also found that there was little or no evidence for scale-free degree distributions, which makes sense in view of the relatively constant number of 10^4 synapses per neuron. According to the authors the small-world architecture of the cortex must play a crucial role in cortical information processing. Some of the same data studied in the above mentioned papers were re-investigated for the presence of motifs (connected graphs forming a subgraph of a larger network) by Sporns and Kotter [89]. The authors distinguished between structural motifs of size M (specific set of M vertices linked by edges) and functional motifs (same M vertices, but not all edges). Graphs were compared to lattice and random networks which preserved the in and out degree of all vertices. The authors concluded that brain networks maximize both the number and diversity of functional motifs, while the repertoire of structural motifs remains small. Kaiser and Hilgetag studied the edge vulnerability of macaque and cat cortex, protein- protein interaction networks, and transport networks [90]. Comparisons were made with random and scale-free networks. The average shortest path length was used as a measure of network integrity, and four different measures were used to identify critical connections in the network. Of these, the edge frequency (the fraction of shortest paths using a specific edge; related to the 'betweenness' discussed in section 3.1) was the best measure to predict the influence of deleting an edge on average path length. However, for random and scale-free networks all measures performed not very well. Assuming that biological networks are more likely to be small-world, the edge frequency underscores the importance (for overall network performance) and vulnerability of inter-cluster connections. This conclusion is an agreement with Buzsaki et al. who stressed the importance of long-range interneurons for network architecture and performance [91]. Similarly, Manev and Manev suggested that neurogenesis might give rise to new random connections subserving the small-world properties of brain networks Extending the work of Watts and Strogatz and Hilgetag et al., Humphries et al. investigated whether a specific sub-network of the brain, the brainstem reticular formation, displays a small-world like architecture [93]. They considered two models based upon neuro-anatomical data: a stochastic and a pruning model, and used a small-world metric defined as: S = (C/C-r)/(L/L-r). Here, C-r and L-r refers to the clustering coefficient and path length of corresponding ensembles of random networks. They found that both models fulfil criteria for a small-world network (high S) for a range of parameter settings; however, the in degree and out degree distributions did not follow a power law, arguing against a scale-free architecture. The first more or less direct proof of small-world like anatomical connectivity in human was reporter by He et al. [94]. They studied MRI scans of 124 healthy subjects, and assumed that two regions were connected if they displayed statistically significant correlations in cortical thickness. For this analysis the entire cortex was segmented into 54 regions. With this approach, the authors could show that the human brain networks has the characteristics of a small-world network with γ (C/C-r) = 2.36 and λ (L/L-r)= 1.15 and a small-world parameter σ (same as S defined above) = 2.04. Furthermore, the degree distribution corresponded to an exponentially truncated power law, as described by Achard et al. [95]. 4.2.2 Theoretical and modelling approaches Supplementing the empirical studies on neuro-anatomical connectivity several studies have studied the significance of connectivity patterns in complex networks from a more theoretical and modelling based perspective [96]. In particular, Sporns and colleagues have inspired a new approach called 'theoretical neuro-anatomy' [97]. They have pointed out that brains are faced with two opposite requirements: (i) segregation, or local specialization for specific tasks; (ii) integration, combining all the information at a global level [98]. One of the key questions is which kind of anatomical and functional architecture allows segregation and integration to be combined in an optimal way. Sporns et al. studied network models that were allowed to develop to maximize certain properties. Networks which developed when optimising for complexity (defined as an optimal balance between segregation and integration: see [99].) showed small-world characteristics; also the graph theoretical measures of these networks were similar to those of real cortical networks, as described under 4.2.1. [98]. Furthermore, networks selected for optimal complexity had relatively low 'wiring costs'. The authors speculate that this type of network architecture (complex or small-world like) could emerge as an adaptation to rich environments [97,99]. In a later review the authors argued that the emergent complex, small-world architecture of cortical networks might promote high levels of information integration and the formation of a so-called 'dynamic core' [21]. This 'dynamic core' could be a potential substrate of higher cognition and consciousness. The notion of an optimal architecture has also been studied in terms of wiring costs and optimal component placement in neural networks. Karbowski hypothesized that cerebral cortex architecture is designed to save available resources [100]. In a model he studied the trade off between minimizing energetic and biochemical costs (axonal length and number of synapses). The model showed some similarity with small-world networks, but in contrast to these had a distance-dependent probability of connectivity. Kaiser and Hilgetag studied the well known anatomical networks of macaque cortex, and C. Elegans [101]. They showed that optimal component placement could substantially reduce wiring length in all tested networks. However, in the minimally rewired networks the number of processing steps along the shortest paths would increase compared to the real networks. They concluded that neural networks are more similar to network layouts that minimize length of processing paths rather than wiring length. A different conclusion was reached by Chen et al. who studied wiring optimisation of 278 non-pharyngeal neurons of C. Elegans [102]. They solved for an optimal layout of the network in terms of wiring costs and found that most neurons ended up close to their actual position. However, these authors also noted that some neurons got a new position which strongly deviated from the original one, suggesting the involvement of other biological factors. One might speculate that at least one of the other factors could be an optimal architecture in terms of processing steps as suggested by Kaiser and Hilgetag [101]. 4.3 Functional networks The following section on fMRI, EEG, and MEG discusses applications of graph theory to recordings of brain physiology rather than brain anatomy. This approach is based upon the concept of functional or effectivy connectivity, first introduced by Aertsen et al. [103]. The basic assumption is that statistical interdependencies between time series of neuronal activity or related metabolic measures reflect functional interactions between neurons and brain regions. Obviously, patterns of functional connectivity will be restricted by the underlying anatomical connectivity, but they do not have to be identical, and may reveal information beyond the anatomical structure. This is illustrated by the fact that functional connectivity patterns can display rapid task-related changes, as illustrated in several studies discussed below. The basic principles of applying graph analysis to recordings of brain activity are illustrated in Fig. 4. Figure 4. Schematic illustration of graph analysis applied to multi channel recordings of brain activity (fMRI, EEG or MEG). The first step (panel A) consists of computing a measure of correlation between all possible pairs of channels of recorded brain activity. The correlations can be represented in a correlation diagram (panel B, strength of correlation indicated with black white scale). Next a threshold is applied, and all correlations above the threshold are considered to be edges connecting vertices (channels). Thus, the correlation matrix is converted to a unweighted graph (panel C). From this graph various measures such as the clustering coefficient C and the path length L can be computed. For comparisons, random networks can be generated by shuffling the cells of the original correlation matrix of panel B. This shuffling preserves the symmetry of the matrix, and the mean strength of the correlations (panel D). From the random matrices graphs are constructed, and graph measures are computed as before. The mean values of the graph measures for the ensemble of random networks are determined. Finally, The ratio of the graph measures of the original network and the mean values of the graph measures of the random networks can be determined (panel F). 4.3.1 Functional MRI Probably the first attempt to apply graph theoretical concepts to fMRI was a methodological paper by Dodel et al. [104]. In this methodological study, graph theory was used as a new approach to identifying functional clusters of activated brain areas during a task. Starting from BOLD (blood oxygen level dependent) time series of brain activity, a matrix of correlations between the time series was computed, and this matrix was converted to a (undirected, unweighted) graph by assigning edges to all supra-threshold correlations. With this approach the authors were able to demonstrate various functional clusters in the form of subgraphs during a finger tapping task. The authors noted the problem that the threshold had a significant influence on the results, and that criteria for choosing an optimal threshold should be considered. Eguiluz et al. were the first to study clustering coefficients, path lengths, and degree distributions in relation to fMRI data [105,106]. They studied fMRI in 7 subjects during three different finger tapping tasks, and derived matrices of correlations coefficients from the BOLD time series. These matrices were thresholded to obtain unweighted graphs. In this study BOLD time series of each of the fMRI voxels were used. The degree distribution was found to be scale-free, irrespective of the type of task considered. Also, the clustering coefficient was four times larger than that of a random network, and the path length was considered 'close to' that of a random network (in fact depending on the threshold it was 2–3 times larger). The authors concluded that the functional brain networks displayed both scale-free as well as small-world features. Since these properties did not depend upon the task, they assumed that graph analysis mainly reveals invariant properties of the underlying networks, which might be in a 'critical' state [106]. A different approach was taken by the Cambridge group who studied fMRI BOLD time series during a 'resting state' with eyes-closed and no task [95,107-109]. In the first study, fMRI was studied in 12 healthy subjects, and BOLD time series were taken from 90 regions of interest (ROI; 45 from each hemisphere); each of these ROIs corresponded to a specific anatomical region [107]. From these 90 time series a matrix of partial correlations was obtained. The threshold was based upon the significance of the correlations, controlling for false positive findings due to the large number of correlations with the false discovery rate (FDR). The authors found a number of strong and significant correlations, both locally as well as between distant (intra- and inter-hemispherical) brain regions. Hierarchical clustering revealed six major systems consisting of four major cortical lobes, the medial temporal lobe, and a subcortical system. In one patient with a lowered consciousness following an ischemic brain stem lesion a reduction of left intrahemispherical and interhemispherical connections was found. Graph analysis was applied to unweighted graphs using a significance level of p < 0.05 as a threshold for the partial correlation matrix. The clustering coefficient of this graph was 0.25 (random network: 0.12) and the path length 2.82 (random network: 2.58). The ratio C/C-r was 2.08 and the ratio L/L-r was 1.09, both suggestive of a small-world architecture of the resting state functional network. The authors noted that the anatomy did not always predict precisely the functional relationships, and that the resting state connectivity could be a potentially useful marker of brain disease or damage, as illustrated by the patient example. In another study in five subjects the interdependencies between the BOLD time series were studied in the frequency rather than the time domain [108]. Estimators of partial coherence and a normalized mutual information measure were used to construct the graphs. The authors found stronger fronto-parietal connectivity at lower frequencies and involvement of higher frequencies in the case of local connections. Subsequently an extensive graph analysis of this data set was performed [95]. Here, wavelet analysis was used to study connectivity patterns as a function of frequency band. The corresponding correlation matrices were thresholded at p < 0.05 using FDR. The resulting graphs displayed a single giant cluster of highly connected brain regions (79 out of 90). In this graph the strongest hubs corresponded to the most recently developed parts of heteromodal association cortex. The most clear-cut small-world pattern was found for BOLD data in the frequency range of 0.03–0.06 Hz. The clustering coefficient was 0.53, and the path length was 2.49. The authors also considered a small-world index as proposed by Humphries: (C/C-r)/(L-L-r). This index is expected to be > 1 in the case of a small world network (relatively high C and low L compared to corresponding random networks). In the case of the experimental graph the index was 2.08, consistent with a small-world network. The authors also investigated the resilience of the network to either 'random attack' (removal of randomly chosen vertex) or targeted attack' (removal of largest hubs). They found that the real brain networks were as resistant to random attacks as either random or scale-free networks. In contrast, the real brain networks were more resistant to targeted attacks than scale-free networks. This finding, as well as the absence of power law scaling and arguments from brain development (where hubs develop late rather than early) suggest to the authors that brain networks are not scale-free as had been suggested by Eguiluz et al [105]. The authors conclude that the functional networks revealed by graph analysis of resting state fMRI might represent a 'physiological substrate for segregated and distributed information processing'. Finally, the global and local efficiency measures as introduced by Latora and Marchiori were applied in an fMRI study in 15 healthy young and 11 healthy old subjects [109]. The subjects were studied during a resting state no-task paradigm, either with placebo treatment or with sulpiride (an antagonist of the dopamine D2 receptor in the brain). The analysis was based upon wavelet correlation analysis of low frequency correlations between BOLD time series of 90 regions of interest followed by thresholding. The efficiency measures were related to a 'cost' factor, defined as the actual number of edges divided by the maximum number of edges possible in the graph. Local and global efficiency, normalized for cost, were shown to be decreased both in the old compared to the young group and in the sulpiride condition compared to the placebo condition. The effect of age on efficiency was stronger and involved more brain regions than the sulpiride effect. These results were similar irrespective whether the analysis was done on unweighted or weighted graphs reconstructed from the correlation matrix. 4.3.2 EEG and MEG Data derived from functional MRI experiments – whether task related or resting state – are very suitable for graph analysis because of their high spatial resolution, In contrast, spatial resolution is more problematic with neurophysiological techniques such as EEG and MEG. However, these techniques do measure directly the electromagnetic field related to neuronal activity, and have a much higher temporal resolution. The first application of graph analysis to MEG was published in 2004 [110]. In this experiment MEG recordings of five subjects during a no-task, eyes-closed state were analysed. Correlations between the time series of the 126 artefact-free channels studied were analysed with the synchronization likelihood (SL), a non-linear measure of statistical interdependencies [111,112]. The matrices of pair wise SL values were converted to unweighted graphs by assuming an edge between pairs of channels (vertices) with an SL above a threshold, and no edge in the case of a subthreshold SL. In all cases the threshold was chosen such that the mean degree was 15. This analysis was performed for MEG data filtered in different frequency bands. For intermediate frequencies the corresponding graphs were close to ordered networks (high clustering coefficient, and long path length). For low (< 8 Hz) and high (> 30 Hz) frequencies the graphs showed small-world features with high C and low L. These results were fairly consistent when the degree k was varied between 10 and 20, although both C and L increased as a function of K. Graph theoretical properties of MEG recordings in healthy subjects were studied more extensively in a recent paper by Bassett et al. [1,113]. The authors applied graph analysis to MEG recordings in 22 healthy subjects during a no-task, eyes-open state and a simple motor task (finger tapping). Wavelet analysis was used to obtain correlation matrices in the major frequency bands ranging from delta to gamma. After thresholding unweighted, undirected graphs were obtained and characterized in terms of an impressive range of graph theoretical measures such as clustering coefficient, path length, small world metric σ ([C/C-random]./[L/L-random]. see [93].), clustering, characteristic length scale, betweenness and synchronizability (although it is not very well described in the paper the authors probably refer to the eigenvalue ratio based upon graph spectral analysis: λ[N]/λ[2]). In all six frequency bands a small world architecture was found, characterized by values of the small world metric σ between 1.7 and 2. This small-world pattern was remarkably stable over different frequency bands as well as experimental conditions. During the motor task relatively small changes in network topology were observed, mainly consisting of the emergence of long distance interactions between frontal and parietal areas in the beta and gamma bands. Analysis of the synchronizability showed that the networks were in a critical dynamical state close to transition between the non-synchronized and synchronized state. The first application of graph analysis to EEG was published in 2007 [114]. Here a group of 15 Alzheimer patients was compared to a non-demented control group of 13 subjects. EEG recorded from 21 channels during an eyes-closed, no-task state and filtered in the beta band (13–30 Hz) was analysed with the synchronization likelihood. When C and L were computed as a function of threshold (same threshold for controls and patients), the path length was significantly longer in the AD group. For very high thresholds it was noted that the graphs became disconnected, and the pathlength became shorted in the AD group. When C and L were studied as a function of degree k (same K for both groups), the path length was shorter in the AD group, but only for a small range of K (around 3). For both controls and patients the graphs showed small-world features when C and L were compared to those of random control networks (with preserved degree distribution). A higher mini mental state examination score (MMSE) correlated with a higher C and smaller L. The results were interpreted in terms of a less optimal, that is less small-world like network organization in the AD group. Bartolomei et al. applied graph analysis to MEG resting state recordings in a group of 17 patients with brain tumours and 15 healthy controls. [115]. Unweighted graphs were obtained from SL matrices of MEG filtered in different frequency bands, using an average degree k of 10, and a network size (number of channels) of 149. Mean SL values were higher in patients in the lower frequency bands (delta, theta and alpha), and lower in the higher frequency bands (beta and gamma). In patients the ratio of the clustering coefficient and the mean clustering coefficient for random networks (C/C-r) was lower than in controls in the theta and gamma band (for right sided tumours); the ratio of pathlength and mean pathlength of random networks (L/L-r) was lower in patients in the theta band, the beta band (for left sided tumours) and the gamma band (for right sided tumours). The general pattern that emerges from this study is that pathological networks are closer to random networks, and healthy networks are closer to small-world networks. Interestingly, such random networks might have a lower threshold for seizures (which occur frequently in patients with low grade brain tumours) than small-world networks. In two related studies Micheloyannis et al. applied graph analysis to 28 channel EEG recorded during an 2-back working memory test [116,117]. In both studies EEG filtered in different frequency bands was analysed with the SL, and converted to unweighted graphs either as a function of threshold, or as a function of degree K (with K = 5). Also, the ratios C/C-r and L/L-r were computed, relating the C and L to those of random networks with the same degree distribution. In the first study 20 healthy subjects with a few years of formal education and a low IQ were compared to 20 healthy subjects with university degrees and a high IQ [116]. Mean SL did not differ between the two groups. Graph analysis of the no-task condition did not show differences between the groups either. However, during the working memory task the networks in the group with lower education as compared to the highly educated group were closer to small-world networks as revealed by a higher C/C-r and a lower L/L-s in the theta, alpha1, alpha2, beta and gamma band. The results were explained in terms of the neural efficiency hypothesis: the lower educated subjects would 'need' the more optimal small-world configuration during the working memory task to compensate for their lower cognitive abilities. In the second study the 20 control subjects with higher education were compared to 20 patients with schizophrenia (stable disease, under drug treatment). During the working memory task the C/C-r was lower in the schizophrenia group compared to controls in alpha1, alpha2, beta and gamma bands. Consequently, task related networks in the schizophrenia group were less small-world like, and more random compared to controls. Combining these results with those of the first study there is a decrease of small-world features going from controls with low education to controls with high education, and then from controls with high education to schizophrenia patients. One might speculate that the controls with low education display a compensation mechanism during the task, which is not needed by the highly educated controls and which completely fails in the case of the patients. Of interest, the notion of a more random network in schizophrenia has recently been confirmed in a study in 40 patients and 40 controls [118]. In this EEG based study the patients were characterized by a lower clustering coefficient, a shorter path length and a lower centrality index of the major network hubs. It should be noted that the patients in the Micheloyannis et al and the Breakspear et al studies were treated with antipsychotic drugs, and that an influence of the drug treatment on the network features was found in the Breakspear et al study. Thus, the 'network randomization' could reflect both disease as well as pharmacological effects. The two studies by Micheloyannis et al. [116,117]. and the study by Bassett et al. [113]. showed the influence of a cognitive or motor task on network topology. This raises the question to what extent network features such as C and L reflect 'state' or 'trait' characteristics. In this context, changes during sleep are of interest. Ferri et al. showed that network properties change during sleep [119]. In 10 healthy subjects 19 channel EEG recordings filtered between 0.25–2.5 Hz were analysed with the synchronization likelihood. Unweighted networks were derived from the SL matrices with a fixed K = 3. The ratio C/C-r but not the ratio L/L-r was found to increase during all sleep stages compared to the awake state; however there were no differences between the various sleep stages. When the sleep architecture was studied in more detail taking into account to CAP (cyclic alternating pattern) phases a higher increase in C/C-r during the CAP A1 phase than during CAP B phase was found. Thus networks features can change during a cognitive task as well as under the influence of sleep. However, there is preliminary evidence that network properties have strong 'trait' characteristics as well. Dirk Smit et al. applied graph analysis to no-task EEG recordings in a large sample of 732 healthy subjects, consisting of mono and dizygotic twins and their siblings (Smit et al, 2006) [120]. In a previous study it was already shown that the mean SL has a strong genetic component, especially in the alpha band (Posthuma et al., 2005) [121]. In the study of Smit et al, both C and L in showed substantial and significant heritability in theta, alpha1, alpha2, beta1, beta2 and beta 3 bands. Furthermore, small-world like properties of the theta and beta band connectivity were related to individual differences in verbal comprehension [120]. The change in network properties during a physiological change in level of consciousness such as sleep raises the question whether network properties might also be affected by pathological changes in consciousness such as occur during epileptic seizures. Two modelling studies have pointed at the importance of network topology for spread of epileptic activity in a network [72,74]. A first preliminary report on network analysis of EEG depth recordings in a single patient during an epileptic seizure was published by Wu and Guan [122]. The authors constructed graphs with N = 30 by using both channels (six) and different frequency bands (five) to construct un weighted networks with degrees varying from 4–7. The bispectrum was used to extract phase coupling information form the EEG. During the seizure a change in network configuration was detected in the direction of a small-world network: there was an increase in C and a decrease of L. Conversely, one might argue that the preceding interictal network was relatively more random. In a larger study Ponten et al. investigated seven patients during temporal lobe seizures recorded with intracranial depth electrodes [123]. EEG time series filtered in various frequency bands were analysed with the synchronization likelihood, and the SL matrices were converted to unweighted graphs with a fixed degree of 6. A slightly modified definition of L was used (L was defined as the harmonic mean instead of the arithmetic mean of the shortest path lengths: see section 3.1 of this paper) which dealt conveniently with the problem of disconnected points. During seizures the ratio C /C-r increased in delta, theta and alpha bands; L/L-r also increased in the same bands. Thus ictal changes reflected a movement away from a random interictal towards a more ordered ictal network configuration. This suggests that epilepsy might perhaps be characterized be interictal networks with a pathological random structure. Such a random structure has an even lower threshold for the spreading of seizures than the normal small-world configuration (random networks are more synchronizable than small-world networks: see [60,61].); the results of Bartolomei et al. [115]. seem to be in agreement with this hypothesis and suggest that 'network randomisation' might be a general result of brain damage. Needless to say that this bold hypothesis has to be explored in further studies. 5. Conclusions and future prospects To conclude this review we would like to draw some conclusions and suggest a number of problems to be addressed by future research. A first important conclusion is that the modern theory of networks, which originated with the discovery of small-world and scale-free networks, is a very useful framework for the study of large scale networks in the brain. There are several reasons for this: (i) the new theory provides us with powerful realistic models of complex networks in the brain; (ii) a large and still increasing number of measures becomes available to study topological and dynamical properties of these networks; (iii) the theory allows us to better understand the correlations between network structure and the processes taking place on these networks, in particular synchronization processes; (iv) by relating structure to function the notion of an optimal network (in terms of balancing segregation and integration, and performance and cost) can be formulated; (v) the theory provides scenario's how complex networks might develop, and how they might responds to different types of damage (random error versus targeted attack). These considerations explain the motivation to apply modern network theory to neuroscience. A second conclusion is that modelling studies with neural networks underscore importance of structure function relationships suggested by more fundamental work, and point in the direction of systems with a critical dynamics close to onset of synchronization. Of considerable clinical interest is the work suggesting a relationship between network structure and pathological synchronization, providing a possible mechanism for [72,74,122,123]. Thirdly, anatomical studies suggest that neural networks, ranging from the central nervous system of C. elegans to cortical networks in the cat and macaque, may be organized as small-world networks, and that patterns of functional connectivity may follow the same pattern [83,85-87]. Fourth, some preliminary conclusions can be drawn from studies of functional connectivity in humans: (i) most studies point in the direction of a small-world pattern for functional connectivity, although scale-free networks have also been described (Eguiluz et al., 2005); (ii) the small-world topology of functional brain networks is very constant across techniques, conditions and frequency bands; tasks induce only minor modifications; (iii) the architecture of functional brain networks may reflect genetic factors and is related to cognitive performance; (iv) different types of brain disease can disrupt the optimal small-world pattern, sometimes giving rise to more random networks which may be associated with cognitive problems as well as a lower threshold for seizures (pathological hypersynchronization). Some of these conclusions may provide useful starting points for future studies. However, any future work in this field will also have to consider a number of methodological issues. For one thing it is not yet clear what is the optimal way to convert functional imaging data (derived from fMRI, EEG or MEG) to graphs for further analysis. In the case of EEG and MEG the influence of volume conduction on graph measures has not been considered, although it is possible that assessment of the clustering coefficient is biased by this. This raises the question whether the analysis should be done in signal or in source space, and if source reconstruction is needed, what algorithm should be used. Another problem is somewhat arbitrary threshold that is needed to convert a matrix of correlations to an unweighted graph. The choice of the threshold remains a bit arbitrary, and studying a whole range of thresholds may raise statistical problems (type II errors) because of the large number of tests that have to be done. One way out may be to model correlation matrices as weighted graphs, taking into account the full information available. However, at this time only few measures are available for weighted graphs. A further problem that frequently occurs when converting matrices of correlations to graphs is the fact that some of the nodes may become disconnected from the network; this presents difficulties in the calculation of clustering coefficients and path lengths. Use of global and local 'efficiency' measures, and harmonic instead of arithmetic means might provide a solution here [27]. A final remark is that the whole spectrum of graph theoretical measures has not yet been explored in most neuroscience studies. An example of study that makes use of a broad range of graph measures is the recent paper by Bassett el al. [113]. Future studies could gain by a careful consideration of all the graph measures which are currently available, and the new measures that are described in physics papers. Finally, a number of conceptual issues for future studies deserve mentioning. Some of the questions that have to be addressed by new studies are the following: (i) how does network structure change during growth and development? Some theoretical studies have suggested scenario's explaining how small-world or scale-free networks could emerge by activity dependent changes, but whether these scenario's are a proper description of human brain development is an open question; (ii) related to this problem: it is important to know how do genetic and environmental factors influence network features? An influence of genetic factors on network properties in young adults has been suggested, but the underlying mechanisms are completely unknown. (iii) which network factors provide the best explanation for cognitive functioning? It is clear that certain network properties may be associated with increased synchronizability, and that cognition depends upon the formation and dissolution of synchronized networks in the brain? It is not yet known which network properties are the best predictors of cognitive functioning; (iv) is it possible to detect different characteristic scenarios by which brain pathology changes network structure and function? In particular, could it be that different types of brain disease may be related to either 'random error' or 'targeted attack' of brain networks, and is it possible to predict when and how brain disease will give rise to clinical symptoms? Related to this: could a better understanding of neurological disease at the network level give rise to new treatment approaches? (v) is there a relationship between network properties and susceptibility for seizures? Here the hypothesis that brain disease will convert a healthy small-world network to a more random network with a stronger synchronizability, and thus a lower threshold for pathological synchronization/seizures needs further exploration. (vi) is there a relationship between the 'giant cluster' which emerges at the onset of synchronization and consciousness? The relationship between a single cluster of synchronized neurons and brain areas and consciousness has been suggested by several authors [124,125]. Graph theory could extend these ideas by providing an explanation how and when such a giant cluster will appear in neuronal networks, and what properties it is likely to have. Thanks to Els van Deventer who helped to retrieve many of the papers used in this review. 1. Sporns O, Honey Ch J: Small world inside big brains. PNAS 2006, 51:19219-19220. Publisher Full Text 2. Le van Quyen M: Disentangling the dynamic core: a research program for a neurodynamics at the large scale. Biol Res 2003, 36:67-88. PubMed Abstract 3. Amaral LAN, Ottino JM: Complex networks. Augmenting the framework for the study of complex systems. Eur Phys J B 2004, 38:147-162. Publisher Full Text 4. Stam CJ: Nonlinear dynamical analysis of EEG and MEG: review of an emerging field. Clin Neurophysiol 2005, 116:2266-2301. PubMed Abstract | Publisher Full Text 5. Lopes da Silva FH, Blanes W, Kalitzin SN, Parra J, Suffczynski P, Velis DN: Dynamical diseases of brain systems: different routes to seizures. IEEE Transactions On Biomedical Engineering 2003, 50:540-548. PubMed Abstract | Publisher Full Text 6. Lehnertz K, Litt B: The first collaborative workshop on seizure prediction: summary and data description. Clin Neurophysiol 2005, 116:493-505. PubMed Abstract | Publisher Full Text 7. Lehnertz K, Mormann F, Osterhage H, Muller A, Prusseit J, Chernihovskyi A, Staniek M, Krug D, Bialonski S, Elger CE: State-of-the-art of seizure prediction. J Clin Neurophysiol 2007, 24:147-53. PubMed Abstract | Publisher Full Text 8. Pereda E, Quian Quiroga R, Bhattacharya J: Nonlinear multivariate analysis of neurophysiological signals. Progress in Neurobiology 2005, 77:1-37. PubMed Abstract | Publisher Full Text 9. Uhlhaas PJ, Singer W: Neural synchrony in brain disorders: relevance for cognitive dysfunctions and pathophysiology. Neuron 2006, 52:155-168. PubMed Abstract | Publisher Full Text 10. Linkenkaer-Hansen K, Nikouline VV, Palva JM, Ilmoniemi RJ: Long-range temporal correlations and scaling behavior in human brain oscillations. J Neurosci 2001, 21:1370-1377. PubMed Abstract | Publisher Full Text 11. Nikulin VV, Brismar T: Long-range temporal correlations in alpha and beta oscillations: effect of arousal level and test-retest reliability. Clin Neurophysiol 2004, 115:1896-1908. PubMed Abstract | Publisher Full Text 12. Stam CJ, de Bruin EA: Scale-free dynamics of global functional connectivity in the human brain. Hum Brain Mapp 2004, 22:97-109. PubMed Abstract | Publisher Full Text 13. Stam CJ, Montez T, Jones BF, Rombouts SARB, Made Y van der , Pijnenburg YAL, Scheltens Ph: Disturbed fluctuations of resting state EEG synchronization in Alzheimer patients. Clin Neurophysiol 2005, 116:708-715. PubMed Abstract | Publisher Full Text 14. Watts DJ, Strogatz SH: Collective dynamics of "small-world" networks. Nature 1998, 393:440-442. PubMed Abstract | Publisher Full Text 15. Barabasi AL, Albert R: Emergence of scaling in random networks. Science 1999, 286:509-512. PubMed Abstract | Publisher Full Text 16. Jeong H, Tombor B, Albert R, Oltvar ZN, Barabasi AL: The large-scale organization of metabolic networks. Nature 2000, 407:651-654. PubMed Abstract | Publisher Full Text 17. Strogatz SH: Exploring complex networks. Nature 2001, 410:268-276. PubMed Abstract | Publisher Full Text 18. Li W, Cai X: Statistical analysis of airport network of China. Phys Rev E 2004, 69(4 Pt 2):046106. Publisher Full Text 19. Sporns O, Chialvo DR, Kaiser M, Hilgetag CC: Organization, development and function of complex brain networks. Trends in Cognitive Sciences 2004, 8:418-425. PubMed Abstract | Publisher Full Text 20. Bassett DS, Bullmore E: Small-world brain networks. Neuroscientist 2006, 12:512-523. PubMed Abstract | Publisher Full Text 21. Solomonov R, Rapoport A: Connectivity of random nets. Bulletin of Mathematical Biophysics 1951, 13:107-117. Publisher Full Text 22. Erdos P, Renyi A: On the evolution of random graphs. Publications of the Mathematical Institute of the Hungarian Academy of Sciences 1960, 12:17-61. 23. Newman MEJ: The structure and function of complex networks. SIAM Review 2003, 45:167-256. Publisher Full Text 24. Dorogovtsev SN, Mendes JFF: Evolution of networks. From biological nets to the Internet and WWW. Oxord: Oxford University press; 2003. 25. Durrett R: Random graph dynamics. Cambridge series in statistical and probabilistic mathematics. Cambridge: Cambridge University Press; 2007. 26. Wang XF, Chen G: Complex networks: small-world, scale-free and beyond. IEEE circuits and systems magazine 2003, 6-20. Publisher Full Text 27. Grigorov MG: Global properties of biological networks. DDT 2005, 10:365-372. PubMed Abstract | Publisher Full Text 28. Boccaletti S, Latora V, Moreno Y, Chavez M, Hwang D-U: Complex networks: structure and dynamics. Physics Reports 2006, 424:175-308. Publisher Full Text 29. Newman MEJ, Barabasi AL, Watts DJ Eds: The structure and dynamics of networks. Princeton and Oxford: Princeton University Press; 2006. 30. Amaral LAN, Scala A, Barthelemy M, Standly HE: Classes of small-world networks. PNAS 2000, 97:11149-11152. PubMed Abstract | Publisher Full Text | PubMed Central Full Text 31. Cohen R, Havlin S: Scale-free networks are ultrasmall. Phys Rev Lett 2003, 90:058701. PubMed Abstract | Publisher Full Text 32. Milo R, Shen-Orr S, Itzkovitz S, Kashtan N, Chklovskii D, Alon U: Network motifs: simple building blocks of complex networks. Science 2002, 298:824-827. PubMed Abstract | Publisher Full Text 33. Artzy-Randrup Y, Fleishman SJ, Ben-Tal N, Stone L: Comment on "Network motifs: simple building blocks of complex networks" and "Superfamilies of evolved and designed networks". Science 2004, 305(5687):1107. PubMed Abstract | Publisher Full Text 34. Parris PE, Kenkre VM: Traversal times for random walks on small-world networks. Phys Rev E 2005, 72(5 Pt 2):056119. Publisher Full Text 35. Palla G, Derenhi I, Farkas I, Vicsek T: Uncovering the overlapping community structure of complex networks in nature and society. Nature 2005, 435:814-818. PubMed Abstract | Publisher Full Text 36. Latora V, Marchiori M: Efficient behavior of small-world networks. Phys Rev Lett 2001, 87:198701. PubMed Abstract | Publisher Full Text 37. Larota V, Marchiori M: Economic small-world behavior in weighted networks. 38. Newman MEJ, Girvan M: Finding and evaluating community structure in networks. Phys Rev E 2004, 69(2 Pt 2):026113. Publisher Full Text 39. Park K, Lai Y-Ch, Ye N: Characterization of weighted complex networks. Phys Rev E 2004, 70(2 Pt 2):026109. Publisher Full Text 40. Barthelemy M, Barrat A, Pastor-Satorras R, Vespignani A: Characterization and modelling of weighted networks. Physica A 2005, 346:34-43. Publisher Full Text 41. Barrat A, Bathelemy M, Pastor-Satorras R, Vespignani A: The architecture of complex weighted networks. PNAS 2005, 101:3747-3752. Publisher Full Text 42. Onnela J-P, Saramaki J, Kertesz J, Kaski K: Intensity and coherence of motifs in weighted complex networks. Phys Rev E 2005, 71(6 Pt 2):065103(R). Publisher Full Text 43. Vragovic I, Louis E, Diaz-Guillera A: Efficiency of informational transfer in regular and complex networks. Phys Rev E 2005, 71(3 Pt 2A):036122. Publisher Full Text 44. Crucitti P, Latora V, Marchiori M, Rapisarda A: Efficiency of scale-free networks: error and attack tolerance. Physica A 2003, 320:622-642. Publisher Full Text 45. Motter AE, Matias MA, Kurths J, Ott E: Dynamics on complex networks and applications. Physica D 2006, 224:vii-viii. Publisher Full Text 46. Barahona M, Pecora LM: Synchronization in small-world systems. Phys Rev Lett 2002, 89:art 054191. Publisher Full Text 47. Hong H, Choi Y: Synchronization on small-world networks. Phys Rev E 2002, 65:026139. Publisher Full Text 48. Nishikawa T, Motter AE, Lai Y-Ch, Hoppensteadt FC: Heterogeneity in oscillator networks: are smaller worlds easier to synchronize? Phys Rev Lett 2003, 91:014101. PubMed Abstract | Publisher Full Text 49. Atay FM, Jost J, Wende A: Delays, connection topology, and synchronization of coupled chaotic maps. Phys Rev Lett 2004, 92:144101. PubMed Abstract | Publisher Full Text 50. Atay FM, Biyikoglu T: Graph operation and synchronization of complex networks. Phys Rev E 2005, 72:016217. Publisher Full Text 51. Donetti L, Hurtado PI, Munoz MA: Entangled networks, synchronization, and optimal network topology. Phys Rev Lett 2005, 95:188701. PubMed Abstract | Publisher Full Text 52. Lee D-S: Synchronization transition in scale-free networks: clusters of synchrony. Phys Rev E 2005, 72(2 Pt 2):026208. Publisher Full Text 53. Zhou H, Lipowsky R: Dynamic pattern evolution on scale-free networks. PNAS 2005, 102:10052-10057. PubMed Abstract | Publisher Full Text | PubMed Central Full Text 54. Motter AE, Zou ch, Kurths : Network synchronization, diffusion and the paradox of heterogeneity. Phys Rev E 2005, 71(1 Pt 2):016116. Publisher Full Text 55. Chavez M, Hwang D-U, Hentschel HGE, Boccaletti S: Synchronization is enhanced in weighted complex networks. Phys Rev Lett 2005, 94:218701. PubMed Abstract | Publisher Full Text 56. Chavez M, Hwang D-U, Amann A, Boccaletti S: Synchronizing weighted complex networks. Chaos 2006, 16:015106. PubMed Abstract | Publisher Full Text 57. Zhou C, Motter AE, Kurths J: Universality in the synchronization of weighted random networks. Phys Rev Lett 2006, 96:034101. PubMed Abstract | Publisher Full Text 58. Zhou C, Kurths J: Dynamical weights and enhanced synchronization in adaptive complex networks. Phys Rev Lett 2006, 96:164102. PubMed Abstract | Publisher Full Text 59. Van den Berg , van Leeuwen C: Adaptive rewiring in chaotic networks renders small-world connectivity with consistent clusters. Europhys Lett 2004, 65:459-464. Publisher Full Text 60. Kwok HF, Jurica P, Raffone A: Robust emergence of small-world structure in networks of spiking neurons. Cogn Neurodyn 2007. DOI 10.1007/s11571-006-9005-5 61. Arenas A, Diaz-Guilera A, Perez-Vicente CJ: Synchronization reveals topological scales in complex networks. Phys Rev Lett 2006, 96:114102. PubMed Abstract | Publisher Full Text 62. Zemanova L, Zhou Ch, Kurths J: Structural and functional clusters of complex brain networks. Physica D 2006, 224:202-212. Publisher Full Text 63. Zhou C, Zemanova L, Zamora G, Hilgetag C, Kurths J: Hierarchical organization unveiled by functional connectivity in complex brain networks. Phys Rev Lett 2006, 97:238103. PubMed Abstract | Publisher Full Text 64. Lago-Fernandez LF, Huerta R, Corbacho F, Siguenza JA: Fast response and temporal coherent oscillations in small-world networks. Phys Rev Lett 2000, 84:2758-2761. PubMed Abstract | Publisher Full Text 65. Roxin A, Riecke H, Solla SA: Self-sustained activity in a small-world network of excitable neurons. Phys Rev Lett 2004, 92:198101. PubMed Abstract | Publisher Full Text 66. Masuda N, Aihara K: Global and local synchrony of coupled neurons in small-world networks. Biol Cybern 2004, 90:302-3-9. Publisher Full Text 67. Netoff TI, Clewley R, Arno S, White JA: Epilepsy in small-world networks. J Neurosci 2004, 24:8075-8083. PubMed Abstract | Publisher Full Text 68. Epilepsia 2002, 43:103-113. PubMed Abstract | Publisher Full Text 69. Percha B, Dzakpasu R, Zochowski M: Transition from local to global phase synchrony in small world neural network and its possible implications for epilepsy. Phys Rev E 2005, 72:031909. Publisher Full Text 70. Kozma R, Puljic M, Balister P, Bollobas B, Freeman WJ: Phase transitions in the neuropercolation model of neural populations with mixed local and non-local interactions. Biol Cybern 2005, 92:367-379. PubMed Abstract | Publisher Full Text 71. Dyhrfjeld-Johnsen J, Santhajumar V, Morgan RJ, Huerta R, Tsiming L, Sotesz I: Topological determinants of epileptogenesis in large-scale structural and functional models of the dentate gyrus derived from experimental data. J Neurophysiol 2007, 97:1566-1587. PubMed Abstract | Publisher Full Text 72. French DA, Gruenstein : An integrate-and-fire model for synchronized bursting in a network of cultured cortical neurons. J Comput Neurosci 2006, 21:227-241. PubMed Abstract | Publisher Full Text 73. Bettencourt LMA, Stephens GJ, Ham MI, Gross GW: Functional structure of cortical neuronal networks grown in vitro. Phys Rev E 2007, 75:021915. Publisher Full Text 74. Vragovic I, Louis E, Degli Esposti Boschie C, Ortega G: Diversity-induced synchronized oscillations in close-to-threshold excitable elements arranged on regular networks: effects of network Physica D 2006, 219:111-119. Publisher Full Text 75. Shin Ch-W, Kim S: Self-organized criticality and scale-free properties in emergent functional neural networks. Phys Rev E 2006, 74:045101. Publisher Full Text 76. Vreeswijk V, Sompolinsky H: Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science 1996, 274:1724-1726. PubMed Abstract | Publisher Full Text 77. Paula DR, Araujo AD, Andrade JS Jr, Herrmann HJ, Galles JAC: Periodic neural activity induced by network complexity. Phys Rev E 2006, 74:017102. Publisher Full Text 78. Honey CJ, Kotter R, Breakspear M, Sporns O: Network structure of cerebral cortex shapes functional connectivity on multiple time scales. PNAS 2007, in press. PubMed Abstract | Publisher Full Text 79. Anischenko A, Treves A: Autoassociative memory retrieval and spontaneous activity bumps in small-world networks of integrate-and-fire neurons. J Physiol 2007, 100(4):225-236. doi: 10.1016/j.jphysparis.207.01.004 80. Hilgetag CC, Burns GAPC, O'Neill MAO, Scannell JW, Young MP: Anatomical connectivity defines the organization of clusters of cortical areas in the macaque monkey and the cat. Phil Trans R Soc Lond B 2000, 355(1393):91-110. Publisher Full Text 81. Stephan KE, Hilgetag C-C, Burns GAPC, O'Neill MA, Young MP, Kotter R: Computational analysis of functional connectivity between areas of primate cerebral cortex. Phil Trans R Soc Lond B 2000, 355:111-126. Publisher Full Text 82. Kotter R, Sommer FT: Global relationship between anatomical connectivity and activity propagation in the cerebral cortex. Phil Trans R Soc Lond B 2000, 355(1393):127-134. Publisher Full Text 83. Sporns O, Zwi JD: The small-world of the cerebral cortex. Neuroinformatics 2004, 2:145-162. PubMed Abstract | Publisher Full Text 84. Sporns O, Kotter R: Motifs in brain networks. PLOS Biology 2004, 2:1910-1918. Publisher Full Text 85. Kaiser M, Hilgetag CC: Edge vulnerability in neural and metabolic networks. Biol Cybern 2004, 90:311-317. PubMed Abstract | Publisher Full Text 86. Buzsaki G, Geisler C, Henze DA, Wang X-J: Interneuron diversity series: circuit complexity and axon wiring economy of cortical interneurons. TRENDS in Neurosciences 2004, 27:186-193. PubMed Abstract | Publisher Full Text 87. Manev R, Manev H: The meaning of mammalian adult neurogenesis and the function of newly added neurons: the 'small-world' network. Med Hypotheses 2005, 64:114-117. PubMed Abstract | Publisher Full Text 88. Humphries MD, Gurney K, Prescott TJ: The brainstem reticular formation is a small-world, not scale-free network. Proc R Soc Lond B 2006, 273:503-511. Publisher Full Text 89. He Y, Chen ZJ, Evans AC: Small-world anatomical networks in the human brain revealed by cortical thickness form MRI. Cereb Cortex 2006. doi: 10.1093/cercor/bh149 90. Achard S, Salvador R, Whitcher B, Suckling J, Bullmore E: A resilient, low-frequency, small-world human brain functional network with highly connected association cortical hubs. J Neurosci 2006, 26:63-72. PubMed Abstract | Publisher Full Text 91. Stephan KE: On the role of general system theory for functional neuroimaging. J Anat 2004, 205:443-470. PubMed Abstract | Publisher Full Text | PubMed Central Full Text 92. Sporns O, Tononi G, Edelman GM: Theoretical neuroanatomy: relating anatomical and functional connectivity in graphs and cortical connection matrices. Cereb Cortex 2000, 10:127-141. PubMed Abstract | Publisher Full Text 93. Sporns O, Tononi G, Edelman GE: Connectivity and complexity: the relationship between neuroanatomy and brain dynamics. Neural Networks 2000, 13:909-922. PubMed Abstract | Publisher Full Text 94. Sporns O, Tononi G: Classes of network connectivity and dynamics. Complexity 2002, 7:28-38. Publisher Full Text 95. Karbovski J: Optimal wiring principle and plateaus in the degree of separation for cortical neurons. Phys Rev Lett 2001, 86:3674-3677. PubMed Abstract | Publisher Full Text 96. Kaiser M, Hilgetag CC: Nonoptimal component placement, but short processing paths, due to long-distance projections in neural systems. PloS Computational Biology 2006, 2:805-815. Publisher Full Text 97. Chen BL, Hall D, Chklovskii DB: Wiring optimization can related neuronal structure and function. PNAS 2006, 12:4723-4728. Publisher Full Text 98. Aertsen AMHJ, Gerstein GL, Habib MK, Palm G: Dynamics of neuronal firing correlation: modulation of 'effective connectivity'. J Neurophysiol 1989, 61:900-917. PubMed Abstract | Publisher Full Text 99. Dodel S, Hermann JM, Geisel T: Functional connectivity by cross-correlation clustering. Neurocomputing 2002, 44–46:1065-1070. Publisher Full Text 100. Eguiluz VM, Chialvo DR, Cecchi GA, Baliki M, Apkarian AV: Scale-free brain functional networks. Phys Rev Lett 2005, 94:018102. PubMed Abstract | Publisher Full Text 101. Salvador R, Suckling J, Coleman MR, Pickard JD, Menon D, Bullmore E: Neurophysiological architecture of functional magnetic resonance images of human brain. Cereb Cortex 2005, 15:1332-1342. PubMed Abstract | Publisher Full Text 102. Salvador R, Suckling J, Schwarzbauer Ch, Bullmore E: Undirected graphs of frequency-dependent functional connectivity in whole brain networks. Phil Trans R Soc Lond B 360(1457):937-946. doi: 10.1098/rstb.2005.1645. Publisher Full Text 103. Achard S, Bullmore E: Efficiency and cost of economical brain functional networks. PloS Comp Biol 2007, 3(2):e17. Publisher Full Text 104. Stam CJ: Functional connectivity patterns of human magnetoencephalographic recordings: a "small-world" network? Neurosci Lett 2004, 355:25-28. PubMed Abstract | Publisher Full Text 105. Stam CJ, van Dijk BW: Synchronization likelihood: an unbiased measure of generalized synchronization in multivariate data sets. Physica D 2002, 163:236-241. Publisher Full Text 106. Montez T, Linkenkaer-Hansen K, van Dijk BW, Stam CJ: Synchronization likelihood with explicit time-frequency priors. Neuroimage 2006, 33:1117-1125. PubMed Abstract | Publisher Full Text 107. Bassett DS, Meyer-Linderberg A, Achard S, Duke Th, Bullmore E: Adaptive reconfiguration of fractal small-world human brain functional networks. PNAS 2006, 103:19518-19523. PubMed Abstract | Publisher Full Text | PubMed Central Full Text 108. Stam CJ, Jones BF, Nolte G, Breakspear M, Scheltens Ph: Small-world networks and functional connectivity in Alzheimer's disease. Cereb Cortex 2007, 17:92-99. PubMed Abstract | Publisher Full Text 109. Bartolomei F, Bosma I, Klein M, Baayen JC, Reijneveld JC, Postma TJ, Heimans JJ, van Dijk BW, de Munck JC, de Jongh A, Cover KS, Stam CJ: Disturbed functional connectivity in brain tumour patients: evaluation by graph analysis of synchronization matrices. Clin Neurophysiol 2006, 117:2039-2049. PubMed Abstract | Publisher Full Text 110. Micheloyannis S, Pachou E, Stam CJ, Vourkas M, Erimaki S, Tsirka V: Using graph theoretical analysis of multi channel EEG to evaluate the neural efficiency hypothesis. Neurosci Lett 2006, 402:273-277. PubMed Abstract | Publisher Full Text 111. Micheloyannis S, Pachou E, Stam CJ, Breakspear M, Bitsios P, Vourkas M, Erimaki S, Zervakis M: Small-world networks and disturbed functional connectivity in schizophrenia. Schizophr Res 2006, 87:60-66. PubMed Abstract | Publisher Full Text 112. Breakspear M, Rubinov M, Knock S, Williams LM, Harris AWF, Micheloyannis S, Terry JR, Stam CJ: Graph analysis of scalp EEG data in schizophrenia reveals a random shift of nonlinear nentwork Neuroimage 2006., 31(Suppl 1) 671 W-AM PubMed Abstract 113. Ferri R, Rundo F, Brunt O, Terzano MG, Stam CJ: Small-world network organization of functional connectivity of EEG slow-wave activity during sleep. Clin Neurophysiol 2007, 118:449-456. PubMed Abstract | Publisher Full Text 114. Smit DJA, Stam CJ, Boomsma DI, Posthyma D, de Geus EJC: Heritability of 'small world' architecture of functional brain connectivity. 115. Posthuma D, de Geus EJC, Mulder EJCM, Smit DJA, Boomsma DI, Stam CJ: Genetic components of functional connectivity in the brain: the heritability of synchronization likelihood. Hum Brain Mapp 2005, 26:191-198. PubMed Abstract | Publisher Full Text 116. Wu H, Li X, Guan X: Networking property during epileptic seizure with multi-channel EEG recordings. 117. Ponten SC, Bartolomei F, Stam CJ: Small-world networks and epilepsy: graph theoretical analysis of intracranially recorded mesial temporal lobe seizures. Clin Neurophysiol 2007, 118:918-927. PubMed Abstract | Publisher Full Text 118. DeHaene S, Naccache L: Towards a cognitive neuroscience of consciousness: basic evidence and a workspace framework. Cognition 2001, 79:1-37. PubMed Abstract | Publisher Full Text 119. Tononi G, Edelman GM: Consciousness and complexity. Science 1998, 282:1846-1851. PubMed Abstract | Publisher Full Text Sign up to receive new article alerts from Nonlinear Biomedical Physics
{"url":"http://www.nonlinearbiomedphys.com/content/1/1/3","timestamp":"2014-04-17T00:53:42Z","content_type":null,"content_length":"268501","record_id":"<urn:uuid:722d9d02-8e47-4568-a8b9-2bc2bd01d9c3>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00261-ip-10-147-4-33.ec2.internal.warc.gz"}
Totaro, Burt - Department of Pure Mathematics and Mathematical Statistics, University of Cambridge • Part IB GEOMETRY, Examples sheet 2 (Lent 2011, Burt Totaro) (1) Let U be an open subset of R2 • Part III Algebraic Cycles, first set of notes Burt Totaro • The automorphism group of an ane quadric Burt Totaro • Birational geometry of quadrics in characteristic 2 Burt Totaro • The topology of smooth divisors and the arithmetic of abelian varieties • ICM 2002 Vol. III 1{3 Topology of Singular Algebraic Varieties • The curvature of a Hessian metric Burt Totaro • The resolution property for schemes and stacks Burt Totaro • Part IB Geometry, Examples sheet 1 (Lent 2011) Burt Totaro • Line bundles with partially vanishing cohomology Burt Totaro • Complexi cations of nonnegatively curved manifolds Burt Totaro • Euler and algebraic geometry Burt Totaro • Non-injectivity of the map from the Witt group of a variety to the Witt group of its function eld • Birational geometry of quadrics Burt Totaro • The cone conjecture for Calabi-Yau pairs in dimension Burt Totaro • Algebraic surfaces and hyperbolic geometry Burt Totaro • The torsion index of E 8 and other groups Burt Totaro • Cheeger manifolds and the classi cation of biquotients Burt Totaro • Cambridge Hotels and Maps A list of hotels that are within walking distance to the bus routes. CMS is on (off Clarkson • The torsion index of E8 and other groups Burt Totaro • Curvature, diameter, and quotient manifolds Burt Totaro • Curvature, diameter, and quotient manifolds Burt Totaro • Part II Representation Theory Sheet 1 Unless otherwise stated, groups here are finite, and all vector spaces are finite dimensional • Proceedings of Symposia in Pure Mathematics The Chow Ring of a Classifying Space • Cheeger manifolds and the classification of biquotients Burt Totaro • Splitting elds for E 8 -torsors Burt Totaro • Hilbert's fourteenth problem over finite fields, and a conjecture on the cone of curves • Part IB GEOMETRY, Examples sheet 3 (Lent 2011, Burt Totaro) (1) Show that the tangent space to S2 • The automorphism group of an affine quadric Burt Totaro • Examples sheet 2 for Part III Lie groups, Lie algebras, and their representations • The elliptic genus of a singular variety Burt Totaro • The topology of smooth divisors and the arithmetic of abelian varieties • Splitting fields for E8-torsors Burt Totaro • The torsion index of the spin groups Burt Totaro • Projective resolutions of representations of GL(n) Burt Totaro • Tensor products in p-adic Hodge theory Burt Totaro * • Proceedings of Symposia in Pure Mathematics The Chow Ring of a Classifying Space • Moving codimension-one subvarieties over finite fields Burt Totaro • Examples sheet 3 for Part III Lie groups, Lie algebras, and their representations • Towards a Schubert calculus for complex re ection Burt Totaro • Moving codimension-one subvarieties over nite elds Burt Totaro • Part II Representation Theory Sheet 2 Unless otherwise stated, all vector spaces are finite dimensional over a field F of charac- • Euler and algebraic geometry Burt Totaro • Chow groups, Chow cohomology, and linear varieties Burt Totaro * • Torsion algebraic cycles and complex cobordism Burt Totaro • Cohomology of semidirect product groups Burt Totaro • Euler characteristics for p-adic Lie groups Burt Totaro • Chern numbers for singular varieties and elliptic homology • Hilbert's fourteenth problem over finite fields, and a conjecture on the cone of curves • Algebraic Geometry (M24) Algebraic geometry is the study of the set of solutions of a system of polynomial equations, • Pseudo-abelian varieties Burt Totaro • Jumping of the nef cone for Fano varieties Burt Totaro • Birational geometry of quadrics in characteristic 2 Burt Totaro • The elliptic genus of a singular variety Burt Totaro • The torsion index of the spin groups Burt Totaro • The resolution property for schemes and stacks Burt Totaro • The curvature of a Hessian metric Burt Totaro • Non-injectivity of the map from the Witt group of a variety to the Witt group of its function field • Complexifications of nonnegatively curved manifolds Burt Totaro • Towards a Schubert calculus for complex reflection Burt Totaro • ICM 2002 Vol. III 13 Topology of Singular Algebraic Varieties • Examples sheet 1 for Part III Algebraic Geometry Burt Totaro • Examples sheet 2 for Part III Algebraic Geometry Burt Totaro • Examples sheet 3 for Part III Algebraic Geometry Burt Totaro • Examples sheet 1 for Part III Lie groups, Lie algebras, and their representations • Part II Representation Theory Sheet 3 Unless otherwise stated, groups here are finite, and all vector spaces are finite dimensional • Part II Representation Theory Sheet 4 Unless otherwise stated, all vector spaces are finite dimensional over C. • Part III Algebraic Cycles, Examples Sheet 1 Burt Totaro • Part III Algebraic Cycles, Examples Sheet 2 Burt Totaro • Examples sheet 2 for Part II Algebraic Topology Burt Totaro • Examples sheet 3 for Part II Algebraic Topology Burt Totaro • Examples sheet 1 for Part IIB Algebraic Topology Burt Totaro • Initial ideals, Veronese subrings, and rates of algebras David Eisenbud * • Commutative Algebra (M24) Commutative algebra is the study of commutative rings, the basic examples being the ring of • Examples sheet 1 for Part III Commutative Burt Totaro • 19 August 2011 Media release • International Mathematical • The Council for the Mathematical Sciences comprises the Institute of Mathematics and its Applications, the Royal Statistical Society, the London Mathematical Society, the Edinburgh Mathematical Society and the Operational Research Society, which are all r • The Rt Hon David Cameron MP c/o David Holmes Prime Minister Mathematics Institute, • 24 October 2011 Professor David Delpy FRS FREng FMedSci • 20 September 2011 Media release • New York University A private university in the public service • 20 September 2011 The Right Hon. David Cameron PC MP • September 28, 2011 The Rt. Hon. David Willetts MP, • Fibre Bundles Prof. B. J. Totaro • Lie Group, Lie Algebra and their Representations Prof. Totaro (B.J.Totaro@dpmms.cam.ac.uk)
{"url":"http://www.osti.gov/eprints/topicpages/documents/starturl/16/002.html","timestamp":"2014-04-18T04:40:10Z","content_type":null,"content_length":"20401","record_id":"<urn:uuid:9215abd7-f647-4034-b6e9-849db0c01c02>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00222-ip-10-147-4-33.ec2.internal.warc.gz"}
Post a reply For "perimeter" you could think: how long would a piece of string be to go around the edge of the circle (or square, or whatever shape). For area, you could think how many brush strokes you would need to paint inside the circle (or square, etc). To help you remember π, have a look at this We also have a page on perimeters and areas here Is any of that a help?
{"url":"http://www.mathisfunforum.com/post.php?tid=2405&qid=23204","timestamp":"2014-04-20T11:26:02Z","content_type":null,"content_length":"17876","record_id":"<urn:uuid:e1725ac6-3f44-423e-b283-25465fb3f462>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00360-ip-10-147-4-33.ec2.internal.warc.gz"}
Alternatives vs. Outcomes: A Note on the Gibbard-Satterthwaite Theorem Weber, Tjark (2009): Alternatives vs. Outcomes: A Note on the Gibbard-Satterthwaite Theorem. Download (130Kb) | Preview The Gibbard-Satterthwaite theorem is a well-known theorem from the field of social choice theory. It states that every voting scheme with at least 3 possible outcomes is dictatorial or manipulable. Later work on the Gibbard-Satterthwaite theorem frequently does not distinguish between alternatives and outcomes, thereby leading to a less general statement that requires the voting scheme to be onto. We show how the Gibbard-Satterthwaite theorem can be derived from the seemingly less general formulation. Item Type: MPRA Paper Original Alternatives vs. Outcomes: A Note on the Gibbard-Satterthwaite Theorem Language: English Keywords: Gibbard-Satterthwaite theorem; infeasible alternatives Subjects: D - Microeconomics > D7 - Analysis of Collective Decision-Making > D71 - Social Choice; Clubs; Committees; Associations Item ID: 17836 Depositing Tjark Weber Date 13. Oct 2009 04:33 Last 08. Jan 2014 07:16 Kenneth J. Arrow. A difficulty in the concept of social welfare. Journal of Political Economy, 58(4):328-346, August 1950. Salvador Barbera. Strategy-proofness and pivotal voters: A direct proof of the Gibbard-Satterthwaite theorem. International Economic Review, 24(2):413-417, 1983. Jean-Pierre Benoit. The Gibbard-Satterthwaite theorem: a simple proof. Economics Letters, 69(3):319-322, December 2000. John Duggan and Thomas Schwartz. Strategic manipulability without resoluteness or shared beliefs: Gibbard-Satterthwaite generalized. Social Choice and Welfare, 17(1):85-93, January 2000. Allan M. Feldman and Roberto Serrano. Welfare Economics and Social Choice Theory. Birkhäuser, 2006. Peter Gärdenfors. A concise proof of a theorem on manipulation of social choice functions. Public Choice, 32:137-142, 1977. Allan Gibbard. Manipulation of voting schemes: a general result. Econometrica, 41(4):587-601, July 1973. Reprinted in Charles K. Rowley, ed., Social Choice Theory (Cheltenham: Edward Elgar, 1993). Eitan Muller and Mark A. Satterthwaite. The equivalence of strong positive association and strategy-proofness. Journal of Economic Theory, 14(2):412-418, April 1977. References: Tobias Nipkow. Social choice theory in HOL: Arrow and Gibbard-Satterthwaite. Journal of Automated Reasoning, 2009. To appear. Paramesh Ray. Independence of irrelevant alternatives. Econometrica, 41(5):987-991, September 1973. Philip J. Reny. Arrow's theorem and the Gibbard-Satterthwaite theorem: a unified approach. Economics Letters, 70(1):99-105, January 2001. Mark A. Satterthwaite. Strategy-proofness and Arrow's conditions: Existence and correspondence theorems for voting procedures and social welfare functions. Journal of Economic Theory, 10:187-217, April 1975. D. Schmeidler and H. Sonnenschein. Two proofs of the Gibbard-Satterthwaite theorem on the possibility of a strategy-proof social choice function. In H. Gottinger and W. Leinfellner, editors, Decision Theory and Social Ethics: Issues in Social Choice, pages 227-234. D. Reidel Publishing Company, Dordrecht, 1978. Lars-Gunnar Svensson. The proof of the Gibbard-Satterthwaite theorem revisited. Working Papers from Lund University, Department of Economics, 1, 1999. Alan D. Taylor. Social Choice and the Mathematics of Manipulation. Outlooks. Cambridge University Press, 2005. Wikipedia. Gibbard-Satterthwaite theorem. In Wikipedia, The Free Encyclopedia. 2009. Retrieved September 1, 2009, from http://en.wikipedia.org/w/index.php?title=Gibbard% URI: http://mpra.ub.uni-muenchen.de/id/eprint/17836
{"url":"http://mpra.ub.uni-muenchen.de/17836/","timestamp":"2014-04-18T13:39:55Z","content_type":null,"content_length":"21617","record_id":"<urn:uuid:c027c432-a1a2-4e6b-a856-4d883b49f535>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00236-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about Bayesian inference on ignorance and uncertainty Posts Tagged ‘Bayesian inference’ For the past 34 years I’ve been compelled to teach a framework that I’ve long known is flawed. A better framework exists and has been available for some time. Moreover, I haven’t been forced to do this by any tyrannical regime or under threats of great harm to me if I teach this alternative instead. And it gets worse: I’m not the only one. Thousands of other university instructors have been doing the same all over the world. I teach statistical methods in a psychology department. I’ve taught courses ranging from introductory undergraduate through graduate levels, and I’m in charge of that part of my department’s curriculum. So, what’s the problem—Why haven’t I abandoned the flawed framework for its superior alternative? Without getting into technicalities, let’s call the flawed framework the “Neyman-Pearson” approach and the alternative the “Bayes” approach. My statistical background was formed as I completed an undergraduate degree in mathematics during 1968-72. My first courses in probability and statistics were Neyman-Pearson and I picked up the rudiments of Bayes toward the end of my degree. At the time I thought these were simply two valid alternative ways of understanding probability. Several years later I was a newly-minted university lecturer teaching introductory statistics to fearful and sometimes reluctant students in the social sciences. The statistical methods used in the social science research were Neyman-Pearson, so of course I taught Neyman-Pearson. Students, after all, need to learn to read the literature of their discipline. Gradually, and through some of my research into uncertainty, I became aware of the severe problems besetting the Neyman-Pearson framework. I found that there was a lengthy history of devastating criticisms raised against Neyman-Pearson even within the social sciences, criticisms that had been ignored by practising researchers and gatekeepers to research publication. However, while the Bayesian approach may have been conceptually superior, in the late ‘70’s through early ‘80’s it suffered from mathematical and computational impracticalities. It provided few usable methods for dealing with complex problems. Disciplines such as psychology were held in thrall to Neyman-Pearson by a combination of convention and the practical requirements of complex research designs. If I wanted to provide students or, for that matter, colleagues who came to me for advice, with effective statistical tools for serious research then usually Neyman-Pearson techniques were all I could offer. But what to do about teaching? No university instructor takes a formal oath to teach the truth, the whole truth, and nothing but the truth; but for those of us who’ve been called to teach it feels as though we do. I was sailing perilously close to committing Moore’s Paradox in the classroom (“I assert Neyman-Pearson but I don’t believe it”). I tried slipping in bits and pieces alerting students to problematic aspects of Neyman-Pearson and the existence of the Bayesian alternative. These efforts may have assuaged my conscience but they did not have much impact, with one important exception. The more intellectually proactive students did seem to catch on to the idea that theories of probability and statistics are just that—Theories, not god-given commandments. Then Bayes got a shot in the arm. In the mid-80’s some powerful computational techniques were adapted and developed that enabled this framework to fight at the same weight as Neyman-Pearson and even better it. These techniques sail under the banner of Markov chain Monte Carlo methods, and by the mid-90’s software was available (free!) to implement them. The stage was set for the Bayesian revolution. I began to dream of writing a Bayesian introductory statistics textbook for psychology students that would set the discipline free and launch the next generation of researchers. It didn’t happen that way. Psychology was still deeply mired in Neyman-Pearson and, in fact, in a particularly restrictive version of it. I’ll spare you the details other than saying that it focused, for instance, on whether the researcher could reject the claim that an experimental effect was nonexistent. I couldn’t interest my colleagues in learning Bayesian techniques, let alone undergraduate By the late ‘90’s a critical mass of authoritative researchers convinced the American Psychological Association to form a task-force to reform statistical practice, but this reform really amounted to shifting from the restrictive Neyman-Pearson orientation to a more liberal one that embraced estimating how big an experimental effect is and setting a “confidence interval” around it. It wasn’t the Bayesian revolution, but I leapt onto this initiative because both reforms were a long stride closer to the Bayesian framework and would still enable students to read the older Neyman-Pearson dominated research literature. So, I didn’t write a Bayesian textbook after all. My 2000 introductory textbook was, so far as I’m aware, one of the first to teach introductory statistics to psychology students from a confidence interval viewpoint. It was generally received well by fellow reformers, and I got a contract to write a kind of researcher’s confidence interval handbook that came out in 2003. The confidence interval reform in psychology was under weigh, and I’d booked a seat on the juggernaut. Market-wise, my textbook flopped. I’m not singing the blues about this, nor do I claim sour grapes. For whatever reasons, my book just didn’t take the market by storm. Shortly after it came out, a colleague mentioned to me that he’d been at a UK conference with a symposium on statistics teaching where one of the speakers proclaimed my book the “best in the world” for explaining confidence intervals and statistical power. But when my colleague asked if the speaker was using it in the classroom he replied that he was writing his own. And so better-selling introductory textbooks continued to appear. A few of them referred to the statistical reforms supposedly happening in psychology but the majority did not. Most of them are the n^th edition of a well-established book that has long been selling well to its set of long-serving instructors and their students. My 2003 handbook fared rather better. I had put some software resources for computing confidence intervals on a webpage and these got a lot of use. These, and my handbook, got picked up by researchers and their graduate students. Several years on, the stuff my scripts did started to appear in mainstream commercial statistics packages. It seemed that this reform was occurring mainly at the advanced undergraduate, graduate and researcher levels. Introductory undergraduate statistical education in psychology remained (and still remains) largely untouched by it. Meanwhile, what of the Bayesian movement? In this decade, graduate-level social science oriented Bayesian textbooks began to appear. I recently reviewed several of them and have just sent off an invited review of another. In my earlier review I concluded that the market still lacked an accessible graduate-level treatment oriented towards psychology, a gap that may have been filled by the book I’ve just finished reviewing. Have I tried teaching Bayesian methods? Yes, but thus far only in graduate-level workshops, and on my own time (i.e., not as part of the official curriculum). I’ll be doing so again in the second half of this year, hoping to recruit some of my colleagues as well as graduate students. Next year I’ll probably introduce a module on Bayes for our 4^th-year (Honours) students. It’s early days, however, and we remain far from being able to revamp the entire curriculum. Bayesian techniques still rarely appear in the mainstream research literature in psychology, and so students still need to learn Neyman-Pearson to read that literature with a knowledgably critical eye. A sea-change may be happening, but it’s going to take years (possibly even decades). Will I try writing a Bayesian textbook? I already know from experience that writing a textbook is a lot of time and hard work, often for little reward. Moreover, in many universities (including mine) writing a textbook counts for nothing. It doesn’t bring research money, it usually doesn’t enhance the university’s (or the author’s) scholarly reputation, it isn’t one of the university’s “performance indicators,” and it seldom brings much income to the author. The typical university attitude towards textbooks is as if the stork brings them. Writing a textbook, therefore, has to be motivated mainly by a passion for teaching. So I’m thinking about it… Written by michaelsmithson May 2, 2011 at 8:32 am Posted in Uncategorized Tagged with Bayesian inference, Confidence interval, Jerzy Neyman, Markov chain Monte Carlo, Philosophy, Probability, Psychology, Social sciences, Statistics, Teaching, Thomas Bayes, Uncertainty
{"url":"https://ignoranceanduncertainty.wordpress.com/tag/bayesian-inference/","timestamp":"2014-04-18T00:48:41Z","content_type":null,"content_length":"23166","record_id":"<urn:uuid:fa9d40d3-0b8a-4629-aaa8-8929abb2159e>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
Materials for Lab and Class Current Search Limits Subject: Geoscience showing only Geoscience > Atmospheric Science Results 1 - 10 of 43 matches How Fast Do Materials Weather? part of Starting Point-Teaching Entry Level Geoscience:Interactive Lectures:Examples Rebecca Teed, Wright State University-Main Campus A think-pair-share activity in which students calculate weathering rates from tombstone weathering data. - Carbon Dioxide Exercise part of Starting Point-Teaching Entry Level Geoscience:Interactive Lectures:Examples Rebecca Teed, Wright State University-Main Campus Students work in groups, plotting carbon dioxide concentrations over time on overheads and estimating the rate of change over five years. - The Modern Atmospheric CO2 Record part of Starting Point-Teaching Entry Level Geoscience:Teaching with Data:Examples Bob Mackay, Clark College Students compare carbon dioxide (CO2) data from Mauna Loa Observatory, Barrow (Alaska), and the South Pole over the past 40 years to help them better understand what controls atmospheric CO2. - The Heat is On: Understanding Local Climate Change part of Cutting Edge:Visualization:Examples Dan Zalles, SRI International Students draw conclusions about the extent to which multiple decades of temperature data about Phoenix suggest that a shift in local climate is taking place as opposed to exhibiting nothing more than natural ... Is There a Trend in Hurricane Number or Intensity? part of Cutting Edge:Hurricanes-Climate Change Connection:Activities Todd Ellis, SUNY College at Oneonta This lab guides students through an examination of the hurricane record to determine if there is a trend in hurricane intensity over the past 40 years and introduces some issues related to statistics and ... Calculation of your personal carbon footprint part of Cutting Edge:Energy:Energy Activities Scott Giorgis, University of Wisconsin-Madison This worksheet walks the students through the steps for calculating their personal carbon footprint. Additionally it helps them consider options for reducing their carbon footprint and the potential costs of those ... Comparing Carbon Calculators part of Starting Point-Teaching Entry Level Geoscience:Teaching with Data:Examples Mark McCaffrey, National Center for Science Education Carbon calculators, no matter how well intended as tools to help measure energy footprints, tend to be black boxes and can produce wildly different results, depending on the calculations used to weigh various ... Sun Spot Analysis part of Starting Point-Teaching Entry Level Geoscience:Teaching with Data:Examples Bob Mackay, Clark College; Mike Clark Introductory students use Excel to graph monthly mean Greenwich sunspot numbers from 1749 to 2004 and perform a spectral analysis of the data using the free software program "Spectra". - Using Mass Balance to Understand Atmospheric CFCs part of Starting Point-Teaching Entry Level Geoscience:Teaching with Data:Examples Bob Mackay, Clark College Students use an interactive online mass balance model help understand the observed levels of chlorofluorocarbon CFC-12 over the recent past. - Mass Balance Model part of Starting Point-Teaching Entry Level Geoscience:Mathematical and Statistical Models:Mathematical and Statistical Models Examples Activity and Starting Point page by R.M. MacKay. Clark College, Physics and Meteorology. Students are introduced to the concept of mass balance, flow rates, and equilibrium using an online interactive water bucket model. -
{"url":"http://serc.carleton.edu/quantskills/teaching_resources/activities.html?q1=sercvocabs__43%3A13","timestamp":"2014-04-17T04:47:13Z","content_type":null,"content_length":"33915","record_id":"<urn:uuid:1e90aad4-1f2b-4818-98b9-692b2657c977>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00116-ip-10-147-4-33.ec2.internal.warc.gz"}
Darin Stephenson Mathematics Department < Faculty/Staff < Undergraduate Research < Degree Programs < Courses < Statistics Resources < Pre-Actuary Preparation < Colloquia < Competitions < Departmental Newsletter < Problem of the Fortnight < Links < Careers in Mathematics < Press Releases < Prospective Students < Natural Science Division < Darin Stephenson Home Courses Research Publications Curriculum Vitae Undergraduate Research Math 231-232 Book About Me In Spring 2014, Darin Stephenson will be teaching Math 231: Multivariable Mathematics I and Math 280: Bridge to Higher Mathematics. He currently serving as Chair of the Campus Life Board and as a member of the Co-Curricular Activities Committee. He is also a Fellow of the newly-formed Center for Ministry Studies at Hope College. Contact Information: Darin R. Stephenson Department of Mathematics Hope College P. O. Box 9000 Holland, MI 49422-9000 e-mail: stephenson@hope.edu Phone: 616.395.7524 Office: 219 VanderWerf Hall
{"url":"http://www.math.hope.edu/stephenson/","timestamp":"2014-04-17T07:54:43Z","content_type":null,"content_length":"13612","record_id":"<urn:uuid:1a07282c-6f31-4848-b33e-29396bd1c205>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
[Proj] Transverse Mercator algorithm Charles Karney ckarney at sarnoff.com Sun Sep 7 14:53:23 EDT 2008 Gerald I. Evenden wrote: > I appreciate your approach to error diagnosis in the comparison of the > ?tmercs however I must take a more brute force approach and merely > look at primitive results. This is due to the fact that I am *not* a > theoretician but merely one looking for the bottom line. It's probably good to have an idea of what the error might look like as a function of position. The error will approximately be the first dropped term in the series expansions. In the case of the 4-term JHS 154 expansion this is ~ e^10 sin(10 xi) cosh(10 eta) ~ e^10 sin(10 phi) cosh(10 x/a) or some oscillating function of y multiplying an exponential function of x. We are, of course, primarily interested in the potentially large variation with x. > I will get a 3D plot made sometime today. There's a good chance that this will look messy because of the oscillatory behavior of the error with latitude and the fact that round-off errors dominate for small x. My methodology is: Generate a set of random latitudes (lat0), longitudes (lon0), exact eastings (x0), and exact northings (y0). I posted a set of such data (1/4 million points) at For each geographic point (lat0,lon0), compute (x,y) = F(lat0,lon0) (lat,lon) = F^-1(x,y) where F is the numerical transverse Mercator projection being evaluated. Define err = max( hypot(x-x0, y-y0), 1e7/90 * hypot(lat-lat0, cos(lat0)*(lon-lon0)) ) mu = asin( sin(lon0) * cos(lat0) ) The second term in err approximately converts the lat/lon discrepancy into a distance. mu is the angular distance away from the central meridian and I use this as an ersatz x of the purpose of error analysis. You can get a feel for the behavior of the error by plotting err against mu. E.g., in Matlab notation: plot(mu, log10(err), 'x'); The tabulated error data I sent out yesterday was computed with, e.g., max(err(x0<5e5 & y0<96e5)) and so on. The second statement takes the maximum of the error for all mu < 25 and this is useful because (from the plot) err is strongly increasing with mu. > Of course, the above has one critical assumption we must not forget: > the maxima procedure is correct and god-like. Quite so. The good news is that when the procedure fails, it fails Charles Karney <ckarney at sarnoff.com> Sarnoff Corporation, Princeton, NJ 08543-5300 URL: http://charles.karney.info Tel: +1 609 734 2312 Fax: +1 609 734 2662 More information about the Proj mailing list
{"url":"http://lists.maptools.org/pipermail/proj/2008-September/003740.html","timestamp":"2014-04-16T19:37:53Z","content_type":null,"content_length":"5222","record_id":"<urn:uuid:b1d2ff57-049b-4c5e-a561-5198ea71a710>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00186-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with word problem Linear Algebra [Archive] - Free Math Help Forum 07-14-2006, 02:38 PM I would really appreciate any help solving the word problem below. Peter takes 15 minutes longer to mow the lawn by himself than charles does. Together they can mow the lawn in 18 minutes. How long does it take charles to mow the lawn by himself.
{"url":"http://www.freemathhelp.com/forum/archive/index.php/t-44773.html","timestamp":"2014-04-21T12:26:23Z","content_type":null,"content_length":"7379","record_id":"<urn:uuid:6b5dbab6-d1ee-4801-9847-5a8333549a04>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00316-ip-10-147-4-33.ec2.internal.warc.gz"}