http://en.wikipedia.org/wiki/Logistic_regression
# Logistic regression

In statistics, logistic regression, or logit regression, is a type of probabilistic statistical classification model.[1] It is used to predict the outcome of a categorical dependent variable (i.e., a class label), such as a binary response, based on one or more predictor variables (features). That is, it is used in estimating the parameters of a qualitative response model. The probabilities describing the possible outcomes of a single trial are modeled, as a function of the explanatory (predictor) variables, using a logistic function. Frequently (and subsequently in this article) "logistic regression" is used to refer specifically to the problem in which the dependent variable is binary—that is, the number of available categories is two—while problems with more than two categories are referred to as multinomial logistic regression or, if the multiple categories are ordered, as ordered logistic regression. Logistic regression measures the relationship between a categorical dependent variable and one or more independent variables, which are usually (but not necessarily) continuous, by using probability scores as the predicted values of the dependent variable.[2] As such it treats the same set of problems as does probit regression using similar techniques.

## Fields and examples of applications

Logistic regression was put forth in the 1940s as an alternative to Fisher's 1936 classification method, linear discriminant analysis.[3] It is used extensively in numerous disciplines, including the medical and social science fields. For example, the Trauma and Injury Severity Score (TRISS), which is widely used to predict mortality in injured patients, was originally developed by Boyd et al. using logistic regression.[4] Logistic regression might be used to predict whether a patient has a given disease (e.g. 
diabetes), based on observed characteristics of the patient (age, gender, body mass index, results of various blood tests, etc.). Another example might be to predict whether an American voter will vote Democratic or Republican, based on age, income, gender, race, state of residence, votes in previous elections, etc.[5] The technique can also be used in engineering, especially for predicting the probability of failure of a given process, system or product.[6][7] It is also used in marketing applications such as prediction of a customer's propensity to purchase a product or cease a subscription, etc.[citation needed] In economics it can be used to predict the likelihood of a person's choosing to be in the labor force, and a business application would be to predict the likelihood of a homeowner defaulting on a mortgage. Conditional random fields, an extension of logistic regression to sequential data, are used in natural language processing.

## Basics

Logistic regression can be binomial or multinomial. Binomial or binary logistic regression deals with situations in which the observed outcome for a dependent variable can have only two possible types (for example, "dead" vs. "alive"). Multinomial logistic regression deals with situations where the outcome can have three or more possible types (e.g., "disease A" vs. "disease B" vs. "disease C"). In binary logistic regression, the outcome is usually coded as "0" or "1", as this leads to the most straightforward interpretation.[8] If a particular observed outcome for the dependent variable is the noteworthy possible outcome (referred to as a "success" or a "case") it is usually coded as "1" and the contrary outcome (referred to as a "failure" or a "noncase") as "0". Logistic regression is used to predict the odds of being a case based on the values of the independent variables (predictors). The odds are defined as the probability that a particular outcome is a case divided by the probability that it is a noncase.
Like other forms of regression analysis, logistic regression makes use of one or more predictor variables that may be either continuous or categorical data. Unlike ordinary linear regression, however, logistic regression is used for predicting binary outcomes of the dependent variable (treating the dependent variable as the outcome of a Bernoulli trial) rather than continuous outcomes. Given this difference, it is necessary that logistic regression take the natural logarithm of the odds of the dependent variable being a case (referred to as the logit or log-odds) to create a continuous criterion as a transformed version of the dependent variable. Thus the logit transformation is referred to as the link function in logistic regression—although the dependent variable in logistic regression is binomial, the logit is the continuous criterion upon which linear regression is conducted.[8] The logit of success is then fit to the predictors using linear regression analysis. The predicted value of the logit is converted back into predicted odds via the inverse of the natural logarithm, namely the exponential function. Therefore, although the observed dependent variable in logistic regression is a zero-or-one variable, the logistic regression estimates the odds, as a continuous variable, that the dependent variable is a success (a case). In some applications the odds are all that is needed. In others, a specific yes-or-no prediction is needed for whether the dependent variable is or is not a case; this categorical prediction can be based on the computed odds of a success, with predicted odds above some chosen cut-off value being translated into a prediction of a success.

## Logistic function, odds ratio, and logit

Figure 1.
The logistic function, with $\beta_0 + \beta_1 x$ on the horizontal axis and $F(x)$ on the vertical axis.

An explanation of logistic regression begins with an explanation of the logistic function, which always takes on values between zero and one:[8] $F(t) = \frac{e^t}{e^t+1} = \frac{1}{1+e^{-t}}.$ If $t$ is viewed as a linear function of an explanatory variable $x$ (or of a linear combination of explanatory variables), the logistic function can be written as: $F(x) = \frac {1}{1+e^{-(\beta_0 + \beta_1 x)}}.$ This will be interpreted as the probability of the dependent variable equalling a "success" or "case" rather than a failure or non-case. We also define the inverse of the logistic function, the logit: $g(x) = \ln \frac{F(x)}{1 - F(x)} = \beta_0 + \beta_1 x ,$ and equivalently: $\frac{F(x)}{1 - F(x)} = e^{\beta_0 + \beta_1 x}.$ A graph of the logistic function $F(x)$ is shown in Figure 1. The input is the value of $\beta_0 + \beta_1 x$ and the output is $F(x)$. The logistic function is useful because it can take an input with any value from negative infinity to positive infinity, whereas the output $F(x)$ is confined to values between 0 and 1 and hence is interpretable as a probability. In the above equations, $g(x)$ refers to the logit function of some given linear combination $x$ of the predictors, $\ln$ denotes the natural logarithm, $F(x)$ is the probability that the dependent variable equals a case, $\beta_0$ is the intercept from the linear regression equation (the value of the criterion when the predictor is equal to zero), $\beta_1 x$ is the regression coefficient multiplied by some value of the predictor, and $e$ is the base of the natural logarithm, so that $e^x$ denotes the exponential function. The formula for $F(x)$ illustrates that the probability of the dependent variable equaling a case is equal to the value of the logistic function of the linear regression expression.
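The logistic function and its inverse, the logit, can be sketched directly in code. The following is an illustrative sketch only; the coefficient values $\beta_0 = -1.5$ and $\beta_1 = 0.8$ are made up for the example:

```python
import math

def logistic(t):
    """F(t) = 1 / (1 + e^(-t)); maps any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-t))

def logit(p):
    """g(p) = ln(p / (1 - p)); the inverse of the logistic function."""
    return math.log(p / (1.0 - p))

# Hypothetical coefficients for a single-predictor model.
beta0, beta1 = -1.5, 0.8
x = 3.0
p = logistic(beta0 + beta1 * x)  # probability of a "case" at this x

# The logit recovers the linear predictor, and the odds p/(1-p)
# equal e^(beta0 + beta1*x), matching the equations above.
assert abs(logit(p) - (beta0 + beta1 * x)) < 1e-12
assert abs(p / (1 - p) - math.exp(beta0 + beta1 * x)) < 1e-12
```

The two assertions simply restate the identities $g(F(x)) = \beta_0 + \beta_1 x$ and $F(x)/(1-F(x)) = e^{\beta_0 + \beta_1 x}$ from the text.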
This is important in that it shows that the value of the linear regression expression can vary from negative to positive infinity and yet, after transformation, the resulting expression for the probability $F(x)$ ranges between 0 and 1. The equation for $g(x)$ illustrates that the logit (i.e., log-odds or natural logarithm of the odds) is equivalent to the linear regression expression. Likewise, the next equation illustrates that the odds of the dependent variable equaling a case is equivalent to the exponential function of the linear regression expression. This illustrates how the logit serves as a link function between the probability and the linear regression expression. Given that the logit ranges between negative infinity and positive infinity, it provides an adequate criterion upon which to conduct linear regression and the logit is easily converted back into the odds.[8]

### Multiple explanatory variables

If there are multiple explanatory variables, then the above expression $\beta_0+\beta_1x$ can be revised to $\beta_0+\beta_1x_1+\beta_2x_2+\cdots+\beta_mx_m.$ Then when this is used in the equation relating the logged odds of a success to the values of the predictors, the linear regression will be a multiple regression with m explanatory variables; the parameters $\beta_j$ for all j = 0, 1, 2, ..., m are all estimated.

## Model fitting

### Estimation

#### Maximum likelihood estimation

The regression coefficients are usually estimated using maximum likelihood estimation.[9] Unlike linear regression with normally distributed residuals, it is not possible to find a closed-form expression for the coefficient values that maximize the likelihood function, so an iterative process must be used instead, for example Newton's method.
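As a concrete sketch of that iterative process, the following fits a one-predictor model by Newton's method. The toy data are invented for illustration, and a real analysis would use an established statistical package; this only shows the mechanics of the update step:

```python
import math

def fit_logistic_newton(xs, ys, n_iter=25):
    """Fit p = 1/(1+e^-(b0 + b1*x)) by Newton's method (toy sketch).

    Each iteration computes the log-likelihood gradient X'(y - p) and
    the negative Hessian X'WX with W = diag(p*(1-p)), then solves the
    2x2 system for the Newton step.
    """
    b0, b1 = 0.0, 0.0
    for _ in range(n_iter):
        g0 = g1 = 0.0            # gradient entries
        h00 = h01 = h11 = 0.0    # negative-Hessian entries
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            w = p * (1.0 - p)
            g0 += y - p
            g1 += (y - p) * x
            h00 += w
            h01 += w * x
            h11 += w * x * x
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det   # Newton step: (X'WX)^-1 X'(y-p)
        b1 += (-h01 * g0 + h00 * g1) / det
    return b0, b1

# Toy data without complete separation, so the MLE exists.
b0, b1 = fit_logistic_newton([0, 1, 2, 3, 4, 5], [0, 0, 1, 0, 1, 1])
```

At convergence the score equations hold, so the fitted probabilities sum to the observed number of successes; that is what "converged" means operationally here.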
This process begins with a tentative solution, revises it slightly to see if it can be improved, and repeats this revision until improvement is minute, at which point the process is said to have converged.[9] In some instances the model may not reach convergence. When a model does not converge this indicates that the coefficients are not meaningful because the iterative process was unable to find appropriate solutions. A failure to converge may occur for a number of reasons: having a large proportion of predictors to cases, multicollinearity, sparseness, or complete separation.

• Having a large proportion of variables to cases results in an overly conservative Wald statistic (discussed below) and can lead to nonconvergence.
• Multicollinearity refers to unacceptably high correlations between predictors. As multicollinearity increases, coefficients remain unbiased but standard errors increase and the likelihood of model convergence decreases.[9] To detect multicollinearity amongst the predictors, one can conduct a linear regression analysis with the predictors of interest for the sole purpose of examining the tolerance statistic,[9] used to assess whether multicollinearity is unacceptably high.
• Sparseness in the data refers to having a large proportion of empty cells (cells with zero counts). Zero cell counts are particularly problematic with categorical predictors. With continuous predictors, the model can infer values for the zero cell counts, but this is not the case with categorical predictors. The model will not converge with zero cell counts for categorical predictors because the natural logarithm of zero is an undefined value, so final solutions to the model cannot be reached.
To remedy this problem, researchers may collapse categories in a theoretically meaningful way or add a constant to all cells.[9]
• Another numerical problem that may lead to a lack of convergence is complete separation, which refers to the instance in which the predictors perfectly predict the criterion – all cases are accurately classified. In such instances, one should reexamine the data, as there is likely some kind of error.[8]

As a general rule of thumb, logistic regression models require a minimum of about 10 events per explanatory variable (where event denotes the cases belonging to the less frequent category in the dependent variable).[10]

#### Minimum chi-squared estimator for grouped data

While individual data will have a dependent variable with a value of zero or one for every observation, with grouped data one observation is on a group of people who all share the same characteristics (e.g., demographic characteristics); in this case the researcher observes the proportion of people in the group for whom the response variable falls into one category or the other. If this proportion is neither zero nor one for any group, the minimum chi-squared estimator involves using weighted least squares to estimate a linear model in which the dependent variable is the logit of the proportion: that is, the log of the ratio of the fraction in one group to the fraction in the other group.[11]:pp.686–9

### Evaluating goodness of fit

Goodness of fit in linear regression models is generally measured using R2. Since this has no direct analog in logistic regression, various methods[11]:ch.21 including the following can be used instead.

#### Deviance and likelihood ratio tests

In linear regression analysis, one is concerned with partitioning variance via the sum of squares calculations – variance in the criterion is essentially divided into variance accounted for by the predictors and residual variance.
In logistic regression analysis, deviance is used in lieu of sum of squares calculations.[12] Deviance is analogous to the sum of squares calculations in linear regression[8] and is a measure of the lack of fit to the data in a logistic regression model.[12] Deviance is calculated by comparing a given model with the saturated model – a model with a theoretically perfect fit.[8] This computation is called the likelihood-ratio test:[8] $D = -2\ln \frac{\text{likelihood of the fitted model}} {\text{likelihood of the saturated model}}.$ In the above equation D represents the deviance and ln represents the natural logarithm. The likelihood ratio (the ratio of the fitted model's likelihood to the saturated model's likelihood) is at most one, so its natural logarithm is zero or negative; multiplying by negative two therefore produces a nonnegative value with an approximate chi-squared distribution.[8] Smaller values indicate better fit as the fitted model deviates less from the saturated model. When assessed upon a chi-square distribution, nonsignificant chi-square values indicate very little unexplained variance and thus, good model fit. Conversely, a significant chi-square value indicates that a significant amount of the variance is unexplained. Two measures of deviance are particularly important in logistic regression: null deviance and model deviance. The null deviance represents the difference between a model with only the intercept (which means "no predictors") and the saturated model. The model deviance represents the difference between a model with at least one predictor and the saturated model.[12] In this respect, the null model provides a baseline upon which to compare predictor models. Given that deviance is a measure of the difference between a given model and the saturated model, smaller values indicate better fit.
Therefore, to assess the contribution of a predictor or set of predictors, one can subtract the model deviance from the null deviance and assess the difference on a $\chi^2_{s-p}$ distribution, a chi-square distribution with degrees of freedom[8] equal to the difference in the number of parameters estimated. Let $D_{\text{null}} =-2\ln \frac{\text{likelihood of null model}} {\text{likelihood of the saturated model}}$ $D_{\text{fitted}} =-2\ln \frac{\text{likelihood of fitted model}} {\text{likelihood of the saturated model}}.$ Then \begin{align} D_\text{fitted} - D_\text{null} &= \left(-2\ln \frac{\text{likelihood of fitted model}} {\text{likelihood of the saturated model}} \right)-\left(-2\ln \frac{\text{likelihood of null model}} {\text{likelihood of the saturated model}}\right) \\ &= -2 \left(\ln \frac{\text{likelihood of fitted model}} {\text{likelihood of the saturated model}}-\ln \frac{\text{likelihood of null model}} {\text{likelihood of the saturated model}}\right)\\ &= -2 \ln \frac{ \left( \frac{\text{likelihood of fitted model}}{\text{likelihood of the saturated model}}\right)}{ \left( \frac{\text{likelihood of null model}}{\text{likelihood of the saturated model}}\right)}\\ &= -2 \ln \frac{\text{likelihood of the fitted model}}{\text{likelihood of null model}}. \end{align} If the model deviance is significantly smaller than the null deviance then one can conclude that the predictor or set of predictors significantly improved model fit.
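The cancellation of the saturated-model terms can be checked numerically with a small sketch. For ungrouped binary data the saturated model assigns each observation a probability equal to its outcome, so its likelihood is 1 and each deviance reduces to $-2$ times the model's log-likelihood; the fitted probabilities below are invented for illustration:

```python
import math

def log_likelihood(probs, ys):
    """Bernoulli log-likelihood: sum over i of y*ln(p) + (1-y)*ln(1-p)."""
    return sum(y * math.log(p) + (1 - y) * math.log(1 - p)
               for p, y in zip(probs, ys))

ys = [0, 0, 1, 1, 1]

# Null model: every observation gets the overall proportion of cases.
p_null = [sum(ys) / len(ys)] * len(ys)
# A hypothetical fitted model's predicted probabilities.
p_fit = [0.1, 0.3, 0.6, 0.8, 0.9]

D_null = -2 * log_likelihood(p_null, ys)    # saturated likelihood is 1
D_fitted = -2 * log_likelihood(p_fit, ys)

# Saturated terms cancel: D_fitted - D_null = -2 ln(L_fitted / L_null).
assert abs((D_fitted - D_null)
           - (-2 * (log_likelihood(p_fit, ys)
                    - log_likelihood(p_null, ys)))) < 1e-12
assert D_fitted < D_null  # the predictor-based model fits better here
```

The difference `D_null - D_fitted` is the statistic one would refer to a chi-square distribution, as described above.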
This is analogous to the F-test used in linear regression analysis to assess the significance of prediction.[12]

#### Pseudo-R2s

In linear regression the squared multiple correlation, R2, is used to assess goodness of fit as it represents the proportion of variance in the criterion that is explained by the predictors.[12] In logistic regression analysis, there is no agreed upon analogous measure, but there are several competing measures each with limitations.[12] Three of the most commonly used indices are examined on this page beginning with the likelihood ratio R2, R2L:[12] $R^2_\text{L} = \frac{D_\text{null} - D_\text{model}} {D_\text{null}} .$ This is the most analogous index to the squared multiple correlation in linear regression.[9] It represents the proportional reduction in the deviance wherein the deviance is treated as a measure of variation analogous but not identical to the variance in linear regression analysis.[9] One limitation of the likelihood ratio R2 is that it is not monotonically related to the odds ratio,[12] meaning that it does not necessarily increase as the odds ratio increases and does not necessarily decrease as the odds ratio decreases. The Cox and Snell R2 is an alternative index of goodness of fit related to the R2 value from linear regression.[citation needed] The Cox and Snell index is problematic as its maximum value is .75, when the variance is at its maximum (.25). The Nagelkerke R2 provides a correction to the Cox and Snell R2 so that the maximum value is equal to one. Nevertheless, the Cox and Snell and likelihood ratio R2s show greater agreement with each other than either does with the Nagelkerke R2.[12] Of course, this might not be the case for values exceeding .75 as the Cox and Snell index is capped at this value.
The likelihood ratio R2 is often preferred to the alternatives as it is most analogous to R2 in linear regression, is independent of the base rate (both Cox and Snell and Nagelkerke R2s increase as the proportion of cases increase from 0 to .5) and varies between 0 and 1.

A word of caution is in order when interpreting pseudo-R2 statistics. The reason these indices of fit are referred to as pseudo R2 is because they do not represent the proportionate reduction in error as the R2 in linear regression does.[12] Linear regression assumes homoscedasticity, that the error variance is the same for all values of the criterion. Logistic regression will always be heteroscedastic – the error variances differ for each value of the predicted score. For each value of the predicted score there would be a different value of the proportionate reduction in error. Therefore, it is inappropriate to think of R2 as a proportionate reduction in error in a universal sense in logistic regression.[12]

#### Hosmer–Lemeshow test

The Hosmer–Lemeshow test uses a test statistic that asymptotically follows a $\chi^2$ distribution to assess whether or not the observed event rates match expected event rates in subgroups of the model population.

#### Evaluating binary classification performance

If the estimated probabilities are to be used to classify each observation of independent variable values as predicting the category that the dependent variable is found in, the various methods below for judging the model's suitability in out-of-sample forecasting can also be used on the data that were used for estimation—accuracy, precision (also called positive predictive value), recall (also called sensitivity), specificity and negative predictive value. In each of these evaluative methods, an aspect of the model's effectiveness in assigning instances to the correct categories is measured.
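These evaluation measures can be computed directly from predicted probabilities and observed outcomes; a minimal sketch follows, with the cutoff and data chosen arbitrarily for illustration:

```python
def binary_metrics(probs, ys, cutoff=0.5):
    """Accuracy, precision, recall, and specificity at a given cutoff."""
    preds = [1 if p > cutoff else 0 for p in probs]
    tp = sum(1 for pr, y in zip(preds, ys) if pr == 1 and y == 1)
    tn = sum(1 for pr, y in zip(preds, ys) if pr == 0 and y == 0)
    fp = sum(1 for pr, y in zip(preds, ys) if pr == 1 and y == 0)
    fn = sum(1 for pr, y in zip(preds, ys) if pr == 0 and y == 1)
    return {
        "accuracy":    (tp + tn) / len(ys),
        "precision":   tp / (tp + fp) if tp + fp else float("nan"),
        "recall":      tp / (tp + fn) if tp + fn else float("nan"),
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
    }

# Hypothetical predicted probabilities and observed outcomes.
m = binary_metrics([0.2, 0.6, 0.7, 0.4], [0, 1, 1, 1])
```

Note that all four measures depend on the chosen cutoff, so they evaluate the classification rule derived from the model rather than the fitted probabilities themselves.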
## Coefficients

After fitting the model, it is likely that researchers will want to examine the contribution of individual predictors. To do so, they will want to examine the regression coefficients. In linear regression, the regression coefficients represent the change in the criterion for each unit change in the predictor.[12] In logistic regression, however, the regression coefficients represent the change in the logit for each unit change in the predictor. Given that the logit is not intuitive, researchers are likely to focus on a predictor's effect on the exponential function of the regression coefficient – the odds ratio (see definition). In linear regression, the significance of a regression coefficient is assessed by computing a t-test. In logistic regression, there are several different tests designed to assess the significance of an individual predictor, most notably the likelihood ratio test and the Wald statistic.

### Likelihood ratio test

The likelihood-ratio test discussed above to assess model fit is also the recommended procedure to assess the contribution of individual "predictors" to a given model.[8][9][12] In the case of a single predictor model, one simply compares the deviance of the predictor model with that of the null model on a chi-square distribution with a single degree of freedom. If the predictor model has a significantly smaller deviance (cf. chi-square using the difference in degrees of freedom of the two models), then one can conclude that there is a significant association between the "predictor" and the outcome. Although some common statistical packages (e.g. SPSS) do provide likelihood ratio test statistics, without this computationally intensive test it would be more difficult to assess the contribution of individual predictors in the multiple logistic regression case.
To assess the contribution of individual predictors one can enter the predictors hierarchically, comparing each new model with the previous to determine the contribution of each predictor.[12] (There is considerable debate among statisticians regarding the appropriateness of so-called "stepwise" procedures. They do not preserve the nominal statistical properties and can be very misleading.[1])

### Wald statistic

Alternatively, when assessing the contribution of individual predictors in a given model, one may examine the significance of the Wald statistic. The Wald statistic, analogous to the t-test in linear regression, is used to assess the significance of coefficients. The Wald statistic is the ratio of the square of the regression coefficient to the square of the standard error of the coefficient and is asymptotically distributed as a chi-square distribution.[9] $W_j = \frac{B^2_j} {SE^2_{B_j}}$ Although several statistical packages (e.g., SPSS, SAS) report the Wald statistic to assess the contribution of individual predictors, the Wald statistic has limitations. When the regression coefficient is large, the standard error of the regression coefficient also tends to be large, increasing the probability of a Type II error. The Wald statistic also tends to be biased when data are sparse.[12]

### Case-control sampling

Suppose cases are rare. Then we might wish to sample them more frequently than their prevalence in the population. For example, suppose there is a disease that affects 1 person in 10,000 and to collect our data we need to do a complete physical. It may be too expensive to do thousands of physicals of healthy people in order to get data on only a few diseased individuals. Thus, we may evaluate more diseased individuals. This is also called unbalanced data.
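Under such case-control sampling, only the intercept of the fitted model needs adjusting once the true prevalence $\pi$ is known; a minimal sketch of the correction (the estimated intercept and the prevalence figures are hypothetical):

```python
import math

def corrected_intercept(b0_hat, true_prev, sample_prev):
    """beta0* = beta0 + log(pi/(1-pi)) - log(pi~/(1-pi~)),
    where pi is the true prevalence and pi~ the sample prevalence."""
    logit = lambda p: math.log(p / (1.0 - p))
    return b0_hat + logit(true_prev) - logit(sample_prev)

# Disease prevalence 1 in 10,000, but cases make up 1/6 of the sample.
b0_star = corrected_intercept(b0_hat=-0.2, true_prev=1e-4, sample_prev=1 / 6)

# Oversampling cases inflates the intercept, so the correction lowers it.
assert b0_star < -0.2
```

When the sample prevalence equals the true prevalence the two logit terms cancel and the intercept is unchanged, which is a quick sanity check on the formula.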
As a rule of thumb, sampling controls at a rate of five times the number of cases is sufficient to get enough control data.[13] If we form a logistic model from such data, then, provided the model is correct, the $\beta_j$ parameters are all correct except for $\beta_0$. We can correct $\beta_0$ if we know the true prevalence as follows:[13] $\hat{\beta}_0^* = \hat{\beta}_0 + \log\frac{\pi}{1 - \pi} - \log\frac{\tilde{\pi}}{1-\tilde{\pi}}$ where $\pi$ is the true prevalence and $\tilde{\pi}$ is the prevalence in the sample.

## Formal mathematical specification

There are various equivalent specifications of logistic regression, which fit into different types of more general models. These different specifications allow for different sorts of useful generalizations.

### Setup

The basic setup of logistic regression is the same as for standard linear regression. It is assumed that we have a series of N observed data points. Each data point i consists of a set of m explanatory variables x1,i ... xm,i (also called independent variables, predictor variables, input variables, features, or attributes), and an associated binary-valued outcome variable Yi (also known as a dependent variable, response variable, output variable, outcome variable or class variable), i.e. it can assume only the two possible values 0 (often meaning "no" or "failure") or 1 (often meaning "yes" or "success"). The goal of logistic regression is to explain the relationship between the explanatory variables and the outcome, so that an outcome can be predicted for a new set of explanatory variables. Some examples:

• The observed outcomes are the presence or absence of a given disease (e.g. diabetes) in a set of patients, and the explanatory variables might be characteristics of the patients thought to be pertinent (sex, race, age, blood pressure, body-mass index, etc.).
• The observed outcomes are the votes (e.g. 
Democratic or Republican) of a set of people in an election, and the explanatory variables are the demographic characteristics of each person (e.g. sex, race, age, income, etc.). In such a case, one of the two outcomes is arbitrarily coded as 1, and the other as 0.

As in linear regression, the outcome variables Yi are assumed to depend on the explanatory variables x1,i ... xm,i.

#### Explanatory variables

As shown in the above examples, the explanatory variables may be of any type: real-valued, binary, categorical, etc. The main distinction is between continuous variables (such as income, age and blood pressure) and discrete variables (such as sex or race). Discrete variables referring to more than two possible choices are typically coded using dummy variables (or indicator variables), that is, separate explanatory variables taking the value 0 or 1 are created for each possible value of the discrete variable, with a 1 meaning "variable does have the given value" and a 0 meaning "variable does not have that value". For example, a four-way discrete variable of blood type with the possible values "A, B, AB, O" can be converted to four separate two-way dummy variables, "is-A, is-B, is-AB, is-O", where only one of them has the value 1 and all the rest have the value 0. This allows for separate regression coefficients to be matched for each possible value of the discrete variable. (In a case like this, only three of the four dummy variables are independent of each other, in the sense that once the values of three of the variables are known, the fourth is automatically determined. Thus, it is only necessary to encode three of the four possibilities as dummy variables. This also means that when all four possibilities are encoded, the overall model is not identifiable in the absence of additional constraints such as a regularization constraint.
Theoretically, this could cause problems, but in reality almost all logistic regression models are fit with regularization constraints.)

#### Outcome variables

Formally, the outcomes Yi are described as being Bernoulli-distributed data, where each outcome is determined by an unobserved probability pi that is specific to the outcome at hand, but related to the explanatory variables. This can be expressed in any of the following equivalent forms: \begin{align} Y_i\mid x_{1,i},\ldots,x_{m,i} \ & \sim \operatorname{Bernoulli}(p_i) \\ \mathbb{E}[Y_i\mid x_{1,i},\ldots,x_{m,i}] &= p_i \\ \Pr(Y_i=y_i\mid x_{1,i},\ldots,x_{m,i}) &= \begin{cases} p_i & \text{if }y_i=1 \\ 1-p_i & \text{if }y_i=0 \end{cases} \\ \Pr(Y_i=y_i\mid x_{1,i},\ldots,x_{m,i}) &= p_i^{y_i} (1-p_i)^{(1-y_i)} \end{align} The meanings of these four lines are:

1. The first line expresses the probability distribution of each Yi: Conditioned on the explanatory variables, it follows a Bernoulli distribution with parameter pi, the probability of the outcome of 1 for trial i. As noted above, each separate trial has its own probability of success, just as each trial has its own explanatory variables. The probability of success pi is not observed, only the outcome of an individual Bernoulli trial using that probability.
2. The second line expresses the fact that the expected value of each Yi is equal to the probability of success pi, which is a general property of the Bernoulli distribution. In other words, if we run a large number of Bernoulli trials using the same probability of success pi, then take the average of all the 1 and 0 outcomes, then the result would be close to pi. This is because doing an average this way simply computes the proportion of successes seen, which we expect to converge to the underlying probability of success.
3. The third line writes out the probability mass function of the Bernoulli distribution, specifying the probability of seeing each of the two possible outcomes.
4.
The fourth line is another way of writing the probability mass function, which avoids having to write separate cases and is more convenient for certain types of calculations. This relies on the fact that Yi can take only the value 0 or 1. In each case, one of the exponents will be 1, "choosing" the value under it, while the other is 0, "canceling out" the value under it. Hence, the outcome is either pi or 1 − pi, as in the previous line.

#### Linear predictor function

The basic idea of logistic regression is to use the mechanism already developed for linear regression by modeling the probability pi using a linear predictor function, i.e. a linear combination of the explanatory variables and a set of regression coefficients that are specific to the model at hand but the same for all trials. The linear predictor function $f(i)$ for a particular data point i is written as: $f(i) = \beta_0 + \beta_1 x_{1,i} + \cdots + \beta_m x_{m,i},$ where $\beta_0, \ldots, \beta_m$ are regression coefficients indicating the relative effect of a particular explanatory variable on the outcome. The model is usually put into a more compact form as follows:

• The regression coefficients β0, β1, ..., βm are grouped into a single vector β of size m + 1.
• For each data point i, an additional explanatory pseudo-variable x0,i is added, with a fixed value of 1, corresponding to the intercept coefficient β0.
• The resulting explanatory variables x0,i, x1,i, ..., xm,i are then grouped into a single vector Xi of size m + 1.

This makes it possible to write the linear predictor function as follows: $f(i)= \boldsymbol\beta \cdot \mathbf{X}_i,$ using the notation for a dot product between two vectors.
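In code, the compact form is just a dot product once the intercept pseudo-variable $x_0 = 1$ is prepended; the coefficient and feature values below are made up for illustration:

```python
def linear_predictor(beta, x):
    """f(i) = beta . X_i, where X_i = (1, x_1, ..., x_m)."""
    X = [1.0] + list(x)  # prepend the pseudo-variable x0 = 1
    assert len(beta) == len(X), "beta must have m + 1 entries"
    return sum(b * xj for b, xj in zip(beta, X))

# beta = (beta0, beta1, beta2) and x = (x1, x2):
f = linear_predictor([-1.5, 0.8, 2.0], [3.0, 0.5])  # -1.5 + 2.4 + 1.0 = 1.9
```

Grouping the intercept into the coefficient vector this way is what lets the later formulas write the whole predictor as $\boldsymbol\beta \cdot \mathbf{X}_i$.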
### As a generalized linear model

The particular model used by logistic regression, which distinguishes it from standard linear regression and from other types of regression analysis used for binary-valued outcomes, is the way the probability of a particular outcome is linked to the linear predictor function: $\operatorname{logit}(\mathbb{E}[Y_i\mid x_{1,i},\ldots,x_{m,i}]) = \operatorname{logit}(p_i)=\ln\left(\frac{p_i}{1-p_i}\right) = \beta_0 + \beta_1 x_{1,i} + \cdots + \beta_m x_{m,i}$ Written using the more compact notation described above, this is: $\operatorname{logit}(\mathbb{E}[Y_i\mid \mathbf{X}_i]) = \operatorname{logit}(p_i)=\ln\left(\frac{p_i}{1-p_i}\right) = \boldsymbol\beta \cdot \mathbf{X}_i$ This formulation expresses logistic regression as a type of generalized linear model, which predicts variables with various types of probability distributions by fitting a linear predictor function of the above form to some sort of arbitrary transformation of the expected value of the variable. The intuition for transforming using the logit function (the natural log of the odds) was explained above. It also has the practical effect of converting the probability (which is bounded to be between 0 and 1) to a variable that ranges over $(-\infty,+\infty)$ — thereby matching the potential range of the linear prediction function on the right side of the equation. Note that both the probabilities pi and the regression coefficients are unobserved, and the means of determining them is not part of the model itself. They are typically determined by some sort of optimization procedure, e.g. maximum likelihood estimation, that finds values that best fit the observed data (i.e. that give the most accurate predictions for the data already observed), usually subject to regularization conditions that seek to exclude unlikely values, e.g. extremely large values for any of the regression coefficients.
The use of a regularization condition is equivalent to doing maximum a posteriori (MAP) estimation, an extension of maximum likelihood. (Regularization is most commonly done using a squared regularizing function, which is equivalent to placing a zero-mean Gaussian prior distribution on the coefficients, but other regularizers are also possible.) Whether or not regularization is used, it is usually not possible to find a closed-form solution; instead, an iterative numerical method must be used, such as iteratively reweighted least squares (IRLS) or, more commonly these days, a quasi-Newton method such as the L-BFGS method.

The interpretation of the βj parameter estimates is as the additive effect on the log of the odds for a unit change in the jth explanatory variable. In the case of a dichotomous explanatory variable, for instance gender, $e^\beta$ is the estimate of the odds ratio of having the outcome for, say, males compared with females.

An equivalent formula uses the inverse of the logit function, which is the logistic function, i.e.:

$\mathbb{E}[Y_i\mid \mathbf{X}_i] = p_i = \operatorname{logit}^{-1}(\boldsymbol\beta \cdot \mathbf{X}_i) = \frac{1}{1+e^{-\boldsymbol\beta \cdot \mathbf{X}_i}}$

The formula can also be written (somewhat awkwardly) as a probability distribution (specifically, using a probability mass function):

$\operatorname{Pr}(Y_i=y_i\mid \mathbf{X}_i) = {p_i}^{y_i}(1-p_i)^{1-y_i} =\left(\frac{1}{1+e^{-\boldsymbol\beta \cdot \mathbf{X}_i}}\right)^{y_i} \left(1-\frac{1}{1+e^{-\boldsymbol\beta \cdot \mathbf{X}_i}}\right)^{1-y_i}$

### As a latent-variable model

The above model has an equivalent formulation as a latent-variable model. This formulation is common in the theory of discrete choice models, and makes it easier to extend to certain more complicated models with multiple, correlated choices, as well as to compare logistic regression to the closely related probit model.
Imagine that, for each trial i, there is a continuous latent variable Yi* (i.e. an unobserved random variable) that is distributed as follows: $Y_i^\ast = \boldsymbol\beta \cdot \mathbf{X}_i + \varepsilon \,$ where $\varepsilon \sim \operatorname{Logistic}(0,1) \,$ i.e. the latent variable can be written directly in terms of the linear predictor function and an additive random error variable that is distributed according to a standard logistic distribution. Then Yi can be viewed as an indicator for whether this latent variable is positive: $Y_i = \begin{cases} 1 & \text{if }Y_i^\ast > 0 \ \text{ i.e. } - \varepsilon < \boldsymbol\beta \cdot \mathbf{X}_i, \\ 0 &\text{otherwise.} \end{cases}$ The choice of modeling the error variable specifically with a standard logistic distribution, rather than a general logistic distribution with the location and scale set to arbitrary values, seems restrictive, but in fact it is not. It must be kept in mind that we can choose the regression coefficients ourselves, and very often can use them to offset changes in the parameters of the error variable's distribution. For example, a logistic error-variable distribution with a non-zero location parameter μ (which sets the mean) is equivalent to a distribution with a zero location parameter, where μ has been added to the intercept coefficient. Both situations produce the same value for Yi* regardless of settings of explanatory variables. Similarly, an arbitrary scale parameter s is equivalent to setting the scale parameter to 1 and then dividing all regression coefficients by s. In the latter case, the resulting value of Yi* will be smaller by a factor of s than in the former case, for all sets of explanatory variables — but critically, it will always remain on the same side of 0, and hence lead to the same Yi choice. (Note that this predicts that the irrelevancy of the scale parameter may not carry over into more complex models where more than two choices are available.) 
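The scale-absorption argument can be checked numerically: dividing every regression coefficient by s while shrinking the error's scale from s down to 1 divides Yi* by s but never changes its sign, so the observed Yi is identical. A minimal sketch with made-up coefficients and error draws:

```python
# Y* = beta . X + eps; rescaling (beta, eps) -> (beta/s, eps/s) divides Y*
# by s, which preserves its sign and therefore the observed binary outcome.
def latent(beta, x, eps):
    return sum(b * xj for b, xj in zip(beta, x)) + eps

beta, x, s = [1.0, -2.0], [0.5, 0.3], 4.0
for eps in (-1.5, -0.05, 0.2, 3.0):  # a few arbitrary error values
    y_star = latent(beta, x, eps)
    y_star_rescaled = latent([b / s for b in beta], x, eps / s)
    assert (y_star > 0) == (y_star_rescaled > 0)  # same Y either way
print("rescaling never flips the outcome")
```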
It turns out that this formulation is exactly equivalent to the preceding one, phrased in terms of the generalized linear model and without any latent variables. This can be shown as follows, using the fact that the cumulative distribution function (CDF) of the standard logistic distribution is the logistic function, which is the inverse of the logit function, i.e. $\Pr(\varepsilon < x) = \operatorname{logit}^{-1}(x)$ Then: \begin{align} \Pr(Y_i=1\mid\mathbf{X}_i) &= \Pr(Y_i^\ast > 0\mid\mathbf{X}_i) & \\ &= \Pr(\boldsymbol\beta \cdot \mathbf{X}_i + \varepsilon > 0) & \\ &= \Pr(\varepsilon > -\boldsymbol\beta \cdot \mathbf{X}_i) &\\ &= \Pr(\varepsilon < \boldsymbol\beta \cdot \mathbf{X}_i) & \text{(because the logistic distribution is symmetric)} \\ &= \operatorname{logit}^{-1}(\boldsymbol\beta \cdot \mathbf{X}_i) & \\ &= p_i & \text{(see above)} \end{align} This formulation — which is standard in discrete choice models — makes clear the relationship between logistic regression (the "logit model") and the probit model, which uses an error variable distributed according to a standard normal distribution instead of a standard logistic distribution. Both the logistic and normal distributions are symmetric with a basic unimodal, "bell curve" shape. The only difference is that the logistic distribution has somewhat heavier tails, which means that it is less sensitive to outlying data (and hence somewhat more robust to model mis-specifications or erroneous data). ### As a two-way latent-variable model Yet another formulation uses two separate latent variables: \begin{align} Y_i^{0\ast} &= \boldsymbol\beta_0 \cdot \mathbf{X}_i + \varepsilon_0 \, \\ Y_i^{1\ast} &= \boldsymbol\beta_1 \cdot \mathbf{X}_i + \varepsilon_1 \, \end{align} where \begin{align} \varepsilon_0 & \sim \operatorname{EV}_1(0,1) \\ \varepsilon_1 & \sim \operatorname{EV}_1(0,1) \end{align} where EV1(0,1) is a standard type-1 extreme value distribution: i.e. 
the distribution whose probability density function is

$f(x) = e^{-x} e^{-e^{-x}}$

Then

$Y_i = \begin{cases} 1 & \text{if }Y_i^{1\ast} > Y_i^{0\ast}, \\ 0 &\text{otherwise.} \end{cases}$

This model has a separate latent variable and a separate set of regression coefficients for each possible outcome of the dependent variable. The reason for this separation is that it makes it easy to extend logistic regression to multi-outcome categorical variables, as in the multinomial logit model. In such a model, it is natural to model each possible outcome using a different set of regression coefficients. It is also possible to motivate each of the separate latent variables as the theoretical utility associated with making the associated choice, and thus motivate logistic regression in terms of utility theory. (In terms of utility theory, a rational actor always chooses the choice with the greatest associated utility.) This is the approach taken by economists when formulating discrete choice models, because it both provides a theoretically strong foundation and facilitates intuitions about the model, which in turn makes it easy to consider various sorts of extensions. (See the example below.)

The choice of the type-1 extreme value distribution seems fairly arbitrary, but it makes the mathematics work out, and it may be possible to justify its use through rational choice theory. It turns out that this model is equivalent to the previous model, although this seems non-obvious, since there are now two sets of regression coefficients and error variables, and the error variables have a different distribution.
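One way to make the equivalence plausible before deriving it is by simulation: the difference of two independent draws from this type-1 extreme value distribution follows a standard logistic distribution. A seeded Monte Carlo sketch (agreement is approximate by nature):

```python
import math, random

random.seed(0)

def gumbel():
    """Standard type-1 extreme value draw via the inverse CDF."""
    return -math.log(-math.log(random.random()))

n = 100_000
diffs = [gumbel() - gumbel() for _ in range(n)]

# Empirical CDF of the difference at x = 1 vs. the standard logistic CDF.
x = 1.0
empirical = sum(d < x for d in diffs) / n
logistic = 1.0 / (1.0 + math.exp(-x))
print(empirical, logistic)
```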
In fact, this model reduces directly to the previous one with the following substitutions:

$\boldsymbol\beta = \boldsymbol\beta_1 - \boldsymbol\beta_0$

$\varepsilon = \varepsilon_1 - \varepsilon_0$

An intuition for this comes from the fact that, since we choose based on the maximum of two values, only their difference matters, not the exact values — and this effectively removes one degree of freedom. Another critical fact is that the difference of two type-1 extreme-value-distributed variables is a logistic distribution, i.e.

$\varepsilon = \varepsilon_1 - \varepsilon_0 \sim \operatorname{Logistic}(0,1) .$

We can demonstrate the equivalence as follows:

\begin{align} \Pr(Y_i=1\mid\mathbf{X}_i) &= \Pr(Y_i^{1\ast} > Y_i^{0\ast}\mid\mathbf{X}_i) & \\ &= \Pr(Y_i^{1\ast} - Y_i^{0\ast} > 0\mid\mathbf{X}_i) & \\ &= \Pr(\boldsymbol\beta_1 \cdot \mathbf{X}_i + \varepsilon_1 - (\boldsymbol\beta_0 \cdot \mathbf{X}_i + \varepsilon_0) > 0) & \\ &= \Pr((\boldsymbol\beta_1 \cdot \mathbf{X}_i - \boldsymbol\beta_0 \cdot \mathbf{X}_i) + (\varepsilon_1 - \varepsilon_0) > 0) & \\ &= \Pr((\boldsymbol\beta_1 - \boldsymbol\beta_0) \cdot \mathbf{X}_i + (\varepsilon_1 - \varepsilon_0) > 0) & \\ &= \Pr((\boldsymbol\beta_1 - \boldsymbol\beta_0) \cdot \mathbf{X}_i + \varepsilon > 0) & \text{(substitute }\varepsilon\text{ as above)} \\ &= \Pr(\boldsymbol\beta \cdot \mathbf{X}_i + \varepsilon > 0) & \text{(substitute }\boldsymbol\beta\text{ as above)} \\ &= \Pr(\varepsilon > -\boldsymbol\beta \cdot \mathbf{X}_i) & \text{(now, same as above model)}\\ &= \Pr(\varepsilon < \boldsymbol\beta \cdot \mathbf{X}_i) & \\ &= \operatorname{logit}^{-1}(\boldsymbol\beta \cdot \mathbf{X}_i) & \\ &= p_i & \end{align}

#### Example

As an example, consider a province-level election where the choice is between a right-of-center party, a left-of-center party, and a secessionist party (e.g. the Parti Québécois, which wants Quebec to secede from Canada). We would then use three latent variables, one for each choice.
Then, in accordance with utility theory, we can interpret the latent variables as expressing the utility that results from making each of the choices. We can also interpret the regression coefficients as indicating the strength that the associated factor (i.e. explanatory variable) has in contributing to the utility — or more correctly, the amount by which a unit change in an explanatory variable changes the utility of a given choice.

A voter might expect that the right-of-center party would lower taxes, especially on rich people. This would give low-income people no benefit, i.e. no change in utility (since they usually don't pay taxes); would cause moderate benefit (i.e. somewhat more money, or moderate utility increase) for middle-income people; and would cause significant benefits for high-income people. On the other hand, the left-of-center party might be expected to raise taxes and offset it with increased welfare and other assistance for the lower and middle classes. This would cause significant positive benefit to low-income people, perhaps weak benefit to middle-income people, and significant negative benefit to high-income people. Finally, the secessionist party would take no direct actions on the economy, but simply secede. A low-income or middle-income voter might expect basically no clear utility gain or loss from this, but a high-income voter might expect negative utility, since he/she is likely to own companies, which will have a harder time doing business in such an environment and probably lose money.

These intuitions can be expressed as follows (estimated strength of the regression coefficient for different outcomes, i.e. party choices, and different values of the explanatory variables):

|               | Center-right | Center-left | Secessionist |
|---------------|--------------|-------------|--------------|
| High-income   | strong +     | strong −    | strong −     |
| Middle-income | moderate +   | weak +      | none         |
| Low-income    | none         | strong +    | none         |

This clearly shows that

1. Separate sets of regression coefficients need to exist for each choice.
When phrased in terms of utility, this can be seen very easily. Different choices have different effects on net utility; furthermore, the effects vary in complex ways that depend on the characteristics of each individual, so there need to be separate sets of coefficients for each characteristic, not simply a single extra per-choice characteristic.

2. Even though income is a continuous variable, its effect on utility is too complex for it to be treated as a single variable. Either it needs to be directly split up into ranges, or higher powers of income need to be added so that polynomial regression on income is effectively done.

### As a "log-linear" model

Yet another formulation combines the two-way latent variable formulation above with the original formulation higher up without latent variables, and in the process provides a link to one of the standard formulations of the multinomial logit. Here, instead of writing the logit of the probabilities pi as a linear predictor, we separate the linear predictor into two, one for each of the two outcomes:

\begin{align} \ln \Pr(Y_i=0) &= \boldsymbol\beta_0 \cdot \mathbf{X}_i - \ln Z \, \\ \ln \Pr(Y_i=1) &= \boldsymbol\beta_1 \cdot \mathbf{X}_i - \ln Z \, \\ \end{align}

Note that two separate sets of regression coefficients have been introduced, just as in the two-way latent variable model, and the two equations appear in a form that writes the logarithm of the associated probability as a linear predictor, with an extra term $-\ln Z$ at the end. This term, as it turns out, serves as the normalizing factor ensuring that the result is a distribution. This can be seen by exponentiating both sides:

\begin{align} \Pr(Y_i=0) &= \frac{1}{Z} e^{\boldsymbol\beta_0 \cdot \mathbf{X}_i} \, \\ \Pr(Y_i=1) &= \frac{1}{Z} e^{\boldsymbol\beta_1 \cdot \mathbf{X}_i} \, \\ \end{align}

In this form it is clear that the purpose of Z is to ensure that the resulting distribution over Yi is in fact a probability distribution, i.e. it sums to 1.
This means that Z is simply the sum of all un-normalized probabilities, and by dividing each probability by Z, the probabilities become "normalized". That is: $Z = e^{\boldsymbol\beta_0 \cdot \mathbf{X}_i} + e^{\boldsymbol\beta_1 \cdot \mathbf{X}_i}$ and the resulting equations are \begin{align} \Pr(Y_i=0) &= \frac{e^{\boldsymbol\beta_0 \cdot \mathbf{X}_i}}{e^{\boldsymbol\beta_0 \cdot \mathbf{X}_i} + e^{\boldsymbol\beta_1 \cdot \mathbf{X}_i}} \, \\ \Pr(Y_i=1) &= \frac{e^{\boldsymbol\beta_1 \cdot \mathbf{X}_i}}{e^{\boldsymbol\beta_0 \cdot \mathbf{X}_i} + e^{\boldsymbol\beta_1 \cdot \mathbf{X}_i}} \, \end{align} Or generally: $\Pr(Y_i=c) = \frac{e^{\boldsymbol\beta_c \cdot \mathbf{X}_i}}{\sum_h e^{\boldsymbol\beta_h \cdot \mathbf{X}_i}}$ This shows clearly how to generalize this formulation to more than two outcomes, as in multinomial logit. In order to prove that this is equivalent to the previous model, note that the above model is overspecified, in that $\Pr(Y_i=0)$ and $\Pr(Y_i=1)$ cannot be independently specified: rather $\Pr(Y_i=0) + \Pr(Y_i=1) = 1$ so knowing one automatically determines the other. As a result, the model is nonidentifiable, in that multiple combinations of β0 and β1 will produce the same probabilities for all possible explanatory variables. 
In fact, it can be seen that adding any constant vector to both of them will produce the same probabilities: \begin{align} \Pr(Y_i=1) &= \frac{e^{(\boldsymbol\beta_1 +\mathbf{C}) \cdot \mathbf{X}_i}}{e^{(\boldsymbol\beta_0 +\mathbf{C})\cdot \mathbf{X}_i} + e^{(\boldsymbol\beta_1 +\mathbf{C}) \cdot \mathbf{X}_i}} \, \\ &= \frac{e^{\boldsymbol\beta_1 \cdot \mathbf{X}_i} e^{\mathbf{C} \cdot \mathbf{X}_i}}{e^{\boldsymbol\beta_0 \cdot \mathbf{X}_i} e^{\mathbf{C} \cdot \mathbf{X}_i} + e^{\boldsymbol\beta_1 \cdot \mathbf{X}_i} e^{\mathbf{C} \cdot \mathbf{X}_i}} \, \\ &= \frac{e^{\mathbf{C} \cdot \mathbf{X}_i}e^{\boldsymbol\beta_1 \cdot \mathbf{X}_i}}{e^{\mathbf{C} \cdot \mathbf{X}_i}(e^{\boldsymbol\beta_0 \cdot \mathbf{X}_i} + e^{\boldsymbol\beta_1 \cdot \mathbf{X}_i})} \, \\ &= \frac{e^{\boldsymbol\beta_1 \cdot \mathbf{X}_i}}{e^{\boldsymbol\beta_0 \cdot \mathbf{X}_i} + e^{\boldsymbol\beta_1 \cdot \mathbf{X}_i}} \, \\ \end{align} As a result, we can simplify matters, and restore identifiability, by picking an arbitrary value for one of the two vectors. We choose to set $\boldsymbol\beta_0 = \mathbf{0} .$ Then, $e^{\boldsymbol\beta_0 \cdot \mathbf{X}_i} = e^{\mathbf{0} \cdot \mathbf{X}_i} = 1$ and so $\Pr(Y_i=1) = \frac{e^{\boldsymbol\beta_1 \cdot \mathbf{X}_i}}{1 + e^{\boldsymbol\beta_1 \cdot \mathbf{X}_i}} = \frac{1}{1+e^{-\boldsymbol\beta_1 \cdot \mathbf{X}_i}} = p_i$ which shows that this formulation is indeed equivalent to the previous formulation. (As in the two-way latent variable formulation, any settings where $\boldsymbol\beta = \boldsymbol\beta_1 - \boldsymbol\beta_0$ will produce equivalent results.) Note that most treatments of the multinomial logit model start out either by extending the "log-linear" formulation presented here or the two-way latent variable formulation presented above, since both clearly show the way that the model could be extended to multi-way outcomes. 
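Both the identifiability argument and the reduction to the logistic function can be checked in a few lines. This sketch uses arbitrary made-up scores in place of β0 · Xi and β1 · Xi:

```python
import math

def two_class_softmax(s0, s1):
    """Pr(Y=0), Pr(Y=1) from the two un-normalized log scores."""
    z = math.exp(s0) + math.exp(s1)
    return math.exp(s0) / z, math.exp(s1) / z

s0, s1, c = 0.4, -1.1, 7.3  # hypothetical scores and a constant shift

# 1. Shifting both scores by the same constant leaves the probabilities
#    unchanged -- the nonidentifiability shown algebraically above.
p0, p1 = two_class_softmax(s0, s1)
q0, q1 = two_class_softmax(s0 + c, s1 + c)
assert abs(p0 - q0) < 1e-12 and abs(p1 - q1) < 1e-12

# 2. Pr(Y=1) equals the logistic function of the score difference,
#    i.e. the score computed from beta_1 - beta_0.
assert abs(p1 - 1.0 / (1.0 + math.exp(-(s1 - s0)))) < 1e-12
print(p0, p1)
```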
In general, the presentation with latent variables is more common in econometrics and political science, where discrete choice models and utility theory reign, while the "log-linear" formulation here is more common in computer science, e.g. machine learning and natural language processing. ### As a single-layer perceptron The model has an equivalent formulation $p_i = \frac{1}{1+e^{-(\beta_0 + \beta_1 x_{1,i} + \cdots + \beta_k x_{k,i})}}. \,$ This functional form is commonly called a single-layer perceptron or single-layer artificial neural network. A single-layer neural network computes a continuous output instead of a step function. The derivative of pi with respect to X = (x1, ..., xk) is computed from the general form: $y = \frac{1}{1+e^{-f(X)}}$ where f(X) is an analytic function in X. With this choice, the single-layer neural network is identical to the logistic regression model. This function has a continuous derivative, which allows it to be used in backpropagation. This function is also preferred because its derivative is easily calculated: $\frac{\mathrm{d}y}{\mathrm{d}X} = y(1-y)\frac{\mathrm{d}f}{\mathrm{d}X}. \,$ ### In terms of binomial data A closely related model assumes that each i is associated not with a single Bernoulli trial but with ni independent identically distributed trials, where the observation Yi is the number of successes observed (the sum of the individual Bernoulli-distributed random variables), and hence follows a binomial distribution: $Y_i \ \sim \operatorname{Bin}(n_i,p_i),\text{ for }i = 1, \dots , n$ An example of this distribution is the fraction of seeds (pi) that germinate after ni are planted. 
In terms of expected values, this model is expressed as follows: $p_i = \mathbb{E}\left[\left.\frac{Y_i}{n_{i}}\,\right|\,\mathbf{X}_i \right],$ so that $\operatorname{logit}\left(\mathbb{E}\left[\left.\frac{Y_i}{n_{i}}\,\right|\,\mathbf{X}_i \right]\right) = \operatorname{logit}(p_i)=\ln\left(\frac{p_i}{1-p_i}\right) = \boldsymbol\beta \cdot \mathbf{X}_i,$ Or equivalently: $\operatorname{Pr}(Y_i=y_i\mid \mathbf{X}_i) = {n_i \choose y_i} p_i^{y_i}(1-p_i)^{n_i-y_i} ={n_i \choose y_i} \left(\frac{1}{1+e^{-\boldsymbol\beta \cdot \mathbf{X}_i}}\right)^{y_i} \left(1-\frac{1}{1+e^{-\boldsymbol\beta \cdot \mathbf{X}_i}}\right)^{n_i-y_i}$ This model can be fit using the same sorts of methods as the above more basic model. ## Bayesian logistic regression Comparison of logistic function with a scaled inverse probit function (i.e. the CDF of the normal distribution), comparing $\sigma(x)$ vs. $\Phi(\sqrt{\frac{\pi}{8}}x)$, which makes the slopes the same at the origin. This shows the heavier tails of the logistic distribution. In a Bayesian statistics context, prior distributions are normally placed on the regression coefficients, usually in the form of Gaussian distributions. Unfortunately, the Gaussian distribution is not the conjugate prior of the likelihood function in logistic regression; in fact, the likelihood function is not an exponential family and thus does not have a conjugate prior at all. As a result, the posterior distribution is difficult to calculate, even using standard simulation algorithms (e.g. Gibbs sampling). There are various possibilities: • Don't do a proper Bayesian analysis, but simply compute a maximum a posteriori point estimate of the parameters. This is common, for example, in "maximum entropy" classifiers in machine learning. • Use a more general approximation method such as the Metropolis–Hastings algorithm. 
• Draw a Markov chain Monte Carlo sample from the exact posterior by using the Independent Metropolis–Hastings algorithm with a heavy-tailed multivariate candidate distribution, found by matching the mode and curvature at the mode of the normal approximation to the posterior and then using a Student's t shape with low degrees of freedom.[14] This is shown to have excellent convergence properties.
• Use a latent variable model and approximate the logistic distribution using a more tractable distribution, e.g. a Student's t-distribution or a mixture of normal distributions.
• Do probit regression instead of logistic regression. This is actually a special case of the previous situation, using a normal distribution in place of a Student's t, mixture of normals, etc. This will be less accurate but has the advantage that probit regression is extremely common, and a ready-made Bayesian implementation may already be available.
• Use the Laplace approximation of the posterior distribution.[15] This approximates the posterior with a Gaussian distribution. This is not a terribly good approximation, but it suffices if all that is desired is an estimate of the posterior mean and variance. In such a case, an approximation scheme such as variational Bayes can be used.[16]

### Gibbs sampling with an approximating distribution

As shown above, logistic regression is equivalent to a latent variable model with an error variable distributed according to a standard logistic distribution. The overall distribution of the latent variable $Y_i^\ast$ is also a logistic distribution, with the mean equal to $\boldsymbol\beta \cdot \mathbf{X}_i$ (i.e. the fixed quantity added to the error variable). This model considerably simplifies the application of techniques such as Gibbs sampling. However, sampling the regression coefficients is still difficult, because of the lack of conjugacy between the normal and logistic distributions.
Changing the prior distribution over the regression coefficients is of no help, because the logistic distribution is not in the exponential family and thus has no conjugate prior. One possibility is to use a more general Markov chain Monte Carlo technique, such as the Metropolis–Hastings algorithm, which can sample arbitrary distributions. Another possibility, however, is to replace the logistic distribution with a similar-shaped distribution that is easier to work with using Gibbs sampling. In fact, the logistic and normal distributions have a similar shape, and thus one possibility is simply to have normally distributed errors. Because the normal distribution is conjugate to itself, sampling the regression coefficients becomes easy. In fact, this model is exactly the model used in probit regression. However, the normal and logistic distributions differ in that the logistic has heavier tails. As a result, it is more robust to inaccuracies in the underlying model (which are inevitable, in that the model is essentially always an approximation) or to errors in the data. Probit regression loses some of this robustness. Another alternative is to use errors distributed as a Student's t-distribution. The Student's t-distribution has heavy tails, and is easy to sample from because it is the compound distribution of a normal distribution with variance distributed as an inverse gamma distribution. In other words, if a normal distribution is used for the error variable, and another latent variable, following an inverse gamma distribution, is added corresponding to the variance of this error variable, the marginal distribution of the error variable will follow a Student's t-distribution. Because of the various conjugacy relationships, all variables in this model are easy to sample from. The Student's t-distribution that best approximates a standard logistic distribution can be determined by matching the moments of the two distributions. 
The Student's t-distribution has three parameters, and since the skewness of both distributions is always 0, the first four moments can all be matched, using the following equations: \begin{align} \mu &= 0 \\ \frac{\nu}{\nu-2} s^2 &= \frac{\pi^2}{3} \\ \frac{6}{\nu-4} &= \frac{6}{5} \end{align} This yields the following values: \begin{align} \mu &= 0 \\ s &= \sqrt{\frac{7}{9} \frac{\pi^2}{3}} \\ \nu &= 9 \end{align} The following graphs compare the standard logistic distribution with the Student's t-distribution that matches the first four moments using the above-determined values, as well as the normal distribution that matches the first two moments. Note how much closer the Student's t-distribution agrees, especially in the tails. Beyond about two standard deviations from the mean, the logistic and normal distributions diverge rapidly, but the logistic and Student's t-distributions don't start diverging significantly until more than 5 standard deviations away. (Another possibility, also amenable to Gibbs sampling, is to approximate the logistic distribution using a mixture density of normal distributions.) Comparison of logistic and approximating distributions (t, normal). Tails of distributions. Further tails of distributions. Extreme tails of distributions. ## Extensions There are large numbers of extensions: • Multinomial logistic regression (or multinomial logit) handles the case of a multi-way categorical dependent variable (with unordered values, also called "classification"). Note that the general case of having dependent variables with more than two values is termed polytomous regression. • Ordered logistic regression (or ordered logit) handles ordinal dependent variables (ordered values). • Mixed logit is an extension of multinomial logit that allows for correlations among the choices of the dependent variable. • An extension of the logistic model to sets of interdependent variables is the conditional random field. 
## Model suitability A way to measure a model's suitability is to assess the model against a set of data that was not used to create the model.[17] The class of techniques is called cross-validation. This holdout model assessment method is particularly valuable when data are collected in different settings (e.g., at different times or places) or when models are assumed to be generalizable. To measure the suitability of a binary regression model, one can classify both the actual value and the predicted value of each observation as either 0 or 1.[18] The predicted value of an observation can be set equal to 1 if the estimated probability that the observation equals 1 is above $\frac{1}{2}$, and set equal to 0 if the estimated probability is below $\frac{1}{2}$. Here logistic regression is being used as a binary classification model. There are four possible combined classifications: 1. prediction of 0 when the holdout sample has a 0 (True Negatives, the number of which is TN) 2. prediction of 0 when the holdout sample has a 1 (False Negatives, the number of which is FN) 3. prediction of 1 when the holdout sample has a 0 (False Positives, the number of which is FP) 4. 
prediction of 1 when the holdout sample has a 1 (True Positives, the number of which is TP) These classifications are used to calculate accuracy, precision (also called positive predictive value), recall (also called sensitivity), specificity and negative predictive value: $\text{Accuracy}=\frac{TP+TN}{TP+FP+FN+TN}$ = fraction of observations with correct predicted classification $\text{Precision} = \text{PositivePredictiveValue} =\frac{TP}{TP+FP} \,$ = Fraction of predicted positives that are correct $\text{NegativePredictiveValue} = \frac{TN}{TN+FN}$ = fraction of predicted negatives that are correct $\text{Recall} = \text{Sensitivity} = \frac{TP}{TP+FN} \,$ = fraction of observations that are actually 1 with a correct predicted classification $\text{Specificity} = \frac{TN}{TN+FP}$ = fraction of observations that are actually 0 with a correct predicted classification ## References 1. ^ Christopher M. Bishop (2006). Pattern Recognition and Machine Learning. Springer. p. 205. "In the terminology of statistics, this model is known as logistic regression, although it should be emphasized that this is a model for classification rather than regression." 2. ^ Clinical Research for Surgeons. Mohit Bhandari,Anders Joensson. page 293 3. ^ Gareth James; Daniela Witten; Trevor Hastie; Robert Tibshirani (2013). An Introduction to Statistical Learning. Springer. p. 6. 4. ^ Boyd, C. R.; Tolson, M. A.; Copes, W. S. (1987). "Evaluating trauma care: The TRISS method. Trauma Score and the Injury Severity Score". The Journal of trauma 27 (4): 370–378. doi:10.1097/00005373-198704000-00005. PMID 3106646. 5. ^ Harrell, Frank E. (2001). Regression Modeling Strategies. Springer-Verlag. ISBN 0-387-95232-2. 6. ^ M. Strano; B.M. Colosimo (2006). "Logistic regression analysis for experimental determination of forming limit diagrams". International Journal of Machine Tools and Manufacture 46 (6). doi:10.1016/j.ijmachtools.2005.07.005. edit 7. ^ Palei, S. K.; Das, S. K. (2009). 
"Logistic regression model for prediction of roof fall risks in bord and pillar workings in coal mines: An approach". Safety Science 47: 88. doi:10.1016/j.ssci.2008.01.002. edit 8. Hosmer, David W.; Lemeshow, Stanley (2000). Applied Logistic Regression (2nd ed.). Wiley. ISBN 0-471-35632-8.[page needed] 9. Menard, Scott W. (2002). Applied Logistic Regression (2nd ed.). SAGE. ISBN 978-0-7619-2208-7.[page needed] 10. ^ Peduzzi, P; Concato, J; Kemper, E; Holford, TR; Feinstein, AR (December 1996). "A simulation study of the number of events per variable in logistic regression analysis.". Journal of Clinical Epidemiology 49 (12): 1373–9. doi:10.1016/s0895-4356(96)00236-3. PMID 8970487. 11. ^ a b Greene, William N. (2003). Econometric Analysis (Fifth ed.). Prentice-Hall. ISBN 0-13-066189-9. 12. Cohen, Jacob; Cohen, Patricia; West, Steven G.; Aiken, Leona S. (2002). Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences (3rd ed.). Routledge. ISBN 978-0-8058-2223-6.[page needed] 13. ^ a b 14. ^ Bolstad, William M. (2010). Understandeing Computational Bayesian Statistics. Wiley. ISBN 978-0-470-04609-8.[page needed] 15. ^ Bishop, Christopher M. "Chapter 4. Linear Models for Classification". Pattern Recognition and Machine Learning. Springer Science+Business Media, LLC. pp. 217–218. ISBN 978-0387-31073-2. 16. ^ Bishop, Christopher M. "Chapter 10. Approximate Inference". Pattern Recognition and Machine Learning. Springer Science+Business Media, LLC. pp. 498–505. ISBN 978-0387-31073-2. 17. ^ Jonathan Mark and Michael A. Goldberg (2001). Multiple Regression Analysis and Mass Assessment: A Review of the Issues. The Appraisal Journal, Jan. pp. 89–109 18. ^ Myers, J. H.; Forgy, E. W. (1963). "The Development of Numerical Credit Evaluation Systems". J. Amer. Statist. Assoc. 58 (303): 799–806. doi:10.1080/01621459.1963.10500889.
2014-09-01 21:09:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 97, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9862818717956543, "perplexity": 694.2387949179536}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535920694.0/warc/CC-MAIN-20140909053229-00363-ip-10-180-136-8.ec2.internal.warc.gz"}
https://www.metaculus.com/questions/4794/ev-mass-of-next-fundamental-particle/?invite=9I6hgw
# eV Mass of Next Fundamental Particle

### Question

In 2012, the Higgs boson was discovered at the Large Hadron Collider with a mass of $125\times10^9$ eV. This observation of the Higgs completed the Standard Model, of which the Higgs mechanism was an important theoretical but experimentally unobserved part. There remain unexplained facts about physics, and theoretical difficulties with current models, that might be explained by the introduction of new fundamental particles. One popular extension to the Standard Model is supersymmetry, which predicts that each particle has a heavier supersymmetric partner. There are proposals for larger particle accelerators that could probe collisions at higher energies, such as the Future Circular Collider, which, if constructed, would have a center-of-mass collision energy of $10^{14}$ eV, though physicists are sceptical that any new physics would be discovered by them. One particularly exciting form of new physics would be the discovery of a particle in their energy range.

What will be the mass in eV of the next fundamental particle to be discovered?

Resolution will be the average mass listed for the particle by the Particle Data Group once scientific consensus emerges that the observed particle is a new fundamental particle. If multiple new particles are discovered in the same window of time, the first to have been observed will be considered the first discovered, even if it was not known to be a new fundamental particle at the time. The question resolves ambiguously if no new fundamental particle is discovered by 2070.
https://math.stackexchange.com/questions/138853/wave-separation
# Wave separation

Say there is a wave composed of sines and cosines (one can think of Fourier theory).

A) There is a wave that has the same frequency all the time; however, the amplitude (shape) of each period differs. Is it possible to separate this wave into a combination of waves that each have a constant amplitude?

B) Say there is a sum of two waves that have the same frequency but different amplitudes, and the amplitude of each wave is constant in time. Can the signal be decomposed into the two signals that were combined?

Thanks.

The Wikipedia article shows the simplest case,

$y(t)=[1 + M \cdot \cos(\omega_m t + \phi)]\cdot \sin(\omega_c t)$

Here we have a sinusoid of "central frequency" $\omega_c$ whose amplitude varies by a sinusoid of frequency $\omega_m$ (the "modulation frequency"). It's easy to show that this signal can be expressed as the sum of three sinusoids of frequencies $\omega_c$, $\omega_c+\omega_m$ and $\omega_c-\omega_m$.

For A, what you probably mean by "the same frequency all the time" is that the wave passes through zeros (maybe also maxima) at the right time spacing. It would be something like $f(t)=a_n\cos t$, where $a_n$ is the amplitude for period $n$. This is a well-defined function (given the list of $a_n$), but it has energy at frequencies other than $\frac 1{2\pi}$, as you can see from an FFT. In particular, the derivative is not continuous when you change between the $a$'s.
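The "easy to show" step is the product-to-sum identity $\sin A\cos B = \frac{1}{2}[\sin(A+B)+\sin(A-B)]$. As a numerical sanity check (a sketch, not part of the original answer; the parameter values are arbitrary), the AM signal and the three-sinusoid form agree at every sample point:

```python
import math

# The AM signal y(t) = [1 + M*cos(w_m*t + phi)] * sin(w_c*t) equals the sum
# of three constant-amplitude sinusoids, by sin(A)cos(B) = [sin(A+B)+sin(A-B)]/2:
#   y(t) = sin(w_c*t) + (M/2)*sin((w_c+w_m)*t + phi) + (M/2)*sin((w_c-w_m)*t - phi)

w_c, w_m, M, phi = 10.0, 1.0, 0.5, 0.3  # arbitrary example values

def am_signal(t):
    return (1 + M * math.cos(w_m * t + phi)) * math.sin(w_c * t)

def three_sinusoids(t):
    return (math.sin(w_c * t)
            + (M / 2) * math.sin((w_c + w_m) * t + phi)
            + (M / 2) * math.sin((w_c - w_m) * t - phi))

# The two expressions agree at every sample point, up to float rounding.
max_err = max(abs(am_signal(t) - three_sinusoids(t))
              for t in [k * 0.01 for k in range(1000)])
print(max_err)
```

An FFT of the signal would correspondingly show exactly three spectral lines, at $\omega_c$ and $\omega_c\pm\omega_m$.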
http://ravenflagpress.herokuapp.com/discussion/id/758/
##### Variation on A4

rahulchandrupatla: What if this problem asked us the probability that all 3 cards are different, assuming that there are at least two different card values in your hand? I got P(3 different) / P(at least two different card values in your hand), and the bottom has 2 cases: 3 different, or 2 the same and 1 different. So putting it all together gives $\frac{\binom{10}{3}\binom{3}{1}^3}{\binom{10}{3}\binom{3}{1}^3+\binom{10}{2}\binom{3}{2}\cdot 27}$. Can someone please help me and confirm or deny this?

weisbart: March 20, 2015, 1:44 a.m. This looks good. The 27 shows up because you remove all three cards with the same value, so you have 27 left. Good job.
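As a quick numeric check of the expression in the post (an aside, not from the original thread), the counts can be evaluated with Python's `math.comb`:

```python
from math import comb

# Conditional probability from the post:
# P(all 3 values different | at least two different values in the hand).
all_different = comb(10, 3) * comb(3, 1) ** 3   # choose 3 values, one suit each
two_same = comb(10, 2) * comb(3, 2) * 27        # pair + one of the 27 other cards
p = all_different / (all_different + two_same)
print(p)  # 3240 / 6885 = 8/17
```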
https://byjus.com/us/math/calculating-area-perimeter-of-composite-figures/
# Calculating Area, Perimeter of Composite Figures

Composite figures are figures made up of multiple shapes. We know how to find the area and perimeter of simple shapes like triangles, circles, and squares. Here we will combine those formulas to find the area and perimeter of composite shapes.

## Composite figures

Simple geometric shapes like triangles, squares, rectangles, semicircles, and other two-dimensional figures together make up composite figures. We can divide a composite figure, or another irregularly shaped figure, into simple, non-overlapping figures to find its area. We calculate the total area of the composite figure by adding the areas of the simpler figures together.

## Estimating perimeter and area using a square grid

Square grids are used for various types of unit measurements and to compare sizes. On a square grid, we can draw various shapes and calculate the total area by counting the full and half squares. We can also cut along lines we've drawn, put the pieces together in a different shape, and find the total area of the new shape. The shapes whose areas have to be measured are placed on the square grid. Construct various squares (1 × 1, 2 × 2, 3 × 3, 4 × 4) on the grid; if the figure contains different shapes, draw their outlines on the grid and count the unit squares to determine the area of each figure. Each square box has sides that measure 1 unit. Observe the pattern to see that a rectangle's area equals its length multiplied by its width.

Let's understand this topic using an example: let's calculate the perimeter and area of the arrow shown below.

Solution:

A. Step 1: Count how many grid squares the arrow covers. There are 16 of them.

Step 2: Count how many diagonal lengths there are around the arrow. There are four of them.

Step 3: Use 1.5 units for the length of the diagonals.
Length of 16 grid square lengths = 16 $$\times$$ 1 = 16 units.

Length of 4 diagonal lengths = 4 $$\times$$ 1.5 = 6 units.

So, the perimeter is about 16 + 6 = 22 units.

B. Step 1: Count how many squares are entirely within the figure. There are 14 of them.

Step 2: Count how many half squares there are in the diagram. There are 4 of them.

Area of 14 squares = 14 $$\times$$ 1 = 14 square units

Area of 4 half squares = 4 $$\times$$ 0.5 = 2 square units

So, the area is 14 + 2 = 16 square units

## Solved Area and Perimeter Composite Figures Examples

1) Calculate the area and perimeter of the given figure. Triangle: base = 8 ft, height = 10 ft; semicircle: diameter = 12 ft.

Solution: A triangle and a semicircle together make up the figure. The distance around the figure's triangular part is 8 + 10 = 18 feet. The distance around the semicircle is one-half the circumference of a circle with a diameter of 12 feet.

$$\frac{C}{2}~=~\frac{\pi d}{2}$$          (Dividing the circumference by 2)

= $$\frac{3.14~\times ~12}{2}$$          (Substituting 3.14 for "$$\pi$$" and 12 for "d")

= 18.84              (Simplified)

So, the perimeter is 18 + 18.84 = 36.84 feet.

The area of the triangle and the area of the semicircle must now be determined.

The area of the given triangle, A = $$\frac{1}{2}~bh$$ = $$\frac{1}{2}~(8)(10)$$ = 40 square feet

The area of the given semicircle, A = $$\frac{\pi r^2}{2}$$ = $$\frac{3.14~\times~ 6^2}{2}$$   (The semicircle has a radius of $$\frac{12}{2}$$ = 6 feet)

= 56.52 square feet

So, the area is 40 + 56.52 = 96.52 square feet.

2) Calculate the area and perimeter of the given figure.

Solution: Four semicircles and a square make up the figure. The distance around each semicircle is one-half the circumference of a circle with a diameter of 5 inches.
$$\frac{C}{2}~=~\frac{\pi d}{2}$$          (Dividing the circumference by 2)

= $$\frac{3.14~\times ~5}{2}$$           (Substituting 3.14 for "$$\pi$$" and 5 for "d")

= 7.85               (Simplified)

So, the perimeter is 4 $$~\times~$$ 7.85 = 31.4 inches (since there are 4 semicircles, we multiply by 4).

The area of the square and the areas of the semicircles must now be determined.

The area of the square, A = side $$~\times~$$ side = 5 $$~\times~$$ 5 = 25 square inches

The area of each semicircle, A = $$\frac{\pi r^2}{2}$$ = $$\frac{3.14~\times~ 2.5^2}{2}$$ (The semicircle has a radius of $$\frac{5}{2}$$ = 2.5 inches)

= 9.8125 square inches

So, the area is 25 + (4 $$~\times~$$ 9.8125) = 64.25 square inches.

3) Calculate the area and perimeter of the given figure.

Solution: A rectangle and two semicircles make up the figure. The distance around the figure's rectangular part is 4 + 4 + 4 + 4 + 4 + 4 = 24 feet. The distance around each semicircle is one-half the circumference of a circle with a diameter of 16 − (2 $$~\times~$$ 4) = 8 feet.

$$\frac{C}{2}~=~\frac{\pi d}{2}$$      (Dividing the circumference by 2)

= $$\frac{3.14~\times~ 8}{2}$$       (Substituting 3.14 for "$$\pi$$" and 8 for "d")

= 12.56          (Simplified)

So, the perimeter is about 24 + (2 $$~\times~$$ 12.56) = 49.12 feet (since there are 2 semicircles, we multiply by 2).

The area of the rectangle and the areas of the semicircles must now be determined.

The area of the rectangle, A = l $$~\times~$$ w = 16 $$~\times~$$ 4 = 64 square feet

The area of each semicircle, A = $$\frac{\pi r^2}{2}$$ = $$\frac{3.14~\times~ 4^2}{2}$$ (The semicircle has a radius of $$\frac{8}{2}$$ = 4 feet)

= 25.12 square feet

So, the area is 64 + (2 $$~\times~$$ 25.12) = 114.24 square feet.

4) The volleyball court's center circle is painted blue and has a radius of 4 feet. The rest of the court has a brown stain. One gallon of wood stain can cover a 60-square-foot area of the court.
How many gallons of wood stain are required to cover the court's brown area?

Solution:

Step 1: Understand the problem – you've been given the dimensions of a volleyball court. Given that one gallon of wood stain covers 60 square feet, you must calculate the number of gallons required to stain the brown portion of the court.

Step 2: Make a plan – find the rectangular court's entire area, subtract the area of the center circle, then divide by 60.

Step 3: Solve and check –

Finding the area of the rectangle, A = l $$~\times~$$ w = 90 $$~\times~$$ 60 = 5400 square feet

Finding the area of the circle, A = $$\pi r^2$$ = 3.14 $$~\times~$$ $$4^2$$ = 50.24 square feet

Therefore, the area that is stained brown is about 5400 − 50.24 = 5349.76 square feet. Because one gallon of wood stain covers 60 square feet, you will need 5349.76 ÷ 60 ≈ 89.2, or about 90 gallons of wood stain.

Frequently Asked Questions on Perimeter and Area of Composite Figures

Specific formulae can be used to find the areas of standard shapes. The area of a composite shape can be determined by breaking it down into individual standard shapes first.

The best way to analyze composite figures is to break them down into smaller pieces with easily recognisable characteristics. Recalling the characteristics of simpler figures, such as squares and circles, is crucial.
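The "break it into simple shapes, then add" procedure translates directly into code. Here is a sketch of Example 1 (triangle with 18 ft of exposed edge plus a semicircle of diameter 12 ft), using the same π ≈ 3.14 approximation as the lesson; the helper names are ours, not from the original page:

```python
# Composite figure from Example 1: triangle (base 8 ft, height 10 ft)
# joined to a semicircle of diameter 12 ft. The lesson uses pi ~ 3.14.
PI = 3.14

def semicircle_perimeter(d):
    return PI * d / 2          # half the circumference of the full circle

def semicircle_area(d):
    r = d / 2
    return PI * r ** 2 / 2     # half the area of the full circle

triangle_edges = 8 + 10        # exposed triangle sides, from the example
perimeter = triangle_edges + semicircle_perimeter(12)
area = 0.5 * 8 * 10 + semicircle_area(12)
print(perimeter, area)  # 36.84 ft and 96.52 sq ft, matching the worked solution
```

The same two helpers handle Examples 2 and 3: only the counts of semicircles and the dimensions of the square or rectangle change.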
https://byjus.com/question-answer/the-temperature-at-which-the-root-mean-square-velocity-of-so-2-molecules-is-the/
# Question

The temperature at which the root mean square velocity of $$SO_2$$ molecules is the same as that of $$O_2$$ at $$27^o C$$ is:

A) $$600^o C$$
B) $$300^o C$$
C) $$327^o C$$
D) $$27^o C$$

## Solution

The correct option is C, $$327^o C$$.

Setting the root mean square velocities equal (molar masses: $$M_{SO_2} = 64$$, $$M_{O_2} = 32$$; $$27^o C = 300$$ K):

$$\sqrt{\dfrac{3 R T_{SO_2}}{M_{SO_2}}} = \sqrt{\dfrac{3 R T_{O_2}}{M_{O_2}}}$$

$$\dfrac{T}{64} = \dfrac{300}{32}$$

$$T = \dfrac{300 \times 64}{32} = 600\ \text{K} = 327^o C$$
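Since $$v_{rms} = \sqrt{3RT/M}$$, equal speeds mean equal $$T/M$$ ratios, and the arithmetic can be checked in a few lines (a sketch, using the molar masses 64 and 32 g/mol from the solution):

```python
# Equal rms speeds require T_SO2 / M_SO2 = T_O2 / M_O2.
M_SO2, M_O2 = 64.0, 32.0   # molar masses in g/mol
T_O2 = 27 + 273            # 27 degrees C in kelvin
T_SO2 = T_O2 * M_SO2 / M_O2
print(T_SO2, T_SO2 - 273)  # 600.0 K, 327.0 degrees C
```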
http://mathhelpforum.com/advanced-algebra/36525-matrices.html
## Matrices Throughout this question B = will be the basis for given by: and S will denote the standard basis i) Determine the matrix which represents the identity map relative to the basis B for the domain and the basis S for the codomain. ii) If is the linear map which is determined by: write down the matrix representation iii) Use parts i) and ii) to compute the standard matrix representation, ,of T.
https://stats.libretexts.org/Bookshelves/Applied_Statistics/Book%3A_Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/11%3A_Hypothesis_Testing/11.08%3A_Effect_Size%2C_Sample_Size_and_Power
# 11.8: Effect Size, Sample Size and Power In previous sections I’ve emphasised the fact that the major design principle behind statistical hypothesis testing is that we try to control our Type I error rate. When we fix α=.05 we are attempting to ensure that only 5% of true null hypotheses are incorrectly rejected. However, this doesn’t mean that we don’t care about Type II errors. In fact, from the researcher’s perspective, the error of failing to reject the null when it is actually false is an extremely annoying one. With that in mind, a secondary goal of hypothesis testing is to try to minimise β, the Type II error rate, although we don’t usually talk in terms of minimising Type II errors. Instead, we talk about maximising the power of the test. Since power is defined as 1−β, this is the same thing. # 11.8.1 The power function Figure 11.4: Sampling distribution under the alternative hypothesis, for a population parameter value of θ=0.55. A reasonable proportion of the distribution lies in the rejection region. Let’s take a moment to think about what a Type II error actually is. A Type II error occurs when the alternative hypothesis is true, but we are nevertheless unable to reject the null hypothesis. Ideally, we’d be able to calculate a single number β that tells us the Type II error rate, in the same way that we can set α=.05 for the Type I error rate. Unfortunately, this is a lot trickier to do. To see this, notice that in my ESP study the alternative hypothesis actually corresponds to lots of possible values of θ. In fact, the alternative hypothesis corresponds to every value of θ except 0.5. Let’s suppose that the true probability of someone choosing the correct response is 55% (i.e., θ=.55). If so, then the true sampling distribution for X is not the same one that the null hypothesis predicts: the most likely value for X is now 55 out of 100. Not only that, the whole sampling distribution has now shifted, as shown in Figure 11.4. 
The critical regions, of course, do not change: by definition, the critical regions are based on what the null hypothesis predicts. What we're seeing in this figure is the fact that when the null hypothesis is wrong, a much larger proportion of the sampling distribution falls in the critical region. And of course that's what should happen: the probability of rejecting the null hypothesis is larger when the null hypothesis is actually false! However, θ=.55 is not the only possibility consistent with the alternative hypothesis. Let's instead suppose that the true value of θ is actually 0.7. What happens to the sampling distribution when this occurs? The answer, shown in Figure 11.5, is that almost the entirety of the sampling distribution has now moved into the critical region. Therefore, if θ=0.7 the probability of us correctly rejecting the null hypothesis (i.e., the power of the test) is much larger than if θ=0.55. In short, while θ=.55 and θ=.70 are both part of the alternative hypothesis, the Type II error rate is different.

Figure 11.5: Sampling distribution under the alternative hypothesis, for a population parameter value of θ=0.70. Almost all of the distribution lies in the rejection region.

Figure 11.6: The probability that we will reject the null hypothesis, plotted as a function of the true value of θ. Obviously, the test is more powerful (greater chance of correct rejection) if the true value of θ is very different from the value that the null hypothesis specifies (i.e., θ=.5). Notice that when θ actually is equal to .5 (plotted as a black dot), the null hypothesis is in fact true: rejecting the null hypothesis in this instance would be a Type I error.

What all this means is that the power of a test (i.e., 1−β) depends on the true value of θ. To illustrate this, I've calculated the expected probability of rejecting the null hypothesis for all values of θ, and plotted it in Figure 11.6.
This plot describes what is usually called the power function of the test. It's a nice summary of how good the test is, because it actually tells you the power (1−β) for all possible values of θ. As you can see, when the true value of θ is very close to 0.5, the power of the test drops very sharply, but when it is further away, the power is large.

# 11.8.2 Effect size

Since all models are wrong the scientist must be alert to what is importantly wrong. It is inappropriate to be concerned with mice when there are tigers abroad – George Box 1976

The plot shown in Figure 11.6 captures a fairly basic point about hypothesis testing. If the true state of the world is very different from what the null hypothesis predicts, then your power will be very high; but if the true state of the world is similar to the null (but not identical) then the power of the test is going to be very low. Therefore, it's useful to be able to have some way of quantifying how "similar" the true state of the world is to the null hypothesis. A statistic that does this is called a measure of effect size (e.g. Cohen 1988; Ellis 2010). Effect size is defined slightly differently in different contexts,165 (and so this section just talks in general terms) but the qualitative idea that it tries to capture is always the same: how big is the difference between the true population parameters, and the parameter values that are assumed by the null hypothesis? In our ESP example, if we let θ0=0.5 denote the value assumed by the null hypothesis, and let θ denote the true value, then a simple measure of effect size could be something like the difference between the true value and null (i.e., θ−θ0), or possibly just the magnitude of this difference, abs(θ−θ0).

|                        | big effect size                                 | small effect size                                |
|------------------------|-------------------------------------------------|--------------------------------------------------|
| significant result     | difference is real, and of practical importance | difference is real, but might not be interesting |
| non-significant result | no effect observed                              | no effect observed                               |

Why calculate effect size?
Let's assume that you've run your experiment, collected the data, and gotten a significant effect when you ran your hypothesis test. Isn't it enough just to say that you've gotten a significant effect? Surely that's the point of hypothesis testing? Well, sort of. Yes, the point of doing a hypothesis test is to try to demonstrate that the null hypothesis is wrong, but that's hardly the only thing we're interested in. If the null hypothesis claimed that θ=.5, and we show that it's wrong, we've only really told half of the story. Rejecting the null hypothesis implies that we believe that θ≠.5, but there's a big difference between θ=.51 and θ=.8. If we find that θ=.8, then not only have we found that the null hypothesis is wrong, it appears to be very wrong. On the other hand, suppose we've successfully rejected the null hypothesis, but it looks like the true value of θ is only .51 (this would only be possible with a large study). Sure, the null hypothesis is wrong, but it's not at all clear that we actually care, because the effect size is so small. In the context of my ESP study we might still care, since any demonstration of real psychic powers would actually be pretty cool166, but in other contexts a 1% difference isn't very interesting, even if it is a real difference. For instance, suppose we're looking at differences in high school exam scores between males and females, and it turns out that the female scores are 1% higher on average than the males. If I've got data from thousands of students, then this difference will almost certainly be statistically significant, but regardless of how small the p value is it's just not very interesting. You'd hardly want to go around proclaiming a crisis in boys' education on the basis of such a tiny difference, would you? It's for this reason that it is becoming more standard (slowly, but surely) to report some kind of standard measure of effect size along with the results of the hypothesis test.
The hypothesis test itself tells you whether you should believe that the effect you have observed is real (i.e., not just due to chance); the effect size tells you whether or not you should care. # 11.8.3 Increasing the power of your study Not surprisingly, scientists are fairly obsessed with maximising the power of their experiments. We want our experiments to work, and so we want to maximise the chance of rejecting the null hypothesis if it is false (and of course we usually want to believe that it is false!) As we’ve seen, one factor that influences power is the effect size. So the first thing you can do to increase your power is to increase the effect size. In practice, what this means is that you want to design your study in such a way that the effect size gets magnified. For instance, in my ESP study I might believe that psychic powers work best in a quiet, darkened room; with fewer distractions to cloud the mind. Therefore I would try to conduct my experiments in just such an environment: if I can strengthen people’s ESP abilities somehow, then the true value of θ will go up167 and therefore my effect size will be larger. In short, clever experimental design is one way to boost power; because it can alter the effect size. Unfortunately, it’s often the case that even with the best of experimental designs you may have only a small effect. Perhaps, for example, ESP really does exist, but even under the best of conditions it’s very very weak. Under those circumstances, your best bet for increasing power is to increase the sample size. In general, the more observations that you have available, the more likely it is that you can discriminate between two hypotheses. If I ran my ESP experiment with 10 participants, and 7 of them correctly guessed the colour of the hidden card, you wouldn’t be terribly impressed. But if I ran it with 10,000 participants and 7,000 of them got the answer right, you would be much more likely to think I had discovered something. 
In other words, power increases with the sample size. This is illustrated in Figure 11.7, which shows the power of the test for a true parameter of θ=0.7, for all sample sizes N from 1 to 100, where I'm assuming that the null hypothesis predicts that θ0=0.5.

Figure 11.7: The power of our test, plotted as a function of the sample size N. In this case, the true value of θ is 0.7, but the null hypothesis is that θ=0.5. Overall, larger N means greater power. (The small zig-zags in this function occur because of some odd interactions between θ, α and the fact that the binomial distribution is discrete; it doesn't matter for any serious purpose.)
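The power curve in Figure 11.7 can be computed exactly. Here is a sketch in Python (the book itself uses R) of one way to do it: build a two-sided rejection region from the symmetric tails of the null distribution, then sum the probability of that region under the true θ. The exact convention for the two-sided region is an assumption on our part, so the numbers may differ slightly from the book's figure:

```python
from math import comb

def binom_pmf(k, n, theta):
    # Binomial probability mass function, P(X = k) for X ~ Binomial(n, theta).
    return comb(n, k) * theta**k * (1 - theta)**(n - k)

def power(n, theta_true, theta0=0.5, alpha=0.05):
    # Rejection region: |X - n*theta0| >= c, with c the smallest value
    # keeping the Type I error rate at or below alpha. Using symmetric
    # tails is valid here because theta0 = 0.5 makes the null symmetric.
    mu = n * theta0
    def tail_prob(c):
        return sum(binom_pmf(k, n, theta0) for k in range(n + 1)
                   if abs(k - mu) >= c)
    c = 0
    while tail_prob(c) > alpha:
        c += 1
    # Power = probability of landing in the rejection region when the
    # true success probability is theta_true.
    return sum(binom_pmf(k, n, theta_true) for k in range(n + 1)
               if abs(k - mu) >= c)

print(power(100, 0.7))  # high power for N = 100, roughly 0.98
```

Evaluating `power(n, 0.7)` for n from 1 to 100 reproduces the rising (and slightly zig-zagging, because the binomial is discrete) curve of Figure 11.7.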
https://chemistry.stackexchange.com/questions/96793/why-are-magnesium-chloride-and-calcium-chloride-more-soluble-than-sodium-chlorid
Why are magnesium chloride and calcium chloride more soluble than sodium chloride? I read that $\ce{MgCl2}$ and $\ce{CaCl2}$ are more soluble than $\ce{NaCl}$ in water. The solubility of $\ce{MgCl2}$ is $\pu{543 g/L}$ and that of $\ce{NaCl}$ is $\pu{360 g/L}$ (both at $20\ ^{\circ}\pu{C}$). I would have thought that $\ce{NaCl}$ should be more soluble because of its greater ionic character: $\ce{Mg^{2+}}$ and $\ce{Ca^{2+}}$ are more polarizing, so their chlorides have more covalent character and should therefore be less soluble. I want to know why the opposite happens. • How about a few numbers? – Karl May 12 '18 at 8:18 • Who said greater ionic character means greater solubility? – Ivan Neretin May 12 '18 at 10:53 • There should be a balance. – amish dua May 12 '18 at 12:57
http://www.ck12.org/book/CK-12-Foundation-and-Leadership-Public-Schools%2C-College-Access-Reader%3A-Geometry/r1/section/4.11/
# 4.11: Triangle Similarity using SAS

Difficulty Level: At Grade. Created by: CK-12

## Learning Objectives

• Understand and apply the SAS Similarity Postulate.

## SAS for Similar Triangles

SAS (Side-Angle-Side) Similarity Postulate: If the lengths of two corresponding sides of two triangles are proportional and the included angles are congruent, then the triangles are similar.

Two triangles are similar if two pairs of corresponding sides are __________________________ and the included angles are _____________________________.

Example 1

Cheryl made the diagram below to investigate similar triangles more. She drew $\Delta ABC$ first, with $AB = 40$, $AC = 80$, and $m \angle A = 30^\circ$. Then Cheryl did the following: She drew $\overline{MN}$, and made $MN = 60$. Then she carefully drew $\overline{MP}$, making $MP = 120$ and $m \angle M = 30^\circ$.

At this point, Cheryl had drawn two segments ($\overline{MN}$ and $\overline{MP}$) with lengths that are proportional to the lengths of the corresponding sides of $\Delta ABC$, and she had made the included angle, $\angle M$, congruent to the included angle ($\angle A$) in $\Delta ABC$.

Then Cheryl measured angles. She found that:

$\angle B \cong \angle N$ and $\angle C \cong \angle P$

What could Cheryl conclude? Here again we have automatic results.
The other angles are automatically congruent, and the triangles are similar by AA. Cheryl's work supports the SAS for Similar Triangles Postulate.

1. In the SAS for similar triangles postulate, which parts are congruent in the similar triangles?

2. In the SAS for similar triangles postulate, which parts are proportional in the similar triangles?

3. The two triangles below are similar because of the SAS for similar triangles postulate. Mark the SAS congruent parts with tic marks and/or arcs. Create numbers that are proportional for the similar parts.

## Similar Triangles Summary

We've explored similar triangles extensively in several lessons. Let's summarize the conditions we've found that guarantee that two triangles are similar.

Two triangles are similar if and only if:

• the angles in the triangles are congruent.
• the lengths of corresponding sides in the polygons are proportional.

AA for Similar Triangles: If two pairs of corresponding angles in two triangles are congruent, then the triangles are similar.

SSS for Similar Triangles: If the lengths of the sides of two triangles are proportional, then the triangles are similar.

SAS for Similar Triangles: If the lengths of two corresponding sides of two triangles are proportional and the included angles are congruent, then the triangles are similar.

You can use the graphic organizer on the next page to keep all of this information in one place.

## Graphic Organizer for Lessons 9-10: Proving Similar Triangles

| Type of Similarity | What do the letters stand for? | What does this mean? | Draw a picture of two similar triangles and label parts | Describe corresponding congruent parts | Describe corresponding proportional parts |
|---|---|---|---|---|---|
| AA | | | | | |
| SSS | | | | | |
| SAS | | | | | |
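Cheryl's measurements can also be checked numerically. The short Python sketch below (the function and variable names are mine) uses the law of cosines to compute the third side of each triangle, and confirms that all three pairs of corresponding sides share the same ratio, 60/40 = 1.5, which is exactly what SAS similarity predicts:

```python
from math import cos, radians, sqrt, isclose

def third_side(s1, s2, included_deg):
    """Law of cosines: length of the side opposite the included angle."""
    return sqrt(s1**2 + s2**2 - 2 * s1 * s2 * cos(radians(included_deg)))

# Cheryl's triangles: AB = 40, AC = 80, angle A = 30 degrees;
#                     MN = 60, MP = 120, angle M = 30 degrees
bc = third_side(40, 80, 30)
np_len = third_side(60, 120, 30)

# all three ratios of corresponding sides are equal,
# so the triangles are similar
assert isclose(60 / 40, 120 / 80)
assert isclose(np_len / bc, 1.5)
```

Scaling both given sides by the same factor while keeping the included angle fixed scales the computed third side by that factor too, which is the algebraic content of the postulate.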
http://www.mathnet.ru/php/archive.phtml?jrnid=tmf&wshow=issue&year=1994&volume=101&volume_alt=&issue=1&issue_alt=&option_lang=eng
TMF (Theoretical and Mathematical Physics), 1994, Volume 101, Issue 1. Contents:

• Theory of the classical gravitational field and Mach's principle (A. A. Logunov), p. 3
• Diagram equations of the theory of fully developed turbulence (É. V. Teodorovich), p. 28
• Quantum dissipative systems II. String in curved affine-metric space-time (V. E. Tarasov), p. 38
• Application of supersymmetry and factorization methods to solution of Dirac and Schrödinger equations (B. G. Idlis, M. M. Musakhanov, M. Sh. Usmanov), p. 47
• On a problem posed by Pauli (B. Z. Moroz, A. M. Perelomov), p. 60
• The role of the continuous symmetries in exactly solvable discrete spectral problems (V. R. Kudashev), p. 66
• Three-dimensional covariant one-time equations for a system of $n$ spinor particles (E. A. Dei, V. N. Kapshai, N. B. Skachkov), p. 69
• The problem of metrizability of dynamical systems that admit normal shift (R. A. Sharipov), p. 85
• Expansion of the correlation functions of the grand canonical ensemble in powers of the activity (G. I. Kalmykov), p. 94
• Pinning effect in Peierls doped systems with deviation from half-filling of energy band (M. E. Palistrant), p. 110
• Equations of motion of rotating bodies in general relativity in the post-Newtonian approximation (M. V. Gorbatenko), p. 123
• Groups of spacetime transformations and symmetries of four-dimensional spacetime. II (V. P. Belov), p. 136
https://www.physicsforums.com/threads/magnets-and-qm.67893/
# Magnets and QM

1. Mar 19, 2005

### weio

Hi. How does a magnetic field build up in a natural magnet like iron, from a quantum mechanical point of view? I know it has to do with electron spin and orbital momentum, but how exactly does it work in a magnet? Thanks.

2. Mar 19, 2005

### clive

The main point of a QM theory of magnetism is the exchange interaction between neighboring spins:

$$H=-J \sum \vec{S_1}\cdot \vec{S_2}$$ (Heisenberg Hamiltonian)

where $J$ is the exchange integral. This energy is responsible for the spin alignment in ferromagnetic materials (because it is minimal when the two spins are parallel) and thus for the magnetic field created by these materials. The ferromagnetism cannot originate from the orbital magnetic moments of electrons (the Bohr-van Leeuwen theorem).

Hope it helps.....

3. Mar 19, 2005

### weio

Thanks a bunch! I will look that up.

weio
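The point about the exchange energy favoring parallel spins can be illustrated with a tiny numerical sketch. This is only a classical cartoon of the quantum Hamiltonian (spins treated as fixed unit vectors, names my own), but it shows why J > 0 drives alignment:

```python
def exchange_energy(J, s1, s2):
    """Heisenberg exchange energy E = -J (S1 . S2) for a single spin pair."""
    return -J * sum(a * b for a, b in zip(s1, s2))

J = 1.0  # J > 0: ferromagnetic coupling
up = (0.0, 0.0, 1.0)
down = (0.0, 0.0, -1.0)

e_parallel = exchange_energy(J, up, up)        # -J
e_antiparallel = exchange_energy(J, up, down)  # +J

# for J > 0 the parallel configuration has the lower energy,
# which is why neighboring spins align in a ferromagnet
assert e_parallel < e_antiparallel
```

Flipping the sign of J reverses the preference, which is the corresponding cartoon of antiferromagnetic coupling.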
https://proxies-free.com/equation-solving-nice-cubic-polynomials/
# equation solving – Nice cubic polynomials

Let a polynomial with integer coefficients be nice if

1. the polynomial has integer roots;
2. its derivative also has integer roots.

For instance $$p(x)=x(x-9)(x-24),\ p'(x)=3(x-4)(x-18)$$ is the smallest known nice cubic polynomial. Smallest here means a polynomial with the smallest absolute value of the largest coefficient ($$9\times24=216$$). But how can one verify, with the help of MA, that there are no smaller ones?
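One way to attack this is a short brute-force search; here is a Python sketch. Two assumptions on my part, mirroring the example: the smallest root is normalized to 0, and the roots are distinct (with a repeated root, trivially small examples like $x(x-3)^2$, whose derivative is $3(x-1)(x-3)$, exist):

```python
from math import isqrt

def derivative_roots_are_integers(b, c):
    """For p(x) = x(x-b)(x-c), check whether p'(x) = 3x^2 - 2(b+c)x + bc
    has two integer roots, ((b+c) +/- sqrt(b^2 - bc + c^2)) / 3."""
    disc = b * b - b * c + c * c        # = s^2 - 3q with s = b+c, q = bc
    d = isqrt(disc)
    if d * d != disc:                    # discriminant must be a perfect square
        return False
    s = b + c
    return (s + d) % 3 == 0 and (s - d) % 3 == 0

# search every 0 < b < c whose largest coefficient max(b+c, bc) is below 216
smaller = [(b, c) for c in range(2, 217) for b in range(1, c)
           if max(b + c, b * c) < 216 and derivative_roots_are_integers(b, c)]

assert derivative_roots_are_integers(9, 24)  # the known example, p' = 3(x-4)(x-18)
print(smaller)  # any hit here would beat the known example under this normalization
```

The search comes back empty, which supports (under the stated normalization) the claim that 216 cannot be improved for distinct roots.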
https://zbmath.org/?q=an:1215.32016
On normal forms of singular Levi-flat real analytic hypersurfaces. (English) Zbl 1215.32016

A classical theorem due to E. Cartan states that a real analytic smooth Levi-flat hypersurface $$M$$ in $$\mathbb{C}^{n}$$ is locally biholomorphic to a hypersurface of the form $$\{\mathcal{R}e(z_{1})=0\}$$. D. Burns and X. Gong [Am. J. Math. 121, No. 1, 23–53 (1999; Zbl 0931.32009)] proved that if $$M=F^{-1}(0)$$ is Levi-flat, where $$F:(\mathbb{C}^{n},0)\rightarrow(\mathbb{R},0)$$, $$n\geq 2$$, is a germ of real analytic function such that $F(z_{1},\dots,z_{n})=\mathcal{R}e(z_{1}^{2}+\dots+z_{n}^{2})+\text{h.o.t.},$ then $$M$$ is locally biholomorphic to a hypersurface of the form $$\{\mathcal{R}e(z_{1}^{2}+\dots+z_{n}^{2})=0\}$$.

In the paper under review, the author exhibits similar normal forms in a more general situation. More precisely, the author proves that if $F(z)=\mathcal{R}e(P(z)) + \text{h.o.t.},$ where $$F: (\mathbb C^n,0)\to(\mathbb R,0)$$ is a germ of a real analytic function such that $$M=F^{-1}(0)$$ is Levi-flat at $$0\in\mathbb{C}^{n}$$, $$n\geq{2}$$, and $$P(z)$$ is a homogeneous polynomial of degree $$k$$ with an isolated singularity at $$0\in\mathbb{C}^{n}$$ and Milnor number $$\mu$$, then there exists a holomorphic change of coordinates $$\phi$$ such that $$\phi(M)=\{\mathcal{R}e(h)=0\}$$, where $$h(z)$$ is a polynomial of degree $$\mu+1$$ and $$j^{k}_{0}(h)=P$$.

The idea of the proof is to study the singular set of the complexification of the Levi $$1$$-form and to apply a result of D. Cerveau and A. Lins Neto [Am. J. Math. 133, No. 3, 677–716 (2011; Zbl 1225.32038)]. The result follows from a generalization of the Morse lemma.

##### MSC:

32V40 Real submanifolds in complex manifolds
37F75 Dynamical aspects of holomorphic foliations and vector fields

##### Keywords:

Levi-flat hypersurfaces; holomorphic foliations

##### References:

[1] V.I. Arnold. Normal forms of functions in the neighbourhood of degenerate critical points. UMN, 29(2) (1974), 11–49; RMS, 29(2) (1974), 19–48.
[2] V.I. Arnold, S.M. Gusein-Zade and A.N. Varchenko. Singularities of Differentiable Maps. Vol. I, Monographs in Math., vol. 82, Birkhäuser (1985).
[3] D. Burns and X. Gong. Singular Levi-flat real analytic hypersurfaces. Amer. J. Math., 121 (1999), 23–53. Zbl 0931.32009
[4] D. Cerveau and A. Lins Neto. Local Levi-flat hypersurfaces invariants by a codimension one holomorphic foliation. To appear in Amer. J. Math. Zbl 1225.32038
[5] A. Fernández-Pérez. Singular Levi-flat hypersurfaces. An approach through holomorphic foliations. Ph.D. Thesis, IMPA, Brazil (2010).
[6] F. Loray. Pseudo-groupe d'une singularité de feuilletage holomorphe en dimension deux. Available at http://hal.archives-ouvertures.fr/ccsd-00016434
[7] J.F. Mattei and R. Moussu. Holonomie et intégrales premières. Ann. Sci. Éc. Norm. Sup., 13 (1980), 469–523. Zbl 0458.32005
https://brilliant.org/problems/aldehyde-and-ketone-just-to-learn-3/
# Aldehyde and ketone (just to learn 3)

Chemistry Level 2

Which of the following does not react with $$\ce{NaHSO3}$$ (sodium bisulfite) to form a bisulfite addition product?
http://stackoverflow.com/questions/16606004/how-to-improve-my-prime-number-sum-algorithm/16606131
# How to improve my prime-number-sum algorithm?

I have written code which returns the sum of all the prime numbers whose values are below 2 million, but it is taking a huge amount of time to produce the result (I waited 30 minutes for the answer). Can anyone suggest how to make the algorithm more efficient?

```java
public static void main(String[] args) {
    long sumPrime = 0;
    for (int primeNum = 2; primeNum < 2000000; primeNum++) {
        int factors = 0;
        for (int i = 1; i <= primeNum; i++) {
            if (primeNum % i == 0) {
                factors++; // total number of factors
            }
        }
        if (factors == 2) { // exactly two factors: primeNum is prime
            sumPrime += primeNum;
        }
    }
    System.out.println(sumPrime);
}
```

- Your algorithm isn't efficient. – Rupak May 17 '13 at 9:45
- Because you are running the loop 2 million times. – Makky May 17 '13 at 9:48
- Still, it shouldn't take 30 mins. I suspect an infinite loop. – sanbhat May 17 '13 at 9:50
- Look into the Sieve of Eratosthenes for computing the prime numbers, then sum them. – pcalcao May 17 '13 at 9:51
- @sanbhat There is no short-cutting when a divisor is found, so checking whether n is prime takes n divisions. For n = 1 to 2000000 that is ~2*10^12 divisions, which takes a couple of hours on normal consumer-grade hardware. – Daniel Fischer May 17 '13 at 11:31

Actually, the code is not optimized, because in the worst case the inner loop runs all the way up to the number being tested. You can optimize the primality check as well. Try the code below; it works fine:

```java
public static void main(String[] args) {
    // 2 is the only even prime, so start the sum with it;
    // the total overflows int, so use long
    long sum = 2;
    for (int i = 3; i <= 2000000; i++) {
        if (isPrime(i)) {
            sum += i;
        }
    }
    System.out.println("Sum = " + sum);
}

/*
 * We know 2 is the "oddest" prime – it happens to be the only even prime
 * number. Because of this, we need only check 2 separately, then traverse
 * odd numbers up to the square root of n.
 */
public static boolean isPrime(int n) {
    // check if n is a multiple of 2
    if (n % 2 == 0) return false;
    // if not, then just check the odds
    for (int i = 3; i * i <= n; i += 2) {
        if (n % i == 0) return false;
    }
    return true;
}
```

- Using some kind of storage for the primes (a List, for example) would greatly improve the whole thing, because it stops checking the same numbers we already know are not prime again and again. – LionC May 17 '13 at 10:04
- But that again will make the thing complex, space-wise. IMO, for this kind of code it's better not to store them in any in-memory data structure and simply progress in the naive manner with a bit of twiddling, as in the prime-number case: by checking up to the square root. What say? – roger_that May 17 '13 at 10:17
- The memory space it needs shouldn't be a problem on any device; at the end you have stored all prime numbers < 2,000,000 as Integers, which is about 1-2 MB of space. That is really not much, and it greatly improves the efficiency. – LionC May 17 '13 at 10:52

Check the sieve of Atkin algorithm. It is an optimized version of the ancient Sieve of Eratosthenes.

For a start:

• Does the `for(i=1;i<=primeNum;i++){` loop have to run to primeNum, or maybe just to the square root of primeNum?
• Does i have to be incremented by 1, or is 2 more efficient?
• ...

- Also, "do you have to continue with the loop after you found the number to be divisible by something?" – Dukeling May 17 '13 at 11:40
- Plus: do you have to reinvent the wheel… ;-) – Moritz Petersen May 17 '13 at 15:03

I won't provide you a full answer, as the objective of Project Euler (where I'm guessing you got this problem from) is to think about things and come up with solutions.
Anyway, I will leave some steps that will guide you in the right direction:

• Break up your code into two methods. If you implement the sieve correctly, it should bring your execution time down to a couple of seconds.

There are multiple things:

1. When you want to determine more than one prime, a sieve algorithm is far better. (Google for the Sieve of Eratosthenes, and then sum up.)
2. Even when using the naive algorithm, as you do, several improvements are possible:
   1. Think of it: except for the first prime, 2, all primes are odd, so you should not do `primeNum++` in your loop but rather `primeNum += 2` (and not start the loop at 1). [Runtime halved]
   2. In your inner loop you check, for every number smaller than the prime candidate, whether it is a factor of it. There you can also skip all even numbers (and always increase by two). [Runtime almost halved again]
   3. In the inner loop you can save even more. You don't need to check all numbers < prime, only those up to sqrt(prime), because when a number divides the candidate there must be two factors, and one of them must be smaller than (or equal to) the square root. [Runtime "rooted"]
   4. You don't want the factors of the prime; you only want to know whether the number is prime or not. So when you know it is not prime, do NOT continue testing: break out of the loop at the first factor found (to make that easier, do NOT test 1 and the number itself). This saves a huge amount of time. [Runtime reduced even more]

So with these tips, even your naive approach without a sieve will result in a runtime of less than 2 minutes.
So you can do this:- boolean is_prime = true; // start at 3 as 1 is always a factor and even numbers above 2 are definately not prime, terminate at n-1 as n is also a factor is_prime = false; break; } } This is now more efficient for non-primes. For primes it is doing too much, factors come in pairs: if a.b == c then a <= sqrt(c) and b >= sqrt(c), so the loop can safely terminate at sqrt(primeNum). You could compute sqrt(primeNum) before the loop but that would usually require using floating point functions. Instead or terminating when i > sqrt(primeNum), terminate the loop when i.i > primeNum. You can also remove the i.i multiplication and replace it with an extra variable and a couple of adds (left as an exercise for the reader). Another approach is to use a sieve, as others have mentioned, which is a simple method when there's a fixed upper limit to the search space. You can make a version that has no upper limit (memory size not withstanding) but is quite tricky to implement as it requires a bit of dynamic memory management. Not sure if a simple sieve would be faster than the factor search as you will be hitting memory with the sieve which has a big effect on speed. - It's the loop in the loop that causes the code to be slow. Here is some code i found that runs with same criteria in only a few seconds: void setup() { for (int i = 1; i<2000000; i++) { if (isPrime(i)) { System.out.println(i); } } } boolean isPrime(int number) { if (number < 2) return false; if (number == 2) return true; if (number % 2 == 0) return false; for (int i = 3; (i*i)<=number; i+=2) { if (number % i == 0 ) return false; } return true; } - THe i*i in the for-condition is inefficient, better to make it i<=sqrt(number). There the compiler can move the root computation in front of the loop and it is only performed once, and the test is just a comparison. 
(And in the outer loop one can also do `i += 2` instead of `i++`.) – flolo May 18 '13 at 10:31

- The outer loop is only there to demonstrate that it returns a good result or a bad result. Your formula with the sqrt is slower in all my test cases. If you convert the code to C, then your formula is faster. I tried to bit-twiddle the i*i formula, but the result is again slower. – Dippo May 18 '13 at 15:48

Your algorithm is really inefficient; for algorithms to calculate prime numbers, look here. Summing them up once you have them shouldn't be the problem.

This function sums the primes less than n using the Sieve of Eratosthenes:

```
function sumPrimes(n)
    sum := 0
    sieve := makeArray(2..n, True)
    for p from 2 to n step 1
        if sieve[p]
            sum := sum + p
            for i from p * p to n step p
                sieve[i] := False
    return sum
```

I'll leave it to you to translate to Java. For n = 2000000, this should run in one or two seconds.

In Python, you can accomplish it this way. Compared to some previous solutions, I try not to change the isPrime vector once I cross $\sqrt{n}$:

```python
import math

def sum_primes(nmax):
    maxDivisor = int(math.floor(math.sqrt(nmax)))
    # entry i is either False (not prime) or the number i itself;
    # False adds 0 to the sum, so sum(isPrime) is the sum of the primes
    isPrime = [False, False] + list(range(2, nmax + 1))
    for i in range(2, maxDivisor + 1):
        if isPrime[i] is not False:
            for dummy in range(i * 2, nmax + 1, i):
                isPrime[dummy] = False
    print(sum(isPrime))
```

Reduce the usage of if conditions; they slow down your code. Try to use ternary operators if possible, if this affects your speed, even though it is not recommended in Java 1.5. Try this solution instead of repeated if conditions, and post any difference:

```java
if ((factors == 2) && (primeNum < 2000000))
```

- That is totally not the reason for the code needing so much time; actually, depending on the compiler, both versions will generate the same bytecode anyway. – LionC May 17 '13 at 10:00
- Repeated if condition checking does affect performance. In this case, where the loop goes around 2 million times, it is definitely worth a try.
– Arun Raj May 17 '13 at 10:19

- This change affects the control flow of the program, but it won't make a big difference: 2,000,000 iterations of a test would take much less than one second, which isn't much compared to 30 minutes. Also, an optimising compiler might eliminate the primeNum test, as it is always true. – Skizz May 17 '13 at 10:44
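The sieve pseudocode given in one of the answers is straightforward to turn into a short runnable program. The answer suggested translating it to Java; here is one possible Python rendering of the same logic, for comparison:

```python
def sum_primes_below(n):
    """Sum all primes < n using a Sieve of Eratosthenes."""
    if n < 3:
        return 0
    sieve = [True] * n          # sieve[k]: is k still a prime candidate?
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            # every smaller multiple of p already has a smaller prime
            # factor, so crossing out can start at p*p
            for multiple in range(p * p, n, p):
                sieve[multiple] = False
    return sum(k for k, is_prime in enumerate(sieve) if is_prime)

print(sum_primes_below(10))   # 2 + 3 + 5 + 7 = 17
```

On typical hardware this sums the primes below two million in about a second, versus the half hour reported for the original double loop.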
http://citizendia.org/Cross_product
This article is about the cross product of two vectors. For related concepts, see cross product (disambiguation).

In mathematics, the cross product is a binary operation on two vectors in a three-dimensional Euclidean space that results in another vector which is perpendicular to the two input vectors. By contrast, the dot product produces a scalar result. In many engineering and physics problems, it is handy to be able to construct a perpendicular vector from two existing vectors, and the cross product provides a means for doing so. The cross product is also known as the vector product, or Gibbs vector product, after Josiah Willard Gibbs.

The cross product is not defined except in three dimensions (and the algebra defined by the cross product is not associative). Like the dot product, it depends on the metric of Euclidean space. Unlike the dot product, it also depends on the choice of orientation or "handedness".
Certain features of the cross product can be generalized to other situations. For arbitrary choices of orientation, the cross product must be regarded not as a vector, but as a pseudovector. For arbitrary choices of metric, and in arbitrary dimensions, the cross product can be generalized by the exterior product of vectors, defining a two-form instead of a vector.

Illustration of the cross product with respect to a right-handed coordinate system.

## Definition

Finding the direction of the cross product by the right-hand rule. For the related yet different principle relating to electromagnetic coils, see the right-hand grip rule.

The cross product of two vectors a and b is denoted by a × b. In physics, sometimes the notation a ∧ b is used[1] (mathematicians do not use this notation, to avoid confusion with the exterior product). In a three-dimensional Euclidean space, with a usual right-handed coordinate system, a × b is defined as a vector c that is perpendicular to both a and b, with a direction given by the right-hand rule and a magnitude equal to the area of the parallelogram that the vectors span.
The cross product is given by the formula

$\mathbf{a} \times \mathbf{b} = a b \sin \theta \ \mathbf{\hat{n}}$

where θ is the measure of the smaller angle between a and b (0° ≤ θ ≤ 180°), a and b are the magnitudes of vectors a and b, and $\mathbf{\hat{n}}$ is a unit vector perpendicular to the plane containing a and b. If the vectors a and b are collinear (i.e., the angle θ between them is either 0° or 180°), by the above formula, the cross product of a and b is the zero vector 0.

The direction of the vector $\mathbf{\hat{n}}$ is given by the right-hand rule, where one simply points the forefinger of the right hand in the direction of a and the middle finger in the direction of b. Then, the vector $\mathbf{\hat{n}}$ comes out of the thumb (see the illustration above). Using this rule implies that the cross product is anti-commutative, i.e., b × a = −(a × b).
By pointing the forefinger toward b first, and then pointing the middle finger toward a, the thumb will be forced in the opposite direction, reversing the sign of the product vector.

Using the cross product requires the handedness of the coordinate system to be taken into account (as explicit in the definition above). If a left-handed coordinate system is used, the direction of the vector $\mathbf{\hat{n}}$ is given by the left-hand rule and points in the opposite direction. This, however, creates a problem because transforming from one arbitrary reference system to another (e.g., a mirror image transformation from a right-handed to a left-handed coordinate system) should not change the direction of $\mathbf{\hat{n}}$. The problem is clarified by realizing that the cross product of two vectors is not a (true) vector, but rather a pseudovector. See cross product and handedness for more detail.

## Computing the cross product

### Coordinate notation

The unit vectors i, j, and k from the given orthogonal coordinate system satisfy the following equalities:

i × j = k           j × k = i           k × i = j.
With these rules, the coordinates of the cross product of two vectors can be computed easily, without the need to determine any angles. Let

$\mathbf{a} = a_1\mathbf{i} + a_2\mathbf{j} + a_3\mathbf{k} = (a_1, a_2, a_3)$ and $\mathbf{b} = b_1\mathbf{i} + b_2\mathbf{j} + b_3\mathbf{k} = (b_1, b_2, b_3).$

Then

$\mathbf{a} \times \mathbf{b} = (a_2 b_3 - a_3 b_2)\,\mathbf{i} + (a_3 b_1 - a_1 b_3)\,\mathbf{j} + (a_1 b_2 - a_2 b_1)\,\mathbf{k} = (a_2 b_3 - a_3 b_2,\ a_3 b_1 - a_1 b_3,\ a_1 b_2 - a_2 b_1).$

### Matrix notation

The definition of the cross product can also be represented by the determinant of a matrix:

$\mathbf{a}\times\mathbf{b}=\det \begin{bmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\a_1 & a_2 & a_3 \\b_1 & b_2 & b_3 \\\end{bmatrix}.$

This determinant can be computed using Sarrus' rule. Consider the table

$\begin{matrix}\mathbf{i} & \mathbf{j} & \mathbf{k} & \mathbf{i} & \mathbf{j} & \mathbf{k} \\a_1 & a_2 & a_3 & a_1 & a_2 & a_3 \\b_1 & b_2 & b_3 & b_1 & b_2 & b_3 \end{matrix}$

From the first three elements on the first row draw three diagonals to the right (e.g. the first diagonal would contain $\mathbf{i}$, $a_2$, and $b_3$), and from the last three elements on the first row draw three diagonals to the left (e.g. the first diagonal would contain $\mathbf{i}$, $a_3$, and $b_2$). Then multiply the elements on each of these six diagonals, and negate the last three products. The cross product is the sum of these products:

$\mathbf{i}a_2b_3 + \mathbf{j}a_3b_1 + \mathbf{k}a_1b_2 - \mathbf{i}a_3b_2 - \mathbf{j}a_1b_3 - \mathbf{k}a_2b_1.$

## Examples

### Example 1

Consider two vectors, a = (1,2,3) and b = (4,5,6).
The cross product a × b is

a × b = (1,2,3) × (4,5,6) = ((2×6 − 3×5), (3×4 − 1×6), (1×5 − 2×4)) = (−3, 6, −3).

### Example 2

Consider two vectors, a = (3,0,0) and b = (0,2,0). The cross product a × b is

a × b = (3,0,0) × (0,2,0) = ((0×0 − 0×2), (0×0 − 3×0), (3×2 − 0×0)) = (0, 0, 6).

This example has the following interpretations:

1. The area of the parallelogram (a rectangle in this case) is 2 × 3 = 6.
2. The cross product of any two vectors in the xy plane will be parallel to the z axis.
3. Since the z-component of the result is positive, the non-obtuse angle from a to b is counterclockwise (when observed from a point on the +z semiaxis, and when the coordinate system is right-handed).

## Properties

### Geometric meaning

See also: Triple product

Figure 1: The area of a parallelogram as a cross product.

Figure 2: The volume of a parallelepiped using dot and cross products; dashed lines show the projections of c onto a × b and of a onto b × c, a first step in finding dot products.

The magnitude of the cross product can be interpreted as the unsigned area of the parallelogram having a and b as sides (see Figure 1):

$| \mathbf{a} \times \mathbf{b}| = | \mathbf{a} | | \mathbf{b}| \sin \theta. \,\!$

Indeed, one can also compute the volume V of a parallelepiped having a, b and c as sides by using a combination of a cross product and a dot product, called the scalar triple product (see Figure 2):

$V = |\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})|.$

Figure 2 demonstrates that this volume can be found in two ways, showing geometrically that the identity holds that a "dot" and a "cross" can be interchanged without changing the result.
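The two worked examples above and the volume formula $V = |\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})|$ can be checked numerically. A minimal Python sketch (the helper names `cross` and `dot` are ours, not from the article):

```python
def cross(a, b):
    """Cross product of two 3-vectors, using the coordinate formula above."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    """Dot product of two vectors of equal length."""
    return sum(x * y for x, y in zip(a, b))

# Example 1 and Example 2:
print(cross((1, 2, 3), (4, 5, 6)))  # (-3, 6, -3)
print(cross((3, 0, 0), (0, 2, 0)))  # (0, 0, 6)

# Volume of the parallelepiped spanned by a, b, c: |a . (b x c)|.
a, b, c = (1, 0, 0), (0, 2, 0), (0, 0, 3)
print(abs(dot(a, cross(b, c))))     # 6
```

The result of `cross` is always perpendicular to both inputs, which the dot product confirms: `dot(a, cross(a, b))` is 0 for any a and b.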
In symbols, the interchange identity reads

$V =\mathbf{a \times b \cdot c} = \mathbf{a \cdot b \times c} \ .$

### Algebraic properties

The cross product is anticommutative,

a × b = −b × a,

distributive over addition,

a × (b + c) = (a × b) + (a × c),

and compatible with scalar multiplication, so that

(r a) × b = a × (r b) = r (a × b).

It is not associative, but satisfies the Jacobi identity:

a × (b × c) + b × (c × a) + c × (a × b) = 0.

It does not obey the cancellation law: if a × b = a × c and a ≠ 0, then we can write (a × b) − (a × c) = 0 and, by the distributive law above, a × (b − c) = 0. Now, if a is parallel to (b − c), then even if a ≠ 0 it is possible that (b − c) ≠ 0 and therefore that b ≠ c. However, if both a · b = a · c and a × b = a × c, then we can conclude that b = c. Indeed, a · (b − c) = 0 and a × (b − c) = 0, so that b − c is both parallel and perpendicular to the non-zero vector a. This is only possible if b − c = 0.

The distributivity, linearity and Jacobi identity show that R³ together with vector addition and cross product forms a Lie algebra.
In fact, the Lie algebra is that of the orthogonal group in 3 dimensions, SO(3). Further, two non-zero vectors a and b are parallel iff a × b = 0. It follows from the geometrical definition above that the cross product is invariant under rotations about the axis defined by a × b.

### Triple product expansion

Main article: Triple product

The triple product expansion, also known as Lagrange's formula, is a formula relating the cross product of three vectors (called the vector triple product) with the dot product:

a × (b × c) = b(a · c) − c(a · b).

The mnemonic "BAC minus CAB" is used to remember the order of the vectors in the right-hand member. This formula is used in physics to simplify vector calculations. A special case, regarding gradients and useful in vector calculus, is given below.
\begin{align} \nabla \times (\nabla \times \mathbf{f}) & {}= \nabla (\nabla \cdot \mathbf{f} ) - (\nabla \cdot \nabla) \mathbf{f} \\& {}= \mbox{grad }(\mbox{div } \mathbf{f} ) - \mbox{laplacian } \mathbf{f}.\end{align}

This is a special case of the more general Laplace-de Rham operator Δ = dδ + δd.

The following identity also relates the cross product and the dot product:

$|\mathbf{a} \times \mathbf{b}|^2 + |\mathbf{a} \cdot \mathbf{b}|^2 = |\mathbf{a}|^2 |\mathbf{b}|^2.$

This is a special case of the multiplicativity $|\mathbf{vw}| = |\mathbf{v}| |\mathbf{w}|$ of the norm in the quaternion algebra, and a restriction to $\mathbb{R}^3$ of Lagrange's identity.

## Alternative ways to compute the cross product

### Quaternions

Further information: quaternions and spatial rotation

The cross product can also be described in terms of quaternions, and this is why the letters i, j, k are a convention for the standard basis on $\mathbf{R}^3$: the basis vectors are thought of as the imaginary quaternions.
Notice for instance that the cross product relations given above among i, j, and k agree with the multiplicative relations among the quaternions i, j, and k. In general, if we represent a vector $[a_1, a_2, a_3]$ as the quaternion $a_1\mathbf{i} + a_2\mathbf{j} + a_3\mathbf{k}$, we obtain the cross product of two vectors by taking their product as quaternions and deleting the real part of the result. The real part will be the negative of the dot product of the two vectors.

### Conversion to matrix multiplication

A cross product between two vectors (which can only be defined in three-dimensional space) can be rewritten in terms of pure matrix multiplication as the product of a skew-symmetric matrix and a vector, as follows:

$\mathbf{a} \times \mathbf{b} = [\mathbf{a}]_{\times} \mathbf{b} = \begin{bmatrix}\,0&\!-a_3&\,\,a_2\\ \,\,a_3&0&\!-a_1\\-a_2&\,\,a_1&\,0\end{bmatrix}\begin{bmatrix}b_1\\b_2\\b_3\end{bmatrix}$

$\mathbf{b} \times \mathbf{a} = [\mathbf{a}]^T_{\times} \mathbf{b} = \begin{bmatrix}\,0&\,\,a_3&\!-a_2\\ -a_3&0&\,\,a_1\\\,\,a_2&\!-a_1&\,0\end{bmatrix}\begin{bmatrix}b_1\\b_2\\b_3\end{bmatrix}$

where

$[\mathbf{a}]_{\times} \stackrel{\rm def}{=} \begin{bmatrix}\,\,0&\!-a_3&\,\,\,a_2\\\,\,\,a_3&0&\!-a_1\\\!-a_2&\,\,a_1&\,\,0\end{bmatrix}.$

Also, if $\mathbf{a}$ is itself a cross product:

$\mathbf{a} = \mathbf{c} \times \mathbf{d}$

then

$[\mathbf{a}]_{\times} = (\mathbf{c}\mathbf{d}^T)^T - \mathbf{c}\mathbf{d}^T.$

This notation provides another way of generalizing the cross product to higher dimensions, by substituting pseudovectors (such as angular velocity or magnetic field) with such skew-symmetric
matrices. Such physical quantities have n(n−1)/2 independent components in n dimensions; for n = 3 this equals the number of dimensions (3·2/2 = 3), which is why such quantities can be (and most often are) represented as vectors. This notation is also often much easier to work with, for example, in epipolar geometry.

From the general properties of the cross product it follows immediately that

$[\mathbf{a}]_{\times} \, \mathbf{a} = \mathbf{0}$   and   $\mathbf{a}^{T} \, [\mathbf{a}]_{\times} = \mathbf{0}$

and from the fact that $[\mathbf{a}]_{\times}$ is skew-symmetric it follows that

$\mathbf{b}^{T} \, [\mathbf{a}]_{\times} \, \mathbf{b} = 0.$

The above-mentioned triple product expansion (bac-cab rule) can be easily proven using this notation. The above definition of $[\mathbf{a}]_{\times}$ means that there is a one-to-one mapping between the set of 3×3 skew-symmetric matrices, which form the Lie algebra so(3) of the rotation group SO(3), and the operation of taking the cross product with some vector $\mathbf{a}$.
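The identity $\mathbf{a} \times \mathbf{b} = [\mathbf{a}]_{\times}\mathbf{b}$, along with $[\mathbf{a}]_{\times}\mathbf{a} = \mathbf{0}$, can be verified numerically. A minimal sketch (helper names `skew` and `matvec` are ours):

```python
def cross(a, b):
    """Cross product of two 3-vectors via the coordinate formula."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def skew(a):
    """The skew-symmetric matrix [a]x such that [a]x b = a x b."""
    a1, a2, a3 = a
    return ((0, -a3, a2),
            (a3, 0, -a1),
            (-a2, a1, 0))

def matvec(m, v):
    """Multiply a 3x3 matrix (rows of tuples) by a 3-vector."""
    return tuple(sum(row[i] * v[i] for i in range(3)) for row in m)

a, b = (1, 2, 3), (4, 5, 6)
print(matvec(skew(a), b))  # (-3, 6, -3), identical to cross(a, b)
print(matvec(skew(a), a))  # (0, 0, 0): [a]x annihilates a itself
```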
### Index notation

The cross product can alternatively be defined in terms of the Levi-Civita symbol $\varepsilon_{ijk}$:

$\mathbf{a \times b} = \mathbf{c}\Leftrightarrow\ c_i = \sum_{j=1}^3 \sum_{k=1}^3 \varepsilon_{ijk} a_j b_k$

where the indices i, j, k correspond, as in the previous section, to orthogonal vector components.

### Mnemonic

The word xyzzy can be used to remember the definition of the cross product. If $\mathbf{a} = \mathbf{b} \times \mathbf{c}$ where:

$\mathbf{a} = \begin{bmatrix}a_x\\a_y\\a_z\end{bmatrix}, \mathbf{b} = \begin{bmatrix}b_x\\b_y\\b_z\end{bmatrix}, \mathbf{c} = \begin{bmatrix}c_x\\c_y\\c_z\end{bmatrix}$

then:

$a_x = b_y c_z - b_z c_y \,$

$a_y = b_z c_x - b_x c_z \,$

$a_z = b_x c_y - b_y c_x \,$

Notice that the second and third equations can be obtained from the first by simply vertically rotating the subscripts, x → y → z → x. The problem, of course, is how to remember the first equation, and two options are available for this purpose: either you remember the relevant two diagonals of Sarrus's scheme (those containing i), or you remember the xyzzy sequence. Since the first diagonal in Sarrus's scheme is just the main diagonal of the above-mentioned $3 \times 3$ matrix, the first three letters of the word xyzzy can be very easily remembered.
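The index-notation definition above can be implemented literally, summing $\varepsilon_{ijk} a_j b_k$ over all index pairs. A brute-force Python sketch (function names ours; 0-based indices replace the article's 1-based ones):

```python
def levi_civita(i, j, k):
    """Levi-Civita symbol for indices in {0, 1, 2}: +1 for even
    permutations of (0, 1, 2), -1 for odd ones, 0 if any index repeats."""
    if (i, j, k) in ((0, 1, 2), (1, 2, 0), (2, 0, 1)):
        return 1
    if (i, j, k) in ((0, 2, 1), (2, 1, 0), (1, 0, 2)):
        return -1
    return 0

def cross(a, b):
    """c_i = sum_{j,k} eps_{ijk} a_j b_k, the index-notation definition."""
    return tuple(sum(levi_civita(i, j, k) * a[j] * b[k]
                     for j in range(3) for k in range(3))
                 for i in range(3))

print(cross((1, 2, 3), (4, 5, 6)))  # (-3, 6, -3), matching Example 1
```

For each component i only two of the nine (j, k) terms are non-zero, which is why the sum collapses to the familiar two-term differences of the coordinate formula.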
## Applications

### Computational geometry

The cross product can be used to calculate the normal for a triangle or polygon, an operation frequently performed in computer graphics.

In computational geometry of the plane, the cross product is used to determine the sign of the acute angle defined by three points $p_1 = (x_1, y_1)$, $p_2 = (x_2, y_2)$ and $p_3 = (x_3, y_3)$. It corresponds to the direction of the cross product of the two coplanar vectors defined by the pairs of points $p_1, p_2$ and $p_1, p_3$, i.e., by the sign of the expression

$P = (x_2 - x_1)(y_3 - y_1) - (y_2 - y_1)(x_3 - x_1).$

In the "right-handed" coordinate system, if the result is 0, the points are collinear; if it is positive, the three points constitute a negative angle of rotation around $p_2$ from $p_1$ to $p_3$, otherwise a positive angle. From another point of view, the sign of P tells whether $p_3$ lies to the left or to the right of the line $p_1, p_2$.

### Other

The cross product occurs in the formula for the vector operator curl. It is also used to describe the Lorentz force experienced by a moving electric charge in a magnetic field.
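The planar sign test described above is the standard orientation predicate of computational geometry. A minimal sketch (the function name `orientation` is ours):

```python
def orientation(p1, p2, p3):
    """Sign of P = (x2-x1)(y3-y1) - (y2-y1)(x3-x1): the z-component of
    (p2 - p1) x (p3 - p1).  Positive: p3 lies to the left of the line
    through p1 and p2; zero: collinear; negative: p3 lies to the right."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1)

print(orientation((0, 0), (1, 0), (1, 1)))   # 1: left turn
print(orientation((0, 0), (1, 0), (2, 0)))   # 0: collinear
print(orientation((0, 0), (1, 0), (1, -1)))  # -1: right turn
```

With integer coordinates the test is exact; with floating-point coordinates, production code typically compares against a small tolerance or uses exact-arithmetic predicates instead of testing for exactly zero.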
The definitions of torque and angular momentum also involve the cross product.

The trick of rewriting a cross product in terms of a matrix multiplication appears frequently in epipolar and multi-view geometry, in particular when deriving matching constraints.

## Cross product as an exterior product

The cross product in relation to the exterior product: in red are the unit normal vector and the "parallel" unit bivector.

The cross product can be viewed in terms of the exterior product. This view allows for a natural geometric interpretation of the cross product. In exterior calculus the exterior product (or wedge product) of two vectors is a bivector. A bivector is an oriented plane element, in much the same way that a vector is an oriented line element. Given two vectors a and b, one can view the bivector a ∧ b as the oriented parallelogram spanned by a and b. The cross product is then obtained by taking the Hodge dual of the bivector a ∧ b, identifying 2-vectors with vectors:

$a \times b = * (a \wedge b) \,.$

This can be thought of as the oriented multi-dimensional element "perpendicular" to the bivector.
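In three dimensions the identification $*(a \wedge b) = a \times b$ can be checked componentwise. A small Python sketch (helper names ours; it assumes the usual Hodge convention $*(\mathbf{e}_2 \wedge \mathbf{e}_3) = \mathbf{e}_1$ and cyclic permutations, written with 0-based indices below):

```python
def wedge(a, b):
    """Bivector a ^ b, stored as its components B_ij = a_i b_j - a_j b_i
    for the three index pairs with i < j (0-based)."""
    return {(i, j): a[i] * b[j] - a[j] * b[i]
            for i in range(3) for j in range(3) if i < j}

def hodge_dual(bivector):
    """Hodge dual in R^3: maps the bivector components (B_12, B_20, B_01)
    (0-based) to a vector; note B_20 = -B_02."""
    return (bivector[(1, 2)], -bivector[(0, 2)], bivector[(0, 1)])

a, b = (1, 2, 3), (4, 5, 6)
print(hodge_dual(wedge(a, b)))  # (-3, 6, -3): exactly the cross product a x b
```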
Only in three dimensions is the result an oriented line element – a vector – whereas, for example, in 4 dimensions the Hodge dual of a bivector is two-dimensional – another oriented plane element. So, only in three dimensions is the cross product of a and b the vector dual to the bivector a ∧ b: it is perpendicular to the bivector, with orientation dependent on the coordinate system's handedness, and has the same magnitude relative to the unit normal vector as a ∧ b has relative to the unit bivector; precisely the properties described above.

## Cross product and handedness

When measurable quantities involve cross products, the handedness of the coordinate systems used cannot be arbitrary. However, when physics laws are written as equations, it should be possible to make an arbitrary choice of the coordinate system (including handedness). To avoid problems, one should be careful to never write down an equation where the two sides do not behave equally under all transformations that need to be considered. For example, if one side of the equation is a cross product of two vectors, one must take into account that when the handedness of the coordinate system is not fixed a priori, the result is not a (true) vector but a pseudovector. Therefore, for consistency, the other side must also be a pseudovector.

More generally, the result of a cross product may be either a vector or a pseudovector, depending on the type of its operands (vectors or pseudovectors).
Namely, vectors and pseudovectors are interrelated in the following ways under application of the cross product:

• vector × vector = pseudovector
• vector × pseudovector = vector
• pseudovector × pseudovector = pseudovector

Because the cross product may also be a (true) vector, it may not change direction with a mirror image transformation. This happens, according to the above relationships, if one of the operands is a (true) vector and the other one is a pseudovector (e.g., an operand that is itself the cross product of two vectors). For instance, a vector triple product involving three (true) vectors is a (true) vector.

A handedness-free approach is possible using exterior algebra.

## Higher dimensions

There are several ways to generalize the cross product to higher dimensions. In the context of multilinear algebra, it is possible to define a generalized cross product in terms of parity such that the generalized cross product between two vectors of dimension n is a skew-symmetric tensor of rank n−2.

### Using octonions

A cross product for 7-dimensional vectors can be obtained in the same way by using the octonions instead of the quaternions. The nonexistence of such cross products of two vectors in other dimensions is related to the result that the only normed division algebras are the ones with dimension 1, 2, 4, and 8.
### Wedge product

Main article: Exterior algebra

In general dimension, there is no direct analogue of the binary cross product. There is, however, the wedge product, which has similar properties, except that the wedge product of two vectors is now a 2-vector instead of an ordinary vector. As mentioned above, the cross product can be interpreted as the wedge product in three dimensions after using Hodge duality to identify 2-vectors with vectors.

One can also construct an n-ary analogue of the cross product in $\mathbf{R}^{n+1}$ given by

$\bigwedge(\mathbf{v}_1,\cdots,\mathbf{v}_n)=\begin{vmatrix} v_1{}^1 &\cdots &v_1{}^{n+1}\\\vdots &\ddots &\vdots\\v_n{}^1 & \cdots &v_n{}^{n+1}\\\mathbf{e}_1 &\cdots &\mathbf{e}_{n+1}\end{vmatrix}.$

This formula is identical in structure to the determinant formula for the normal cross product in $\mathbf{R}^3$ except that the row of basis vectors is the last row in the determinant rather than the first. The reason for this is to ensure that the ordered vectors $(\mathbf{v}_1, \ldots, \mathbf{v}_n, \bigwedge(\mathbf{v}_1, \ldots, \mathbf{v}_n))$ have a positive orientation with respect to $(\mathbf{e}_1, \ldots, \mathbf{e}_{n+1})$. If n is even, this modification leaves the value unchanged, so this convention agrees with the normal definition of the binary product. In the case that n is odd, however, the distinction must be kept. This n-ary form enjoys many of the same properties as the vector cross product: it is alternating and linear in its arguments, it is perpendicular to each argument, and its magnitude gives the hypervolume of the region bounded by the arguments.
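The n-ary formula can be evaluated by expanding the determinant along its last row of basis vectors: the $\mathbf{e}_j$ component is the cofactor obtained by deleting column j. A brute-force Python illustration (the helper names `det` and `nary_cross` are ours; Laplace expansion is exponential-time, fine only for small n):

```python
def det(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def nary_cross(*vectors):
    """n-ary cross product of n vectors in R^(n+1): component j is the
    signed minor obtained by deleting column j (cofactor of e_j in the
    determinant formula, whose basis-vector row is the last row)."""
    n = len(vectors)
    assert all(len(v) == n + 1 for v in vectors)
    rows = [list(v) for v in vectors]
    return tuple((-1) ** (n + j) * det([row[:j] + row[j + 1:] for row in rows])
                 for j in range(n + 1))

# n = 2 recovers the ordinary cross product:
print(nary_cross([1, 2, 3], [4, 5, 6]))  # (-3, 6, -3)
# n = 3 in R^4: the product of e1, e2, e3 is e4, giving positive orientation:
print(nary_cross([1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]))  # (0, 0, 0, 1)
```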
And just like the vector cross product, it can be defined in a coordinate-independent way as the Hodge dual of the wedge product of the arguments. The wedge product and dot product can be combined to form the Clifford product.

## History

In 1773, Joseph Louis Lagrange introduced the component form of both the dot and cross products in order to study the tetrahedron in three dimensions.[2] In 1843 the Irish mathematical physicist Sir William Rowan Hamilton introduced the quaternion product, and with it the terms "vector" and "scalar". Given two quaternions [0, u] and [0, v], where u and v are vectors in R³, their quaternion product can be summarized as [−u·v, u×v]. James Clerk Maxwell used Hamilton's quaternion tools to develop his famous electromagnetism equations, and for this and other reasons quaternions for a time were an essential part of physics education. However, Oliver Heaviside in England and Josiah Willard Gibbs in Connecticut felt that quaternion methods were too cumbersome, often requiring the scalar or vector part of a result to be extracted.
Thus, about forty years after the quaternion product, the dot product and cross product were introduced — to heated opposition. Pivotal to (eventual) acceptance was the efficiency of the new approach, allowing Heaviside to reduce the equations of electromagnetism from Maxwell's original 20 to the four commonly seen today.

Largely independent of this development, and largely unappreciated at the time, Hermann Grassmann created a geometric algebra not tied to dimension two or three, with the exterior product playing a central role. William Kingdon Clifford combined the algebras of Hamilton and Grassmann to produce Clifford algebra, where in the case of three-dimensional vectors the bivector produced from two vectors dualizes to a vector, thus reproducing the cross product.

The cross notation, which began with Gibbs, inspired the name "cross product". Originally appearing in privately published notes for his students in 1881 as Elements of Vector Analysis, Gibbs's notation — and the name — later reached a wider audience through Vector Analysis (Gibbs/Wilson), a textbook by a former student.
Edwin Bidwell Wilson rearranged material from Gibbs's lectures, together with material from publications by Heaviside, Föppl, and Hamilton. He divided vector analysis into three parts: "First, that which concerns addition and the scalar and vector products of vectors. Second, that which concerns the differential and integral calculus in its relations to scalar and vector functions. Third, that which contains the theory of the linear vector function." Two main kinds of vector multiplication were defined, and they were called as follows:

• The direct, scalar, or dot product of two vectors
• The skew, vector, or cross product of two vectors

Several kinds of triple products and products of more than three vectors were also examined, and the above-mentioned triple product expansion was included.

## See also

• Triple products: products involving three vectors.
• Multiple cross products: products involving more than three vectors.
• Dot product
• Cartesian product: a product of two sets.
• × (the symbol)

## Notes

1. ^ Jeffreys, H and Jeffreys, BS (1999).
Methods of Mathematical Physics. Cambridge University Press.
2. ^ Lagrange, JL (1773). "Solutions analytiques de quelques problèmes sur les pyramides triangulaires", Oeuvres, vol. 3.
https://questions.examside.com/past-years/gate/gate-ece/analog-circuits/oscillators
GATE ECE Analog Circuits: Oscillators, Previous Years' Questions

Marks 1

• Consider the oscillator circuit shown in the figure. The function of the network (shown in dotted lines) consisting of the 100 k$$\Omega$$ resistor i...

Marks 2

• The value of C required for a sinusoidal oscillation of frequency 1 kHz in the circuit of the figure is ...
• The oscillator circuit shown in the figure has an ideal inverting amplifier. Its frequency of oscillation (in Hz) is ...
• The circuit in the figure employs positive feedback and is intended to generate sinusoidal oscillation. If at a frequency f$$_0$$, B(f) = $${{\Delta {V...
• The oscillator circuit shown in the figure is ...
• The value of R in the oscillator shown in the given figure is so chosen that it just oscillates at an angular frequency of $$\omega$$. The value of ...
• Match the following. GROUP-1: (A) Hartley, (B) Wien bridge, (C) Crystal. GROUP-2: (1) Low-frequency oscillator, (2) High-frequency oscillator, (3) Stable fr...

Marks 5

• Find the value of $${R^1}$$ in the circuit of the figure for generating sinusoidal oscillations. Find the frequency of oscillations. ...
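Several of the questions above involve the Wien bridge oscillator, whose oscillation frequency with equal R and C in both arms is the standard result $$f_0 = 1/(2\pi RC)$$. A quick Python sketch (the 10 kΩ value is illustrative, not taken from any of the exam problems):

```python
import math

def wien_bridge_freq(R, C):
    """Oscillation frequency of a Wien bridge with equal R and C:
    f0 = 1 / (2*pi*R*C), with R in ohms and C in farads."""
    return 1.0 / (2.0 * math.pi * R * C)

def wien_bridge_C(R, f0):
    """Capacitance needed for a target oscillation frequency f0 with a given R."""
    return 1.0 / (2.0 * math.pi * R * f0)

# Example (assumed R): with R = 10 kOhm, the C needed for 1 kHz oscillation
C = wien_bridge_C(10e3, 1e3)   # roughly 16 nF
```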
http://tex.stackexchange.com/questions/55735/enumerate-environments-within-theorems?answertab=oldest
# Enumerate environments within theorems

How can one create an environment for enumerated items that ties the enumeration to the enclosing theorem environment? Example:

Definition 1.1 Let X be a set. An algebra over X is a collection C of subsets of X satisfying

D 1.1.1 If A is an element of C, then X\A is an element of C;
D 1.1.2 If A and B are both elements of C, then the union A U B is an element of C.

Proposition 1.1 Let C be an algebra over a set X; then the following sentences are true

P 1.1.1 The empty set is an element of C;
P 1.1.2 The set X is an element of C;
P 1.1.3 Every finite union of elements of C is an element of C;
P 1.1.4 Every finite intersection of elements of C is an element of C.

It would be quite useful if I could label them for further reference.

-

It is always best to compose a fully compilable MWE that illustrates the problem including the \documentclass and the appropriate packages so that those trying to help don't have to recreate it. This will also serve as a test case and ensure that the solution actually works for you. –  Peter Grill May 13 '12 at 23:57

Peter Grill's comment is particularly relevant here because there are so many different ways to create theorems: which one are you using? –  cmhughes May 14 '12 at 2:56

@cmhughes: in this particular case, I disagree. The solution I provided will work with only some minor formatting changes whenever the theorem-like structures are defined as environments and have an associated counter (and those conditions are always satisfied with the usual methods for creating theorems: amsthm, ntheorem, mdframed). –  Gonzalo Medina May 14 '12 at 14:34

@GonzaloMedina I agree: your solution is very robust. However, if a specific theorem package was specified then there would be no need for etoolbox, as the setlist trick you wrote could have been built into the environment definition. –  cmhughes May 14 '12 at 14:51

@cmhughes: Ah, I see your point.
Should we delete our comments or do you think we can leave them? –  Gonzalo Medina May 14 '12 at 18:58

Here's one possible solution using the enumitem package to define a new list-like environment whose label uses a variable prefix; this prefix is controlled by a macro and, with the help of the etoolbox package, the theorem-like environments are patched to redefine the prefix. Labeling and cross-referencing items is then done as usual. According to barbara beeton's comment, provisions were made to have the labels with a final period and the cross-references without it. Also, italicized item numbers were suppressed.

```latex
\documentclass{book}
\usepackage{amsthm}
\usepackage{enumitem}
\usepackage{etoolbox}

\newtheorem{prop}{Proposition}[chapter]
\theoremstyle{definition}
\newtheorem{defi}{Definition}[chapter]

\newcommand\EnumPrefix{}
\newlist{senenum}{enumerate}{10}
\setlist[senenum]{label=\EnumPrefix.,ref=\EnumPrefix,leftmargin=*}
\AtBeginEnvironment{defi}{\renewcommand\EnumPrefix{\normalfont\bfseries D.\thedefi.\arabic*}}
\AtBeginEnvironment{prop}{\renewcommand\EnumPrefix{\normalfont\bfseries P.\theprop.\arabic*}}

\begin{document}

\chapter{Test Chapter}

\begin{defi}
Let $X$ be a set. An algebra over $X$ is a collection $C$ of subsets of $X$ satisfying
\begin{senenum}
  \item If $A$ is an element of $C$, then $X\setminus A$ is an element of $C$;
  \item If $A$ and $B$ are both elements of $C$, then the union $A\cup B$ is an element of $C$.
\end{senenum}
\end{defi}

\begin{prop}
Let $C$ be an algebra over a set $X$; then the following sentences are true
\begin{senenum}
  \item\label{ite:algempty} The empty set is an element of $C$;
  \item The set $X$ is an element of $C$;
  \item Every finite union of elements of $C$ is an element of $C$;
  \item\label{ite:alginter} Every finite intersection of elements of $C$ is an element of $C$.
\end{senenum}
\end{prop}

In the proof of the equivalence of \ref{ite:alginter} and \ref{ite:algempty}, we used,...

\end{document}
```

The above approach focuses on the list-like environment, therefore it is ready to use (with only minor changes) in the case in which ntheorem is used to define the theorem-like structures (ntheorem doesn't use a default period at the end of theorem numbering):

```latex
\documentclass{book}
\usepackage{ntheorem}
\usepackage{enumitem}
\usepackage{etoolbox}

\newtheorem{prop}{Proposition}[chapter]
\theoremstyle{changebreak}
\newtheorem{defi}{Definition}[chapter]

\newcommand\EnumPrefix{}
\newlist{senenum}{enumerate}{10}
\setlist[senenum]{label=\EnumPrefix,leftmargin=*}
\AtBeginEnvironment{defi}{\renewcommand\EnumPrefix{\normalfont\bfseries D.\thedefi.\arabic*}}
\AtBeginEnvironment{prop}{\renewcommand\EnumPrefix{\normalfont\bfseries P.\theprop.\arabic*}}

\begin{document}

\chapter{Test Chapter}

\begin{defi}
Let $X$ be a set. An algebra over $X$ is a collection $C$ of subsets of $X$ satisfying
\begin{senenum}
  \item If $A$ is an element of $C$, then $X\setminus A$ is an element of $C$;
  \item If $A$ and $B$ are both elements of $C$, then the union $A\cup B$ is an element of $C$.
\end{senenum}
\end{defi}

\begin{prop}
Let $C$ be an algebra over a set $X$; then the following sentences are true
\begin{senenum}
  \item\label{ite:algempty} The empty set is an element of $C$;
  \item The set $X$ is an element of $C$;
  \item Every finite union of elements of $C$ is an element of $C$;
  \item\label{ite:alginter} Every finite intersection of elements of $C$ is an element of $C$.
\end{senenum}
\end{prop}

In the proof of the equivalence of \ref{ite:alginter} and \ref{ite:algempty}, we used,...

\end{document}
```

When using ntheorem, there's even another option, not requiring the etoolbox package, since \theoremprework can be used to redefine appropriately the prefix used for the list-like environment (as suggested by cmhughes in a comment to the original question); here's the code corresponding to this approach and producing the same result as before:

```latex
\documentclass{book}
\usepackage{ntheorem}
\usepackage{enumitem}

\newcommand\EnumPrefix{}
\theoremprework{\renewcommand\EnumPrefix{\normalfont\bfseries P.\theprop.\arabic*}}
\newtheorem{prop}{Proposition}[chapter]
\theoremstyle{changebreak}
\theoremprework{\renewcommand\EnumPrefix{\normalfont\bfseries D.\thedefi.\arabic*}}
\newtheorem{defi}{Definition}[chapter]
\newlist{senenum}{enumerate}{10}
\setlist[senenum]{label=\EnumPrefix,leftmargin=*}

\begin{document}

\chapter{Test Chapter}

\begin{defi}
Let $X$ be a set. An algebra over $X$ is a collection $C$ of subsets of $X$ satisfying
\begin{senenum}
  \item If $A$ is an element of $C$, then $X\setminus A$ is an element of $C$;
  \item If $A$ and $B$ are both elements of $C$, then the union $A\cup B$ is an element of $C$.
\end{senenum}
\end{defi}

\begin{prop}
Let $C$ be an algebra over a set $X$; then the following sentences are true
\begin{senenum}
  \item\label{ite:algempty} The empty set is an element of $C$;
  \item The set $X$ is an element of $C$;
  \item Every finite union of elements of $C$ is an element of $C$;
  \item\label{ite:alginter} Every finite intersection of elements of $C$ is an element of $C$.
\end{senenum}
\end{prop}

In the proof of the equivalence of \ref{ite:alginter} and \ref{ite:algempty}, we used,...

\end{document}
```

-

keeping the period at the end of the xrefs is confusing. also (depending on house style), italicizing item numbers may be deprecated. methods of avoiding those situations would be appreciated. –  barbara beeton May 14 '12 at 13:30

@barbarabeeton: thank you for your comment; I've updated my answer incorporating your suggestions.
–  Gonzalo Medina May 14 '12 at 13:50

@PauloHenrique it depends on the desired kind of indentation. You can change leftmargin=* for leftmargin=<length>, where <length> is an appropriate length. –  Gonzalo Medina Apr 30 '13 at 14:25

@PauloHenrique What value did you use for leftmargin? Since the labels are wide, you need a rather big value. –  Gonzalo Medina Apr 30 '13 at 14:46

@PauloHenrique as I said, since your labels are wide, you need a bigger value. –  Gonzalo Medina Apr 30 '13 at 14:49
https://myelectrical.com/notes/entryid/116/effect-of-temperature-on-lead-batteries
Lead acid batteries are cost effective and reliable, making them suitable for many applications. This note examines topics of interest associated with the use of these batteries.

## Discharge & Peukert's Law

The capacity of lead acid batteries decreases as the discharge rate is increased. The behaviour of a battery under these conditions is described by Peukert's law (first proposed by the German scientist Peukert in 1897):

$t = \frac{C_p}{I^k}$

Where:

t = time to discharge the battery, in hours
Cp = battery capacity at a 1 A discharge rate, in A·h
I = the actual discharge current, in A
k = Peukert constant (dependent on the battery, typically 1.1 to 1.3)

Typically batteries are rated at a discharge time H (in hours) and rated capacity C. Peukert's law can then be expressed as:

$t = H \left( \frac{C}{IH} \right)^{k}$

Peukert's law is good for reasonably constant rates of discharge. For variable and non-linear rates, it starts to become inaccurate. Replacing I with the average current during the discharge will give a better result, but it is still limited. In this instance several methods can be used to improve accuracy, including:

• Rakhmatov and Vrudhula model - looks at the actual diffusion processes within the battery to derive a more accurate analysis
• Kinetic battery model - uses the chemical kinetics process as a basis for developing a discharge model
• Stochastic models - analyse the battery as a stochastic process

Typical accuracy using Peukert's law is in the order of 10% error. Rakhmatov and Vrudhula models improve on this, having errors around 5%, while kinetic and stochastic models perform even better, with errors as low as 1 to 2%[1].

## Effect of Temperature

(Figure: effect of temperature on battery life.)

One serious drawback of lead acid batteries compared to some other batteries (NiCad, for example) is that they are affected by temperature.
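Peukert's law in its rated-capacity form, t = H(C/(IH))^k, can be sketched directly; the battery values below are illustrative, not from any datasheet:

```python
def peukert_time(C, H, I, k):
    """Discharge time in hours from rated capacity C (A·h at the H-hour rate),
    actual discharge current I (A), and Peukert constant k:
    t = H * (C / (I * H)) ** k."""
    return H * (C / (I * H)) ** k

# Illustrative 100 A·h battery rated at the 20-hour rate, with an assumed k = 1.2.
t_rated = peukert_time(100, 20, 5, 1.2)   # at the rated 5 A this recovers the full 20 h
t_fast  = peukert_time(100, 20, 10, 1.2)  # doubling the current more than halves the time
```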
Lead acid batteries should only be used where they are installed in conditioned environments not subject to excessive temperatures. Typically the rating for lead acid batteries is based on an ambient temperature of 25°C. For every 8°C above ambient during use, the life of the battery will be reduced by 50%. Ideally batteries should be operated at 25°C or less.

In addition to operation, storage of batteries waiting for use is also affected by temperature. If lead acid batteries are stored at elevated temperatures (particularly in a discharged condition), they will effectively become useless. If storing batteries, they should be charged and stored at 25°C or less. Batteries will self-discharge over time and need to be recharged periodically.

## References

• [1] Battery Modeling, M.R. Jongerden and B.R. Haverkort - doc.utwente.nl/64556/1/BatteryRep4.pdf, accessed November 2012.

About the author: Steven has over twenty-five years' experience working on some of the largest construction projects. He has a deep technical understanding of electrical engineering and is keen to share this knowledge.
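The temperature rule quoted in this note (life halves for every 8°C above the 25°C rating) implies a derating factor of 0.5^((T − 25)/8). The function below is an illustration of that rule of thumb, not a manufacturer model:

```python
def life_fraction(T_celsius, T_rated=25.0, halving_step=8.0):
    """Fraction of rated service life at operating temperature T_celsius,
    assuming life halves for every `halving_step` °C above the rated 25 °C."""
    if T_celsius <= T_rated:
        return 1.0  # the rule of thumb applies above the rated temperature only
    return 0.5 ** ((T_celsius - T_rated) / halving_step)

# At 33 °C (8 °C above rated) the expected life is 50% of rated;
# at 41 °C it is down to 25%.
assert life_fraction(33.0) == 0.5
assert life_fraction(41.0) == 0.25
```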
https://www.gradesaver.com/textbooks/math/calculus/calculus-early-transcendentals-8th-edition/chapter-4-section-4-4-indeterminate-forms-and-l-hospital-s-rule-4-4-exercises-page-311/14
## Calculus: Early Transcendentals 8th Edition $$\lim_{x\to0}\frac{\tan 3x}{\sin 2x}=\frac{3}{2}$$ $$A=\lim_{x\to0}\frac{\tan 3x}{\sin 2x}$$ This exercise can be carried out by both methods. 1) Method 1: Elementary method The thing that makes this limit an indeterminate form is that $\sin 2x=0$, so we will try to eliminate $\sin 2x$ here. $$A=\lim_{x\to0}\frac{\frac{\sin 3x}{\cos 3x}}{\sin 2x}$$ $$A=\lim_{x\to0}\frac{\sin(2x+x)}{\cos 3x\sin 2x}$$ Now we must remember that $$\sin (a+b)=\sin a\cos b+\sin b\cos a.$$ So, $$A=\lim_{x\to0}\frac{\sin 2x\cos x+\sin x\cos 2x}{\cos 3x\sin 2x}$$ $$A=\lim_{x\to0}\frac{\sin 2x\cos x}{\cos 3x\sin 2x}+\lim_{x\to0}\frac{\sin x\cos 2x}{\cos 3x\sin 2x}$$ $$A=\lim_{x\to0}\frac{\cos x}{\cos 3x}+\lim_{x\to0}\frac{\sin x\cos 2x}{\cos 3x(2\sin x\cos x)}$$ (for $\sin 2x=2\sin x\cos x$) $$A=\frac{\cos 0}{\cos (3\times0)}+\lim_{x\to0}\frac{\cos 2x}{2\cos x\cos 3x}$$ $$A=\frac{1}{1}+\frac{\cos (2\times0)}{2\cos0\cos(3\times0)}$$ $$A=1+\frac{1}{2}=\frac{3}{2}$$ 2) Method 2: L'Hospital's Rule $\lim_{x\to0}(\tan 3x)=\tan (3\times0)=\tan0=0$ and $\lim_{x\to0}(\sin 2x)=\sin (2\times0)=\sin 0=0$, so this limit is an indeterminate form of $\frac{0}{0}$, favorable to the application of L'Hospital's Rule: $$A=\lim_{x\to0}\frac{\frac{d}{dx}(\tan 3x)}{\frac{d}{dx}(\sin 2x)}$$ $$A=\lim_{x\to0}\frac{3\sec^2(3x)}{2\cos 2x}$$ $$A=\frac{3}{2}\frac{\sec^2(3\times0)}{\cos(2\times0)}$$ $$A=\frac{3}{2}\times\frac{1}{1}=\frac{3}{2}$$
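Both methods give 3/2, and a quick numerical check agrees: the ratio approaches 1.5 as x approaches 0 from either side.

```python
import math

def f(x):
    """The ratio tan(3x) / sin(2x), whose limit as x -> 0 is 3/2."""
    return math.tan(3 * x) / math.sin(2 * x)

# Approach 0 from both sides; the values settle near 1.5.
for x in (1e-2, 1e-4, 1e-6, -1e-6):
    assert abs(f(x) - 1.5) < 1e-3
```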
https://motls.blogspot.com/2006/11/lhc-to-start-trials-in-march.html?m=0?m=1
## Monday, November 20, 2006

### LHC to start trials in March

The LHC will probably be ready for trials by March. Press reports paint the LHC in terms of a competition between America and Europe, which is certainly healthy, although misleading, because the U.S. contributions of all kinds to the LHC are comparable to the European ones. The physicists who spoke to the journalists estimated the time needed to find the Higgs to be as short as 3 months after the moment when the LHC is fully operational. They dedicate some special time to Peter Higgs, who is from Edinburgh.
https://www.lancasterarchery.com/exe-arrow-separator-for-arrow-tube.html
# Exe Arrow Separator for Arrow Tube

Item # 1410244
Catalog Page # 383
$4.99 - In Stock

• The arrow cylinder is a practical and economical solution for transporting arrows
• Unfortunately, especially with fletchings such as Spin Wing mylar vanes, the lack of separation between the arrows can ruin the fletching
• The new EXE arrow divider is an accessory that completes the arrow cylinder, fits the majority of cylinders on the market, and ensures separation between the arrows to protect the vanes
• Supplied with two pairs of foam racks, for wide or thin shafts
• 12 pre-cut positions
http://superskipbins.com.au/early-genesis-awafmt/99bdcd-parent-functions-transformations
Worksheet will open in a new window. 0. This fascinating concept allows us to graph many other types of functions, like square/cube root, exponential and logarithmic functions. This graph is known as the " Parent Function " for parabolas, or quadratic functions. Here is your free content for this lesson! Expected Learning Outcomes The students will be able to: 1) Identify key characteristics of parent functions (constant, linear, absolute value, and quadratic). Transformations and Parent Functions "Compression" (or "expansion"): b This transformation compresses (or expands) the parent function lengthwise (along the x-axis). Algebraic Expressions Worksheet and Activity – Mazing. The functions shown above are called parent functions. •• • •• • • Transformations affect the appearance of CJ parent function. B. Played 21 times. Scroll down the page for more examples and solutions. Check it out! This quiz is incomplete! 0% average accuracy. When given a table of values, interchange the x and y values to find the coordinates of an inverse function. 21 times. Exponential Parent Function. When a function has a transformation applied it can be either vertical (affects the y-values) or horizontal (affects the x-values). Consider the problem f (x) = 2(x + 3) - 1. Textbook HW Pg. Save. Mathematics. Parent Functions and Transformations Reference BookThis reference book was created to use as a review of transformations and the following function families: linear, absolute value, quadratic, cubic, square root, cube root, exponential, logarithmic, and reciprocal Students will graph the both the p Edit. y=mx+b slider tool . A very simple definition for transformations is, whenever a figure is moved from one location to another location, a t ransformation occurs.. Instructor-paced BETA . Learn. There are three types of transformations: translations, reflections, and dilations. Parent Functions with Transformations. 
Inverses of functions Inverse functions are reflected over the y = x line. F b. Write the new equation of the logarithmic function according to the transformations stated, as well as the domain and range. Choose the function that shows the correct transformation of the quadratic function shifted eight units to the left and one unit down. Test. The following figures show the graphs of parent functions: linear, quadratic, cubic, absolute, reciprocal, exponential, logarithmic, square root, sine, cosine, tangent. Homework. Play. by ddemarr1. Parent Functions Function Families Transformations Multiple Transformations Inverses Asymptotes Where do we go from here? The parent function is the simplest function with the defining … Control the … What Are Agents of Socialization? Parent Functions and Transformations Reference BookThis reference book was created to use as a review of transformations and the following function families: linear, absolute value, quadratic, cubic, square root, cube root, exponential, logarithmic, and reciprocal Students will graph the both the p Unit 3 Parent Functions Transformations - Displaying top 8 worksheets found for this concept. This is a horizontal shift of three units to the left from the parent function.. Edit. Parent Functions and Transformations DRAFT. Choose the function that is a "parent function". This quiz is incomplete! 1.2 - Pare  nt  Functions and Transformations. By shifting the graph of these parent functions up and down, right and left and reflecting about the x- and y-axes you can obtain many more graphs and obtain their functions … If a figure is moved from one location another location, we say, it is transformation. I expect that today my students will be able to predict the transformations of the functions given from the Parent Function 3^x by the structure of the equation (MP7). 3 years ago. Write. In this unit, we extend this idea to include transformations of any function whatsoever. 
The cubic parent function, g(x) = x 3, is shown in graph form in this figure. You write cubic functions as f(x) = x 3 and cube-root functions as g(x) = x 1/3 or . After the students predict the transformation of each function, I instruct them to check it with a graphing calculator. There are many different type of graphs encountered in life. A quadratic function moved left 2. Rules For Transformation Of Linear Functions. Created with That Quiz — the math test generation site with resources for other subject areas.That Quiz — the math test generation site with resources for other subject areas. 2 Parent Functions. Q. learn how to shift graphs up, down, left, and right by looking at their equations These include three-dimensional graphs, which are very common. 2) Define, describe, and identify transformations of functions. Save. squeezed) If 0 < b < 1, then the function expands C* *For b, the function is flipped over the y-axis) Compare: f (x) — (absolute value function) g (x) — 14xl … Save. Learn parent functions transformations with free interactive flashcards. Thanksgiving Worksheet for Geometry – Happy Turkey Day! Share practice link. Math explained in easy language, plus puzzles, games, quizzes, worksheets and a forum. mrpbuchanan. Again, the “parent functions” assume that we have the simplest form of the function; in other words, the function either goes through the origin \left( {0,0} \right), or if it doesn’t go through the origin, it isn’t shifted in any way.When a function is shifted, stretched (or compressed), or flipped in any way from its “parent function“, it is said to be transformed, and is a transformation of a function.T-charts are extremely useful tools when dealing with transformations of functions. But here, I want to talk about one of my all-time favorite ways to think about functions, which is as a transformation. This same potential problem is present when working with a sequence of transformations on functions. 
These transformations include horizontal shifts; stretching or compressing, vertically or horizontally; reflecting over the x- or y-axis; and vertical shifts. There are two main types of transformations: rigid transformations, which change the position or orientation of a function, and nonrigid transformations, which distort its shape. The "horizontal shift" controlled by c is an especially useful transformation. One of the most common parent functions is the linear parent function, f(x) = x, but here we focus on the more complicated parent functions. When 0 < b < 1 in y = b^x, we have exponential decay, and the graph falls from left to right. Typical exercises either give a function such as g(x) = 3^x and ask for its domain, range, and transformation, or give a parent function and a description of the transformation and ask for the equation of the transformed function f(x).
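The growth/decay distinction for exponential parents can be confirmed with a quick numeric check; this is a sketch, and the helper name is my own:

```python
# Exponential parent y = b**x: growth when b > 1, decay when 0 < b < 1.

def exp_parent(b, x):
    return b ** x

# b = 2 rises from left to right (growth) ...
assert exp_parent(2, 1) < exp_parent(2, 2)

# ... while b = 1/2 falls from left to right (decay).
assert exp_parent(0.5, 1) > exp_parent(0.5, 2)

# Every exponential parent passes through the point (0, 1).
assert exp_parent(2, 0) == exp_parent(0.5, 0) == 1
```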
A first skill is describing a transformation from its parent function. For example, the quadratic parent shifted eight units to the left and one unit down is ƒ(x) = (x + 8)^2 - 1; a square root function moved right 2 is y = √(x - 2), and one moved left 2 is y = √(x + 2). The named transformations are: 1. horizontal translation; 2. vertical translation; 3. reflection through the x-axis; 4. reflection through the y-axis; 5. horizontal expansions and compressions; 6. vertical expansions and compressions. The parent function is the simplest form of its type of function, so you should know the parent function graphs first. All graphs of quadratic equations start off looking like the parent before they are transformed, which is why vertex-form quadratic equations are graphed by transforming y = x^2.
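The "eight left, one down" example can be verified point by point; a minimal sketch (the function names are illustrative):

```python
# The quadratic parent y = x**2 shifted eight units left and one unit down:
# f(x) = (x + 8)**2 - 1.  The vertex moves from (0, 0) to (-8, -1).

def parent(x):
    return x ** 2

def shifted(x):
    return (x + 8) ** 2 - 1

# The shifted vertex sits at (-8, -1).
assert shifted(-8) == -1

# Every point of the parent reappears 8 units left and 1 unit down.
for x in range(-3, 4):
    assert shifted(x - 8) == parent(x) - 1
```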
The simplest parabola is y = x^2; all other parabolas, and hence all quadratic functions, can be obtained from this graph by one or more transformations. The most basic exponential function is y = b^x, and we call this the parent function of exponential functions; all exponential functions can be derived from it. (In this notation, ^ comes before an exponent: 2^2 is two squared.) The cubic parent function f(x) = x^3 has domain all real numbers and range all real numbers, and the same is true of the cube-root parent f(x) = x^(1/3). A transformation is an alteration to a parent function's graph.
A piecewise function is a function in which more than one formula is used to define the output; each formula applies on its own piece of the domain. Exponential functions are functions that contain the variable in the exponent. The six most common graphs are shown in Figures 1a-1f, and beyond them come three-dimensional graphs, which are very common. A typical description-to-equation exercise: write the equation of a function that has been vertically compressed by a factor of ⅓, shifted 6 units down, and reflected across the x-axis.
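A piecewise function can be written directly as a conditional; this is a minimal sketch, and the split point and formulas are an illustrative choice of mine, not from the lesson:

```python
# A piecewise function applies a different formula on each piece of its
# domain.  Illustrative choice: x**2 for x < 0, and x + 1 otherwise.

def piecewise(x):
    if x < 0:
        return x ** 2    # quadratic piece on the left of the domain
    return x + 1         # linear piece on the right

assert piecewise(-2) == 4    # falls in the quadratic piece
assert piecewise(3) == 4     # falls in the linear piece
assert piecewise(0) == 1     # the boundary belongs to the linear piece
```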
Examples of parent graphs include y = x (a line with slope 1 passing through the origin), y = |x| (a V-graph opening up with vertex at the origin), and y = x^2 (a U-graph opening up with vertex at the origin). Horizontal translations act inside the function's input: y = (x + 3)^2 is a translation left 3 units and y = (x - 5)^2 is a translation right 5 units, while y = x^2 - 1 is a shift down (a vertical translation down) of 1 unit. We can likewise shift, stretch, compress, and reflect the logarithmic parent function y = log_b(x) without loss of its general shape. In short, the transformations to master are translations, reflections, and dilations; when several are combined, the order in which they are applied can matter.
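The left-3 and right-5 translation examples above can be checked numerically; a minimal sketch (function names are illustrative):

```python
# Horizontal translations of the quadratic parent y = x**2:
# y = (x + 3)**2 shifts left 3 units; y = (x - 5)**2 shifts right 5 units.

def parent(x):
    return x ** 2

def left3(x):
    return (x + 3) ** 2   # vertex moves from (0, 0) to (-3, 0)

def right5(x):
    return (x - 5) ** 2   # vertex moves from (0, 0) to (5, 0)

assert left3(-3) == 0 and right5(5) == 0

# Each translated graph reproduces the parent, just relabelled on the x-axis.
assert left3(-1) == parent(2)    # x = -1 is 3 right of the new vertex
assert right5(7) == parent(2)    # x = 7 is 2 right of the new vertex
```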
Example 1: the parent function y = log10(x) has been horizontally stretched by a factor of 5 and then shifted 2 units left; the transformed equation is y = log10((x + 2)/5). Transformations of logarithmic graphs behave similarly to those of other parent functions. Functions in the same family are transformations of their parent function, and all of the functions within a family can be derived from the parent by taking its graph through various transformations.

Copyright © 2020 | MathTeacherCoach.com | Terms and Conditions | Privacy Policy | About | Contact
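Example 1 can be verified by tracking where known points of the parent land; a sketch under the stated transformation (function names are mine):

```python
import math

# Example 1: the parent y = log10(x), horizontally stretched by a factor
# of 5 and then shifted 2 units left, becomes y = log10((x + 2) / 5).

def parent(x):
    return math.log10(x)

def transformed(x):
    return math.log10((x + 2) / 5)

# The parent's x-intercept (1, 0) stretches to (5, 0), then shifts to (3, 0).
assert transformed(3) == 0.0

# In general, any parent point (x, y) reappears at (5*x - 2, y).
x = 7.0
assert math.isclose(transformed(5 * x - 2), parent(x))
```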
Students should be able to determine the parent function of a given function or graph, and, in the other direction, to identify the equation of a transformed graph. Cube-root functions are related to cubic functions in the same way that square-root functions are related to quadratic functions. A question such as "Which description does not accurately describe the transformations of f(x) = ⅔(x - 7)^2 from its parent?" tests whether you can separate the rigid moves (here, a horizontal shift right 7 units) from the nonrigid one (a vertical compression by a factor of ⅔); rigid transformations change the position or orientation of a graph, never its size and shape. When b > 1 in y = b^x, we have exponential growth, and the graph rises from left to right. Just like transformations in geometry, we can move and resize the graphs of functions, starting from any function at all, for example f(x) = x^2.
In general, a parent function is the most basic function of a group, or family, of functions, and a family of functions is a group of functions whose graphs display one or more similar characteristics. The following rules give the basic translations of any function f:

y = f(x) + C moves the graph up C units.
y = f(x) - C moves the graph down C units.
y = f(x + C) shifts the graph left C units.
y = f(x - C) shifts the graph right C units.

In other words, a horizontal translation is made by adding or subtracting a number inside the parentheses with the x. The linear parent f(x) = x has a slope of 1 and passes through the origin (0, 0); the absolute-value parent also passes through the origin, and its graph is contained in Quadrants I and II.
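These shift rules can be sanity-checked against any parent function; a sketch using the absolute-value parent (the choice of parent and of C is illustrative):

```python
# The basic translation rules, checked against f(x) = |x|:
#   y = f(x) + C  moves the graph up C units
#   y = f(x) - C  moves it down C units
#   y = f(x + C)  shifts it left C units
#   y = f(x - C)  shifts it right C units

def f(x):
    return abs(x)

C = 4
assert f(0) + C == 4      # vertex (0, 0) moves up to (0, 4)
assert f(0) - C == -4     # ... or down to (0, -4)
assert f(-4 + C) == 0     # y = f(x + C): the vertex is now at (-4, 0)
assert f(4 - C) == 0      # y = f(x - C): the vertex is now at (4, 0)
```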
Translations and reflections are rigid transformations; dilations are not. A reflection is a rigid transformation which produces a mirror image of the graph of a function with respect to a specific line, and noting that a cube-root function is odd is important when reflecting its graph. The parent of the linear family is f(x) = x, a straight line. Typical exercise prompts read like "absolute value: vertical shift up 5, horizontal shift right 3," which describes f(x) = |x - 3| + 5, or "radical: vertical compression by 2."
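The oddness of the cube-root function, g(-x) = -g(x), can be checked directly; a minimal sketch (the sign handling is needed because Python's `**` does not take real cube roots of negatives):

```python
import math

# A cube-root function is odd: g(-x) = -g(x), so its graph has
# 180-degree rotational symmetry about the origin.

def cube_root(x):
    # x ** (1/3) fails to give a real result for negative x in Python,
    # so peel off the sign explicitly.
    return -((-x) ** (1 / 3)) if x < 0 else x ** (1 / 3)

for x in (0.125, 1.0, 8.0):
    assert cube_root(-x) == -cube_root(x)   # odd symmetry, exact by construction

assert math.isclose(cube_root(8.0), 2.0)    # sanity check: cbrt(8) = 2
```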
As a closing worked example, consider f(x) = 2(x + 3) - 1. The multiplication by 2 indicates a vertical stretch of 2, which causes the line to rise twice as fast as the parent f(x) = x; the + 3 inside the parentheses shifts the graph left 3 units; and the - 1 shifts it down 1 unit. Finally, to find the coordinates of an inverse function given a table of values, interchange the x- and y-values; the graph of the inverse is the reflection of the original over the line y = x.
-I+Ĥ,۾' 02H.~{=ϕ^t\yڙ9'ܚA\'֬ x縵@qnXd#vӌ>PEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPZږ>fR~g@PEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEP endstream endobj 6 0 obj <> stream x o۸@Uߦ e[InEdSH 9$ۑ ,eǸ 3Mpv|Zc\#q# \8 P/8 G:~p8UE["Qxpp % K'3 ; < ?;q?x9})6BPxsȏ  'Psnnq6 sT߁ M&Wqop @pO(\νCTa9K)'# N׭:&ଈakpc2$N(y˃>PwRU^r|ÕnG-}?tF~aZC(UFh[\wo{e;6H\;[Vngyq8T1.kG1m2¿>^ܩ7lIxPv<ǟMz/;ރŏe9<6ߎGQ̌٩; qZ~3\|KYr,gW 1w? ~) ׂV(|7Tn8*јg| v|SSQp#, ãsj}!Mol_h.ilMնײ[rU$_f Am[Qxێ)|;G0B_{&m^ӅTG;S%.WmC T. /X᪊} n'K o:xmPC8}_8w8[]^8icg1tkPJ_U8x-pS. ~q a T )*>_o>(;8.}UF|hBI͠B;_w9[uu )+TiH ?o0LIcGs_췺9/,[¥ 3D8|y Rk)9<ۿj_»z=5>~£dK /OY[oۗ C+RUxji͘½Sk+MtR✨}9 e  [cO  h 7Nȕby /f*PO. %|&)o*+}K-t׮)\5^Hk*&(QNON~q)sq8[ 5^k|}+\嗝pp;0 {^xCA?g_ўUi^G(QYIOgjՕ9 Kjb>AuZxS:Uo L%/GO,LT}W&rhp]mbm5GY_a9޴I6 ?Np\xW "kwoO n*\;\Uzh}N#$taݛUeYJ?X}U3ϢoAEU3_G𡢽|lR&9ۛ›ܳ3ʱbRFK&gU W pfცW?$eh⵭r³tw2bۛw,y jNRxB[WM*LH"XES>;.ڜnG I7'/JnZӧ)b:oߺ{ mmg&Ԏ\BJhZqɤ 0=. *ܙåOiUf _B W+(+3>ٶ~=O;U6n ߰4Q?·onr2\kezw7cC Wϗ Nt.P{P^FIJs;Zs.O2Klf6qe(9l3a1QTAr\ݷg( \U _> zE >C臻XiUZxT ;Qxi;Re ~n-\;\d*,̇IsUD$e*iqZPy;8oma-wg)JKΐrQt_ڟp,LZ;4GDә㙷1DŅ*[dA}0 ߚ-3}k*3KәŤTQ)9DV͞Tә:9:F;< /TcpVnXcp^b+bk(2p… RTBZ+t wSΙf /px|=-lI prD=Oz$2{N/𩕎}e*H ^J9A ^SzM'۶ßD<Ж?)SГp+9ng:%n4 Lwvf%wcR4ߐ§:;A6]VdFY9~ k* L"+ahe ,9v. 
>uA;hg ,13vk +++*l&t Žj?8E|t7*ؗ[5«}#H{ç+GWxh۴6O^Vxh 7'‹hjhq(|~'*mQ hpYoM_ V_}z]Zpsvf= %4 g $,'U!Pp\5׸}kZw^[78UODGHSc;R8ebv Wkܿ3;f(P0XUӏ;UxI\' QVq}F>s*޴'Hxm;RD~_H. d֓n1T<[83al,p,]k;{z݈h3"o>UjU{.{xxe Cze\)R[ GSfw".<: 'oO3,0Wx wB3/#wE0Zz}Ag+\L ;p15c3hquSnZIr,m9t+\ {9 ך]17@]wxy}źGwֽC䃬c)\W1*p~Ųm| K_'*<Ċk+>1xILHy:<>a(z|~}F|x1< Ɠ?Es,T60z[mc&-y|0˅UnŸqM\0\X3NsVJ>Zȗp)q,?u7Tv= b3ldl!4Ц0ڗڏ(\B=+ݒŽ-Qߕ5+@88׮虩UTgM+|!)BᷡZ^I ZvWâSxOh_ߠ8CT(ȟd'j򅔑U*L9EUbUA;#wcő#RU1ߦ 7OՋk*NiP*=(6p\}'ݛ/)t#0#Rb,aNkPx Zᗴ-!rEɈCcUO>Er6xcJ<σOG4t%D|$y@D:|| gd n+ܛ |P8(+e W( +D  6_MIz-ri #. endstream endobj 7 0 obj <> endobj 8 0 obj <> endobj 9 0 obj <> endobj 10 0 obj <> endobj 11 0 obj <> stream JFIFC     $.' ",#(7),01444'9=82<.342C  2!!22222222222222222222222222222222222222222222222222" }!1AQa"q2#BR$3br %&'()*456789:CDEFGHIJSTUVWXYZcdefghijstuvwxyz  w!1AQaq"2B #3Rbr $4%&'()*56789:CDEFGHIJSTUVWXYZcdefghijstuvwxyz ?( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( 
( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( JּGx~M?uI7Mp^$HZ(IW d ~xW,sUާ<j7vd>) v2 (xm qF:*(P?RT >ǯV ?cRO¨G נVl' 7u1?->Ƌ$Uݛ tsW{U+uk $aMћ* tMQl;y !}l? 
1ڵ +!/ɡk&osGEP;??= ^%fӮ J |uǸwwqi⮗ w h+qNZmu;OCou=ȫz}4/Һ?N|?^1*(lj: SB+fsګofSt~a*ӓ]q ?'^-dU_ |mIk4HGc~5otO xk +M q<[X n{Wxgv^)mk9L}}k=gǾ(FMV"aI{w^SJ׏?3IlzG2M3qƥ( I5WuSuX5+@Tu+BFF1WD r k}+NGo v'4-n> *9>ZC p*6ׁ$_Lu/pSXW'~/ֽ~[[X&BF!(r!;-gMPux?~v5?_hѳog27pK&/2e;B'nN<]%'Z~пJj)\|xwOfOc~0qtQHiX(QEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQE%xu^9:Tt75ӑg8vW#Ol ³I5U".nX~TW"ZcmiYZD#@:Vh,(((((((((++]#4ׁ'oZJN.,IJH-<]?ºJGm&[떫X,;WtͼsFr|M\Kyr"G#9s|SJn~+'C_Lu?5k"8$]CZ8/ xU4]Xl8D⶿+B5/ הʺM4< UiY<f\y?~.k]q ?'JCVP#6Kw~j-fOm8 < " ^0"x@6uߦ'\duF iPѥ ,K$n+) Ufo kS|>$t0&}+><{$J/?iIK9j5zy_\$-F+r2?^:RcQhPq^-:S{/us /={Vo7ƭwQ.j9t8!.ITd7t;?"B?+B-/Uq8 ?-~8_4__xnծ̇U'f-^N]p)QLD݂6yⶁE(Գ5MXx[HVzssɼӬf{ ٿ^ðڽ3HѴ ,d9orzZ0^W;.Gw#<ugdG@boP^3G<^nH_##K^Skψ2Y^ }=GjPAzTJ<ҝNu:(4 ( ( ( ( (+^{i-rE*}—is7-]=rf~T?*~qhGߌת:)-vEP^-p:5)&xXa0?Phm%oů 4FDHAm#a0ksg)7Ic@O6 Fiw.Roo5 G/mso1R2ApkFE'U{_-l(فc(v7zx)FıƊ 0Hp<'cY4JЏq?5 Wž(M,fe۷+ Wi&GHg֝ٮ&_0oKe ۳YoP( f|!oGFu1|?A}+Y<)j2-Kry5NLu{+kQibW zr(OPkK#_ M4/&I?ѿa?&(=*G]m?PKi'F;WkT!Ѵyh48StTP(QEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEW|ukcu8j++IN9O^j&u\@='Lj.5]~IrI
2021-03-02 22:25:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9073876142501831, "perplexity": 594.8690984841136}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178364932.30/warc/CC-MAIN-20210302221633-20210303011633-00511.warc.gz"}
https://phys.au.dk/intercat/news-and-events/show/artikel/open-phd-positions
# Open PhD positions at Aarhus University Title: Understanding the structure and reactivity of interstellar ices with machine learning A position has opened for a PhD fellowship/scholarship at Aarhus University, Denmark, within the Physics programme.
2022-05-28 04:58:46
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8155505657196045, "perplexity": 7177.966389775233}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663012542.85/warc/CC-MAIN-20220528031224-20220528061224-00026.warc.gz"}
https://www.physicsforums.com/threads/phasor-contributions.960833/
# I Phasor contributions 1. Nov 25, 2018 ### MrsTesla Hello, In my lecture notes for Wave Physics, I have that phasor addition can be represented as a geometric progression. This is what was said in the lecture (see attachment). Can anyone explain to me why the mathematical contributions are like that? (aka 5.22 in the attachment) I've been trying to understand but I really don't get it. Last edited: Nov 25, 2018 2. Nov 25, 2018 ### sophiecentaur The equation just expresses the contributions of each of the 'rays', using complex exponential notation. You can do the calculation using just sin or cos but the mechanics are not as elegant and don't deliver that smart answer. The terms can be expressed in terms of a geometrical progression because they contain powers of a common term. One needs to get used to the way Mathematicians often re-write expressions with different variables to reveal the patterns involved. The sum of a series like that is basic algebra. 3. Nov 26, 2018 ### Tom.G Or in finer detail: • a0 is the energy in the incoming light ray • α is the proportion of the light reflected back to the top of the oil film • Therefore αa0 is the energy in the first reflected ray • The second ray has already been reflected once, so it starts with only the power from the first reflection, or α⋅α⋅a0, which is α²a0 • And this sequence continues for the subsequent rays The e^{iφ}, e^{2iφ}, ... factors indicate a phase shift at each reflection. e^{iφ} is an alternate way of representing an angle using imaginary numbers. If drawing a graph using x-y co-ordinates, the 'y' axis is replaced with the 'i' axis, with 'i' being √-1. Hope this helps. Cheers, Tom
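Tom.G's breakdown can be sketched in a few lines of Python: each reflected ray contributes a phasor of the form a0·(α·e^{iφ})^n, and adding them term by term agrees with the closed-form geometric-series sum. The values of a0, α, φ and the number of terms below are made-up illustrative numbers, not ones taken from the attachment.

```python
import cmath

def phasor_sum_direct(a0, alpha, phi, n_terms):
    """Add the reflected-ray phasors a0 * (alpha * e^{i*phi})^n one at a time."""
    return sum(a0 * (alpha * cmath.exp(1j * phi)) ** n for n in range(n_terms))

def phasor_sum_closed(a0, alpha, phi, n_terms):
    """Same sum via the geometric-series formula a0 * (1 - r^N) / (1 - r)."""
    r = alpha * cmath.exp(1j * phi)
    return a0 * (1 - r ** n_terms) / (1 - r)

direct = phasor_sum_direct(1.0, 0.3, 0.8, 20)
closed = phasor_sum_closed(1.0, 0.3, 0.8, 20)
print(abs(direct - closed))  # agrees to floating-point precision
```

This is exactly the "powers of a common term" point in post #2: the common ratio is r = α·e^{iφ}, so basic geometric-series algebra gives the compact answer.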
2018-12-14 03:32:52
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8432537317276001, "perplexity": 803.0432529998561}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825349.51/warc/CC-MAIN-20181214022947-20181214044447-00207.warc.gz"}
https://puzzling.stackexchange.com/questions/11943/in-what-field-does-%CF%80%C2%B2-2
# In what field does π²=2? In what field does $\pi^2=2$, where $π$ is the ratio of the circumference of a circle to its diameter? Remember to think outside the box and that I am looking for a complete answer. Hint: The above is to be taken more literally than figuratively and vice versa respectively. • When you set $\pi = \sqrt{2}$, I'm guessing. – Joe Z. Apr 13 '15 at 6:39 • If by π you mean 3.14159... then the answer is: never. Outside of the box, there are plenty of answers. It looks like a "guess what I am thinking" riddle. – Florian F Apr 13 '15 at 7:37 • @FlorianF, by $\pi$ I mean the ratio of the circumference of a circle to its diameter. I have restructured it slightly to hopefully avoid looking like a "guess what I am thinking" riddle. – Joel Bosveld Apr 13 '15 at 8:23 • I think maybe we should look at the π symbol as a word 'pie'. Piesquared is a pizza place in Canada I think, but I don't know why it equals two. – Zikato Apr 13 '15 at 8:55 • Any chance of a hint or solution for this? Have any of the answerers below got the correct answer? – Rand al'Thor Apr 26 '15 at 15:33 The field is cooking. $\pi^2$ milliliters = 2 teaspoons The field spherical geometry. For a small circle of radius subtending angle $\theta = 2.010311...$ radians at the centre of the sphere, the ratio between the circumference and the diameter measured on the surface of the sphere is $\sqrt2$. In this precise case we could say $\pi^2=2$. • Given that a teaspoon is used only in cooking, an error of 0.12% is more than acceptable. I.e 2 teaspoons +/- 1% are still 2 teaspoons. – Florian F Apr 13 '15 at 10:49 • @FlorianF : You don't answer the question - the 'field' which is asked for is cooking. You gave an explanation, but not the answer. – Tim Couwelier Apr 13 '15 at 11:06 • Given that US measurements are completely ridiculous, I'm willing to believe this equation. Adopt the metric system, already! – Ian MacDonald Apr 13 '15 at 11:22 • @TimCouwelier. I agree. 
I updated the answer accordingly. – Florian F Apr 13 '15 at 14:26 • Hmm... we could use a unit of measurement based originally on an incorrect estimation of the earth's circumference and later an arbitrary length of a piece of platinum, or a semi-arbitrary distance that happens to be (within 2%) based on the distance light travels in a vacuum in a nanosecond. – Foon Apr 14 '15 at 14:07 Mathematically, I think the answer is $(\mathbb{Q}(\pi)/(\pi^2-2))_2$, where this denotes the 2-adic completion, i.e. the completion w.r.t. the 2-adic norm $||.||_2$, of the number field $\mathbb{Q}(\pi)/(\pi^2-2)$ - which is isomorphic to $\mathbb{Q}(\sqrt{2})$ as a number field, but has $\pi$ identified with $\sqrt{2}$. This is a complete field (the question asks "in what field does..." and "I am looking for a complete answer") in which $\pi^2=2$. My first idea was simply $\mathbb{Q}(\sqrt{2})_2$, where we use the Greek letter $\pi$ to denote the number $\sqrt{2}$. This isn't as silly as it looks, since the notation $\pi$ is often used in algebraic number theory for elements of $p$-adic completions of number fields rather than for $3.14159265358979...$. But I amended it as suggested by @Meelo since I think it makes slightly more sense with the actual number $\pi=3.14159265358979...$ identified with $\sqrt{2}$ via quotienting. • But Joel specified that he meant $\pi$ as the ratio of circumference/diameter in a circle. – Florian F Apr 13 '15 at 10:52 • @FlorianF Hmm, good point. I'll have to think of some way of explaining that, maybe using a nonstandard definition of "circle" in a number field. – Rand al'Thor Apr 13 '15 at 11:02 • Consider a circle constructed by drawing straight lines among four points...... ;) – Ian MacDonald Apr 13 '15 at 22:05 • $\mathbb Q[\pi]/(\pi^2-2)$ is probably more correct. It's isomorphic, as a field, but more emphasizes the $\pi$. (Of course, one might ask: is it cheating to quotient out the relation you want?)
– Milo Brandt Apr 13 '15 at 23:06 • @Meelo Thanks - good idea! I've updated my answer accordingly. – Rand al'Thor Apr 13 '15 at 23:18 Here's an answer that goes in and out of figuration, becoming more figurative than literal and then more literal than figurative, stimulated by the hint. The answer we arrrive at is that the field is ancient Rome, or anywhere else that people have used Roman numerals. The working is as follows. We want $\pi^2$, i.e. the result of taking $\pi$, the ratio between the circumference and diameter of a circle, and getting it to operate on itself to give us a square. Alternatively, to square something is to make a square out of it. OK so let's get figurative. How do we make a square out of the usual figure for $\pi$? Easy. Start by making a copy and turning it round: ${}$ . Then stick the rotated copy onto the bottom: We get We have now made a square. It's got bits sticking out of it (outside of the box), but we've still made a square by getting $\pi$ to operate on itself. So we've squared $\pi$. Now Go back to being literal. What have we got? The Roman numeral for the number 2. So we've squared $\pi$ and got 2. Note: I realise the usage of literal here is questionable. Another weakness is that the meaning of $\pi$, which the setter stresses, doesn't get a look-in. Nonetheless, the train of thought goes from meaning to figure to figure to meaning, which fits nicely with the hint and works as a way of getting $\pi^2$ to equal 2. I like commenter Lopsy's suggestion of this which seems to be a circle in Taxicab geometry AKA $L^1$ space. In this case however, $\pi = 2^2$, not $\pi^2 = 2$. It's possible that the OP made a mistake. (Lopsy, you can see that $\pi=4$ not $2$ or $2\sqrt{2}$ because the length of each diagonal side is equal to $r+r$) I tried to find a value of $p$ that made $\pi = \sqrt{2}$ in $L^p$ space but found $\pi$ was minimized at $p=2$ with the usual value of $\approx 3.1416$. 
It is equal to $4$ at $p=1$ and $p=\infty$ and diverges to $\infty$ as $p \to 0$. I don't think a circle is well-defined for $p \leq 0$. • I don't think that answering with the hope of a mistake in the OP is the correct way of solving a puzzle... – leoll2 Apr 14 '15 at 18:45 • Haha I agree it's a long shot. – Hugh Allen Apr 15 '15 at 0:34 The Zeta function: $ζ(2) = π^2/6$. But then you want to know the field; the Riemann zeta function is used in quantum theory, so that's your field. EDIT stupid me, I went to look for a reference to Wikipedia for the answer and found out I was almost right: I forgot I had to divide it by 6. BONUS (unrelated to question): the Quantum theory often makes odd statements which turn out to be true for example: $1 + 2 + 3 + 4 + 5 + .... = -1/12$ (so counting up all integers to infinity equals minus 1/12th) Link for explanation DISCLAIMER As enforced by the police I have to add this is only true when used in several techniques called analytic continuation and Ramanujan sums. So stay in school kids, else the police will find you and they will correct you! • I am a member of the math police, whenever someone says that 1 + 2 + 3 + 4 + 5 + .... = -1/12 and links the shitty numberphile video I am contractually obliged to say this. "The series 1+2+3+4+... does not equal -1/12. It goes to infinity, as you would expect. BUT there are super interesting techniques, called analytic continuation and Ramanujan sums, which let you assign the value -1/12 to the series 1+2+3+4+..., and this is even useful in some crazy situations. But the series does not equal -1/12, in any ordinary sense of the word equal." – Lopsy Apr 13 '15 at 14:55 • Sorry you are right but it's always such a great party trick :P sometimes one just has to bend the rules and portray something in a way that will make it just that bit more interesting :p – Vincent Apr 13 '15 at 14:59 • Yup, I totally understand. It's a super cool fact!
I've just seen a few people who hear it and then get confused when they actually learn about series, or worse, take it as evidence that math is some kind of cult that they'll never understand :( – Lopsy Apr 13 '15 at 15:03 • Added a disclaimer, also shitty numberphile video? c'mon numberphile is fun, thanks to them I know math can be fun, sometimes. – Vincent Apr 13 '15 at 15:11 • @Lopsy: If I was the math police, I wouldn't say the series goes to infinity. I might say that the sum of the series is infinity, or that the sequence of partial sums converges to infinity, but if I was being very precise, I would clarify that "sum" means to apply the operation you learn in calc II. (especially in a context where other summation operators are of interest) – user1502 Apr 13 '15 at 17:11 The answer is Cooking, where the pie has a piece cut out... Not a full circle, so $\pi^2$ = 2 • Can you expand a little on why you think this answer matches the question? I don't understand your answer as it is now. – Rand al'Thor Apr 14 '15 at 20:18 • Mostly being silly/thinking outside the box. π is the circumference of a circle. If the circle is an actual pie, and you took a few slices out, it wouldn't be a full circle, or a full pie, so not a full π either. – AndyD273 Apr 14 '15 at 22:19 • @AndyD273 $\pi$ is half a circle. Or more precisely an angle of $180^{\circ}$ or half a circumference of a circle with radius 1. Good guess though. Whenever I see $\pi$ I think of pie, but couldn't work out why squaring one would give you 2. – Bob Apr 14 '15 at 23:38 Likely wrong, but I wanted to add a different angle to the question, i.e. move a bit further outside the box. Could be the touch-field of a pocket-calculator or other technical device where touching the "Pi" key twice gives you 2? (Haven't found such a calculator though, yet.) In a strong gravitational field. Or any other similar non-euclidean geometry that is sufficiently warped.
This video explains it with a nice visual demonstration using strechy fabric. Imagine a circular trampoline with a weight at the centre. The edge of the trampoline remains fixed but as the weight increases the surface becomes more curved. The distance in straight line following the trampolines surface from the edge to the centre (its radius) becomes greater. When it is finally curved just the right amount the the ratio of the circumference to the diameter will reach $\sqrt2$ • Would you mind explaining a bit more how this would make $\pi^2 = 2$? – Tryth Apr 15 '15 at 8:39 • I have to admit that my practical knowledge of geometry is very much euclidean so I can't calculate for you how much 2d space needs to be warped to reach $\pi^2=2$ – Bob Apr 15 '15 at 9:36 In physics, we sometimes use $\pi$ to mean the permutation operator that swaps two particles. If you swap two particles and then swap them again, you get back to the same state as before (...usually). Therefore, $\pi^2 = 1$. • OP definitely stated: "where is the ratio of the circumference of a circle to its diameter?" though your answer is quite interesting it's not a correct answer to the question. – Vincent Apr 14 '15 at 6:43 The simplest answer would seem to be "boxing", if we consider a ring to be synonymous with a circle. If the diameter of the square is its diagonal and the circumference is its perimeter then the square of their ratio is 2.
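The spherical-geometry answer above can be checked numerically. On a unit sphere, a circle whose geodesic radius subtends angle θ at the centre has circumference 2π·sin θ and on-surface diameter 2θ, so the surface ratio is π·sin θ/θ; a simple bisection (a sanity check added here, not part of the original thread) recovers the quoted θ ≈ 2.010311 at which that ratio is √2, making "π squared" equal 2 on that sphere.

```python
import math

def surface_pi(theta):
    # Circle on a unit sphere whose geodesic radius subtends angle theta at
    # the centre: circumference = 2*pi*sin(theta), on-surface diameter = 2*theta.
    return math.pi * math.sin(theta) / theta

# surface_pi is decreasing on [1.5, 2.5], so bisect for the theta where the
# circumference/diameter ratio measured on the surface equals sqrt(2).
lo, hi = 1.5, 2.5
for _ in range(60):
    mid = (lo + hi) / 2
    if surface_pi(mid) > math.sqrt(2):
        lo = mid
    else:
        hi = mid
theta = (lo + hi) / 2
print(theta)  # close to the 2.010311 quoted in the answer
```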
2020-01-23 23:44:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8208984732627869, "perplexity": 625.7624589865466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250614086.44/warc/CC-MAIN-20200123221108-20200124010108-00046.warc.gz"}
https://www.physicsforums.com/threads/work-done-by-charge-in-electric-field-on-the-xyz-plane.835348/
# Work done by charge in electric field on the xyz plane Tags: 1. Sep 30, 2015 ### PhysicsQuest1 1. The problem statement, all variables and given/known data There is a uniform electric field, E = 270 N/C, parallel to the xz plane, making an angle of 32° with the positive z axis and an angle of 58° with the positive x axis. A particle with charge q = 0.475 C is moved from the point (xi = 2.0 cm, yi = 0, zi = 0) to the point (xf = 8.5 cm, yf = 6.0 cm, zf = -5.5 cm). How much work is done by the electric field? 2. Relevant equations W = qE.d 3. The attempt at a solution The only change in distance I need is parallel to the electric field in the x-z plane. I have no idea how to find the distance. This is what I tried. d = √((0.055 m)² + (0.065 m)²) ≈ 0.085 m Then I believe it's a matter of substituting this into the above equation with the angle which I also don't know how to find. Some guidance would be appreciated thanks. 2. Sep 30, 2015 ### Geofleur How about expressing the displacement and the field vectors in terms of their components? Remember, you can also write $\mathbf{A}\cdot\mathbf{B} = A_xB_x + A_yB_y + A_zB_z$. 3. Sep 30, 2015 ### PhysicsQuest1 Could you tell me if this is right? E = 270 cos 58° i + 270 cos 32° k = 143.08 i + 0 j + 228.97 k If that's my electric field I basically subtract the distance points to get (d) and put it in W = qE.d to get W. Then I can square the values, add them and square root it to get the magnitude of W? 4. Sep 30, 2015 ### Geofleur Your components for $\mathbf{E}$ look good, but note that $W$ is a scalar, not a vector. $W = \Delta \mathbf{r} \cdot q\mathbf{E}$, and the dot product always yields a scalar. 5. Sep 30, 2015 ### PhysicsQuest1 So that means, since my $\Delta \mathbf{r}$ = (0.065 i + 0.06 j - 0.055 k), therefore W = | (0.065 i + 0.06 j - 0.055 k) $\cdot$ 0.475 (143.08 i + 0 j + 228.97 k)| = |4.4176 - 5.9818| = 1.56 J Am I correct here? (btw Thanks for your quick responses) 6.
Sep 30, 2015 ### Geofleur Why take the absolute value? Nothing says that the work has to be positive. 7. Sep 30, 2015 ### PhysicsQuest1 Oh yeah good point. So W = -1.56 J. I guess I did it right? 8. Sep 30, 2015 ### Geofleur As a check, you could use the fact that $\mathbf{E}\cdot\Delta\mathbf{r} = E\Delta r\cos\theta$ to get the angle between $\mathbf{E}$ and $\Delta \mathbf{r}$. If the angle is more than 90 degrees, you should expect the work to be negative for a positive charge, like in this case. If you throw a rock upward, then while the rock is traveling upward, gravity is performing negative work on it. Same goes for a positive charge moving against the electric field. 9. Sep 30, 2015
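The components-and-dot-product approach suggested in the thread can be reproduced in a few lines, using the numbers quoted in the posts:

```python
import math

q = 0.475                      # charge in C
E_mag = 270.0                  # field magnitude in N/C
# The field lies in the xz-plane: 58 degrees from +x, 32 degrees from +z.
E = (E_mag * math.cos(math.radians(58)), 0.0, E_mag * math.cos(math.radians(32)))
# Displacement from (0.020, 0, 0) m to (0.085, 0.060, -0.055) m.
dr = (0.085 - 0.020, 0.060 - 0.0, -0.055 - 0.0)

W = q * sum(e_i * d_i for e_i, d_i in zip(E, dr))  # W = q * (E . dr)
print(round(W, 2))  # -1.56 (joules), the sign-corrected value from post #7
```

The y-component of the field is zero, so only the x and z parts of the displacement contribute, and the negative z-term dominates, which is why the work comes out negative.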
2017-08-21 15:22:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7389017343521118, "perplexity": 820.8761774686338}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886108709.89/warc/CC-MAIN-20170821133645-20170821153645-00127.warc.gz"}
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/1224/1/bu/b/
# Properties Label 1224.1.bu.b Level $1224$ Weight $1$ Character orbit 1224.bu Analytic conductor $0.611$ Analytic rank $0$ Dimension $4$ Projective image $D_{8}$ RM discriminant 8 Inner twists $4$ # Related objects ## Newspace parameters Level: $$N$$ $$=$$ $$1224 = 2^{3} \cdot 3^{2} \cdot 17$$ Weight: $$k$$ $$=$$ $$1$$ Character orbit: $$[\chi]$$ $$=$$ 1224.bu (of order $$8$$, degree $$4$$, minimal) ## Newform invariants Self dual: no Analytic conductor: $$0.610855575463$$ Analytic rank: $$0$$ Dimension: $$4$$ Coefficient field: $$\Q(\zeta_{8})$$ Defining polynomial: $$x^{4} + 1$$ Coefficient ring: $$\Z[a_1, a_2]$$ Coefficient ring index: $$1$$ Twist minimal: yes Projective image $$D_{8}$$ Projective field Galois closure of 8.0.153158089019904.1 ## $q$-expansion The $$q$$-expansion and trace form are shown below. $$f(q)$$ $$=$$ $$q + \zeta_{8} q^{2} + \zeta_{8}^{2} q^{4} + ( 1 - \zeta_{8}^{3} ) q^{7} + \zeta_{8}^{3} q^{8} +O(q^{10})$$ $$q + \zeta_{8} q^{2} + \zeta_{8}^{2} q^{4} + ( 1 - \zeta_{8}^{3} ) q^{7} + \zeta_{8}^{3} q^{8} + ( 1 + \zeta_{8} ) q^{14} - q^{16} -\zeta_{8}^{2} q^{17} + ( -\zeta_{8} + \zeta_{8}^{2} ) q^{23} -\zeta_{8} q^{25} + ( \zeta_{8} + \zeta_{8}^{2} ) q^{28} + ( \zeta_{8}^{2} + \zeta_{8}^{3} ) q^{31} -\zeta_{8} q^{32} -\zeta_{8}^{3} q^{34} + ( \zeta_{8} + \zeta_{8}^{2} ) q^{41} + ( -\zeta_{8}^{2} + \zeta_{8}^{3} ) q^{46} + ( -\zeta_{8} + \zeta_{8}^{3} ) q^{47} + ( 1 - \zeta_{8}^{2} - \zeta_{8}^{3} ) q^{49} -\zeta_{8}^{2} q^{50} + ( \zeta_{8}^{2} + \zeta_{8}^{3} ) q^{56} + ( -1 + \zeta_{8}^{3} ) q^{62} -\zeta_{8}^{2} q^{64} + q^{68} + ( -1 - \zeta_{8} ) q^{71} + ( -\zeta_{8}^{2} + \zeta_{8}^{3} ) q^{73} + ( -1 - \zeta_{8}^{3} ) q^{79} + ( \zeta_{8}^{2} + \zeta_{8}^{3} ) q^{82} + ( \zeta_{8} - \zeta_{8}^{3} ) q^{89} + ( -1 - \zeta_{8}^{3} ) q^{92} + ( -1 - \zeta_{8}^{2} ) q^{94} + ( -\zeta_{8}^{2} + \zeta_{8}^{3} ) q^{97} + ( 1 + \zeta_{8} - \zeta_{8}^{3} ) q^{98} +O(q^{100})$$ $$\operatorname{Tr}(f)(q)$$ $$=$$ $$4q + 4q^{7} + O(q^{10})$$ $$4q 
+ 4q^{7} + 4q^{14} - 4q^{16} + 4q^{49} - 4q^{62} + 4q^{68} - 4q^{71} - 4q^{79} - 4q^{92} - 4q^{94} + 4q^{98} + O(q^{100})$$ ## Character values We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/1224\mathbb{Z}\right)^\times$$. $$n$$ $$137$$ $$613$$ $$649$$ $$919$$ $$\chi(n)$$ $$-1$$ $$-1$$ $$-\zeta_{8}$$ $$1$$ ## Embeddings For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below. For more information on an embedded modular form you can click on its label. Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$ 53.1 −0.707107 + 0.707107i −0.707107 − 0.707107i 0.707107 + 0.707107i 0.707107 − 0.707107i −0.707107 + 0.707107i 0 1.00000i 0 0 0.292893 0.707107i 0.707107 + 0.707107i 0 0 485.1 −0.707107 0.707107i 0 1.00000i 0 0 0.292893 + 0.707107i 0.707107 0.707107i 0 0 773.1 0.707107 + 0.707107i 0 1.00000i 0 0 1.70711 0.707107i −0.707107 + 0.707107i 0 0 1205.1 0.707107 0.707107i 0 1.00000i 0 0 1.70711 + 0.707107i −0.707107 0.707107i 0 0 $$n$$: e.g. 
2-40 or 990-1000 Significant digits: Format: Complex embeddings Normalized embeddings Satake parameters Satake angles ## Inner twists Char Parity Ord Mult Type 1.a even 1 1 trivial 8.b even 2 1 RM by $$\Q(\sqrt{2})$$ 51.g odd 8 1 inner 408.be odd 8 1 inner ## Twists By twisting character orbit Char Parity Ord Mult Type Twist Min Dim 1.a even 1 1 trivial 1224.1.bu.b yes 4 3.b odd 2 1 1224.1.bu.a 4 8.b even 2 1 RM 1224.1.bu.b yes 4 17.d even 8 1 1224.1.bu.a 4 24.h odd 2 1 1224.1.bu.a 4 51.g odd 8 1 inner 1224.1.bu.b yes 4 136.o even 8 1 1224.1.bu.a 4 408.be odd 8 1 inner 1224.1.bu.b yes 4 By twisted newform orbit Twist Min Dim Char Parity Ord Mult Type 1224.1.bu.a 4 3.b odd 2 1 1224.1.bu.a 4 17.d even 8 1 1224.1.bu.a 4 24.h odd 2 1 1224.1.bu.a 4 136.o even 8 1 1224.1.bu.b yes 4 1.a even 1 1 trivial 1224.1.bu.b yes 4 8.b even 2 1 RM 1224.1.bu.b yes 4 51.g odd 8 1 inner 1224.1.bu.b yes 4 408.be odd 8 1 inner ## Hecke kernels This newform subspace can be constructed as the kernel of the linear operator $$T_{23}^{4} + 2 T_{23}^{2} + 4 T_{23} + 2$$ acting on $$S_{1}^{\mathrm{new}}(1224, [\chi])$$. ## Hecke characteristic polynomials $p$ $F_p(T)$ $2$ $$1 + T^{4}$$ $3$ 1 $5$ $$1 + T^{8}$$ $7$ $$( 1 - T )^{4}( 1 + T^{4} )$$ $11$ $$1 + T^{8}$$ $13$ $$( 1 + T^{2} )^{4}$$ $17$ $$( 1 + T^{2} )^{2}$$ $19$ $$( 1 + T^{4} )^{2}$$ $23$ $$( 1 + T^{2} )^{2}( 1 + T^{4} )$$ $29$ $$1 + T^{8}$$ $31$ $$( 1 + T^{2} )^{2}( 1 + T^{4} )$$ $37$ $$1 + T^{8}$$ $41$ $$( 1 + T^{2} )^{2}( 1 + T^{4} )$$ $43$ $$( 1 + T^{4} )^{2}$$ $47$ $$( 1 + T^{4} )^{2}$$ $53$ $$( 1 + T^{4} )^{2}$$ $59$ $$( 1 + T^{4} )^{2}$$ $61$ $$1 + T^{8}$$ $67$ $$( 1 - T )^{4}( 1 + T )^{4}$$ $71$ $$( 1 + T )^{4}( 1 + T^{4} )$$ $73$ $$( 1 + T^{2} )^{2}( 1 + T^{4} )$$ $79$ $$( 1 + T )^{4}( 1 + T^{4} )$$ $83$ $$( 1 + T^{4} )^{2}$$ $89$ $$( 1 + T^{4} )^{2}$$ $97$ $$( 1 + T^{2} )^{2}( 1 + T^{4} )$$
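As a quick sanity check, the trace form can be recovered numerically from the $q$-expansion by summing a coefficient over the four complex embeddings of $\zeta_8$. The short script below is my own illustration, not part of the database page:

```python
import cmath

# The four embeddings of zeta_8: the primitive 8th roots of unity.
zetas = [cmath.exp(2j * cmath.pi * k / 8) for k in (1, 3, 5, 7)]

def trace(coeff):
    """Sum a coefficient a_n = coeff(zeta_8) over the four embeddings."""
    return sum(coeff(z) for z in zetas)

# a_7 = 1 - zeta_8^3 and a_16 = -1, read off from the q-expansion above:
t7 = trace(lambda z: 1 - z ** 3)
t16 = trace(lambda z: -1)
print(round(t7.real), round(t16.real))  # the 4*q^7 and -4*q^16 terms of Tr(f)
```

The sum of the primitive 8th roots of unity vanishes, which is why $a_2 = \zeta_8$ contributes no $q^2$ term to the trace form.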
# Factoring By Grouping

In this video, we are going to look at how to factor by grouping.

For example: to factor $12x^3+18x^2+10x+15$, first think about it in two pieces, cutting it in half. Look at the first two terms and take out the greatest common factor, which is $6x^2$. This leaves us with $6x^2(2x+3)$.

Then, we do the same thing for the next two terms. The greatest common factor here is $5$, and taking it out leaves us with $5(2x+3)$.

Notice that the same exact factor is written in both sets of parentheses. From here we can factor out the $(2x+3)$ from each term. This results in a final answer of $(2x+3)(6x^2+5)$.

## Video-Lesson Transcript

Let's go over factoring by grouping. We have $12x^3 + 18x^2 + 10x + 15$, and we're going to factor this four-term polynomial by grouping, so we split it in half.

Let's look at the first two terms of the polynomial, $12x^3 + 18x^2$, and find their greatest common factor. It's $6x^2$. Let's factor:

$12x^3 + 18x^2 = 6x^2 (2x + 3)$

Now let's do the same with the other two terms, $10x + 15$. The greatest common factor for these two is $5$. Let's factor, keeping the plus sign:

$10x + 15 = +5 (2x + 3)$

At this point, we have the same exact factor, $2x + 3$, in both terms, so we can factor it out of both. We have $6x^2 (2x + 3) + 5 (2x + 3)$, and taking the common factor out gives

$(2x + 3) (6x^2 + 5)$

Let's take a look at another example: $6x^3 + 7x^2 - 42x - 49$. Again, split the polynomial in two and factor the first two terms first: $6x^3 + 7x^2$. There's no common factor of these two coefficients greater than $1$; the only common factor is the variable:

$6x^3 + 7x^2 = x^2 (6x + 7)$

Then do the same with the remaining two terms, $- 42x - 49$. $7$ goes into both of these, but since they are negative, we want to take the negative out:
$-7 (6x + 7)$

So now we have $x^2 (6x + 7) - 7 (6x + 7)$. Let's take the common factor out, and we're left with our final answer:

$(6x + 7) (x^2 - 7)$

This is factoring by grouping.
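The grouping procedure above can be checked mechanically. This helper is a sketch of mine (not part of the lesson) that runs the same steps on the coefficients of a four-term cubic $ax^3 + bx^2 + cx + d$:

```python
from math import gcd

def factor_by_grouping(a, b, c, d):
    """Try to factor a*x^3 + b*x^2 + c*x + d by grouping.

    Split into (a*x^3 + b*x^2) + (c*x + d), pull the GCF out of each
    half, and check that both halves leave the same binomial factor.
    Returns ((p, q), (g1, g2)), meaning (p*x + q) * (g1*x^2 + g2),
    or None if grouping fails.
    """
    g1 = gcd(abs(a), abs(b))      # GCF of the first pair (times x^2)
    g2 = gcd(abs(c), abs(d))      # GCF of the second pair
    if c < 0:                     # take the negative out, as in the lesson
        g2 = -g2
    first = (a // g1, b // g1)    # binomial left by the first half
    second = (c // g2, d // g2)   # binomial left by the second half
    if first != second:
        return None               # halves disagree: grouping fails
    return first, (g1, g2)

print(factor_by_grouping(12, 18, 10, 15))  # (2x+3)(6x^2+5)
print(factor_by_grouping(6, 7, -42, -49))  # (6x+7)(x^2-7)
```

Both calls reproduce the answers worked out in the transcript.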
# Precision Standard Error

Independent uncertainties combine in quadrature. If we sum the lengths (putting the pieces of wood end-to-end), then: total = 1 + 3 + 5 = 9 m, with precision = sqrt(2² + 3² + 3²) = sqrt(22) mm. By using your platform to measure a known quantity, you can reliably test the accuracy of your method.

## Random and Systematic Errors

Examples of causes of random errors are electronic noise in the circuit of an electrical instrument, and irregular changes in the heat-loss rate from a solar collector due to changes in the wind. Two types of systematic error can occur with instruments having a linear response; one is offset or zero-setting error, in which the instrument does not read zero when the quantity to be measured is zero. In the same way as the zero point on a temperature scale is an arbitrary point, chosen according to some definition (e.g., "the freezing point of water"), an instrument's zero is a convention, and the standard error of the empirical zero could accordingly be included. Typically, this would be much smaller than the standard error of a person measure; for example, consider 1000 reasonably targeted observations of a dichotomous item.

## Standard Error of the Mean

The standard error is the standard deviation of the sampling distribution; that is, it is a measure of the dispersion of the means of samples if a large number of different samples had been drawn from the population. The standard error of the mean estimates the variability between samples, whereas the standard deviation measures the variability within a single sample. For example, for a sample mean (M), we can calculate the standard error of the mean (SEM), which provides an estimate of how much fluctuation from the population parameter we can expect. Each data point gives us an estimate of the mean or the measure, and the accumulation of the estimates provides the final best estimate along with its precision, its standard error.

It is rare that the true population standard deviation is known, and the sample standard deviation will very rarely be equal to it. To estimate the standard error it is sufficient to use the sample standard deviation s instead of σ, and we could use this value to calculate confidence intervals. When the true underlying distribution is known to be Gaussian, although with unknown σ, the resulting estimated distribution follows the Student t-distribution.

As an illustration, consider a sample of observations drawn from a large population. The age data are in the data set run10 from the R package openintro that accompanies the textbook by Dietz. Because the 9,732 runners are the entire population, 33.88 years is the population mean, μ, and 9.27 years is the population standard deviation, σ; a sample mean of x̄ = 37.25 is greater than the true population mean. The distribution of the sample means for 20,000 samples, where each sample is of size n = 16, indicates how far the mean of a sample may be from the true population mean; the standard deviation of all possible sample means of size 16 is the standard error. Because the ages of the runners have a larger standard deviation (9.27 years) than does the age at first marriage (4.72 years), the standard error of the mean is larger for the runners.

## Sample Size and Precision

With all other factors held steady, as sample size increases, the standard error decreases. A larger sample size will result in a smaller standard error of the mean and a more precise estimate; put another way, as the sample size increases, so does the statistical precision of the parameter estimate. For a sample of n data points with sample bias coefficient ρ, Gurland and Tripathi (1971) provide a correction and equation for the effect of correlation in the sample on the expected error in the mean; the unbiased standard error plots as the ρ = 0 diagonal line with log-log slope −½. If people are interested in managing an existing finite population that will not change over time, then it is necessary to adjust for the population size; this is called an enumerative study.

## Standard Error and Significance

Inferentially, the standard error is commonly used in estimating the statistical significance of differences between or among parameter estimates. As discussed previously, the larger the standard error, the wider the confidence interval about the statistic; when the standard error is large relative to the statistic, the statistic will typically be non-significant. If the interval calculated above includes the value 0, then it is likely that the population mean is zero or near zero. The margin of error and the confidence interval are based on a quantitative measure of uncertainty: the standard error. Consider, for example, a regression: when the standard error of estimate is large, one would expect to see many of the observed values far away from the regression line, as in Figures 1 and 2.

## Standard Error and Effect Size

It is particularly important to use the standard error to estimate an interval about the population parameter when an effect size statistic is not available. Many statistical results obtained from a computer statistical package (such as SAS, Stata, or SPSS) do not automatically provide an effect size statistic, and for some statistics the associated effect size is not available. In a regression, the effect size statistic is the Pearson product-moment correlation coefficient (the full and correct name for the Pearson r correlation, often noted simply as R); that statistic is the effect size of the association tested.

## Relative Standard Error

As an example of the use of the relative standard error, consider two surveys of household income that both result in a sample mean of $50,000. If one survey has a standard error of $10,000 and the other has a standard error of $5,000, then the relative standard errors are 20% and 10%, respectively.

## Samples and Populations

A population is "the entire group of people that a particular study is interested in" (Brown, 2006, p. 24). Given that few language researchers have the resources to study such a large population in its entirety, they typically use samples, i.e., subgroups drawn from the population to represent the population.
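The SEM estimate discussed above (sample standard deviation s divided by the square root of n) can be sketched in plain Python. The function name and the simulated draw below are mine, not from the article; the population values 33.88 and 9.27 echo the runners example:

```python
import math
import random

def standard_error_of_mean(sample):
    """Standard error of the mean: s / sqrt(n), where s is the
    sample standard deviation (n - 1 in the denominator)."""
    n = len(sample)
    mean = sum(sample) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    return s / math.sqrt(n)

# Illustrative draw (not the actual run10 data): a sample of size
# n = 16 from a population with mean 33.88 and sd 9.27.  The spread
# of such sample means approaches sigma / sqrt(16).
random.seed(0)
sample = [random.gauss(33.88, 9.27) for _ in range(16)]
print(round(standard_error_of_mean(sample), 2))
```

Doubling n to 64 halves the standard error, which is the sample-size effect described in the text.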
# Elliptic curves and supercuspidal representations of conductor $p^2$ Let $E$ be an elliptic curve defined over $\mathbf{Q}$. Let $p \geq 5$ be a prime of additive reduction for $E$. Let $f$ be the newform associated to $E$, and let $\pi$ be the irreducible admissible representation of $G=\mathrm{GL}_2(\mathbf{Q}_p)$ associated to $f$ (the so-called local component of $f$ at $p$). Then $\pi$ has conductor $p^2$. Assume that $\pi$ is supercuspidal. By the classification of supercuspidal representations, there exists an irreducible representation $\xi : \mathrm{GL}_2(\mathbf{F}_p) \to \mathrm{GL}(V)$ such that $\pi \cong \mathrm{Ind}_K^G \xi$ where $K$ is the maximal compact-mod-center subgroup of $G$ given by $K=\mathbf{Q}_p^\times \cdot \mathrm{GL}_2(\mathbf{Z}_p)$. The representation $\xi$ has dimension $p-1$. The classification of irreducible representations of $\mathrm{GL}_2(\mathbf{F}_p)$ is well-known, and in our case $\xi$ arises from a character $\phi : \mathbf{F}_{p^2}^\times \to \mathbf{C}^\times$. More precisely we have the relation $\operatorname{Tr}(\xi(g)) = - (\phi(g)+\phi(g^p))$ for every element $g$ in $\mathbf{F}_{p^2}$ not in $\mathbf{F}_p$. Now since $f$ arises from an elliptic curve, the representation $\xi$ has trivial central character so that $\phi |_{\mathbf{F}_p^\times}=1$. This implies $\phi$ has order dividing $p+1$ and $\operatorname{Tr}(\xi(g)) = - (\phi(g)+\overline{\phi}(g))$. Moreover the representation $\xi$ can be realized over $\mathbf{Q}$, which implies that $\phi+\overline{\phi}$ takes values in $\mathbf{Q}$. Therefore the possibilities are very restricted: $\phi$ has order 3, 4 or 6. Note that this implies $p \equiv -1 \mod{} 3, 4 \textrm{ or } 6$ respectively. Now comes my question: is there a simple way to tell whether $\phi$ has order 3, 4 or 6 in terms of $E$? By the local Langlands correspondence, I would expect a condition depending only on the local Galois representation associated to $E$. 
Moreover, when $p \equiv -1 \mod{12}$, does every possible order for $\phi$ occur? • Doesn't this come from the $p$-valuation of the discriminant of $E$? Namely, the order is $12/\gcd(v_p(\Delta),12)$. May 9, 2015 at 7:04 • @GuestPoster I've written a Magma code and for $p=11$ it seems to be the case that $\phi$ has order 6, 4, 3, 3 according to whether $v_p(\Delta)$ is 2, 3, 4, 8, so your guess seems right. For $p=5$ I have examples where $(v_p(\Delta),\textrm{ord}(\phi))=(2,3),(2,6),(4,3),(4,6),(8,3),(8,6)$ so $v_p(\Delta)$ seems not sufficient to determine $\phi$. For $p=17$ I have a $(2,3)$-example. If you could elaborate on your comment, this would make a nice answer! May 9, 2015 at 9:52 • I've never actually seen the specific question of the order of $\phi$ being asked, but it must be equivalent to something that is known in this genre (for instance, that the order of the inertia group is same as the order of $\phi$). Unfortunately, for $p\ge 5$ everyone seems to assume this type of knowledge, or at best cites Serre's 1972 Inventiones paper or Tate's algorithm. For instance page 3-4 of Kraus's work (on field extensions to prescribe good reduction, as in Will Sawin's answer) eudml.org/doc/155566 where a bit more of a proof is given in Section 2. May 9, 2015 at 12:05 • Another possibly useful paper is "Euler factors determine local Weil representations" by the Dokchitsers. By my understanding, the local Weil representation is thus determined by the Euler factor of $E$ over the field of good reduction, and this field follows for $p\ge 5$ from the discriminant valuation. Similarly, the minimal model of $E$ over this field and the subsequent Euler factor from point-counting are also immediate (as in Will Sawin's answer). May 9, 2015 at 12:09 • I have added a rough explanation of the relation between the determinant and the inertia to my answer. 
May 9, 2015 at 13:22 Because $p \geq 5$, the ramification of the Galois representation is tame, hence the action of the inertia group on that Galois representation factors through a cyclic group. For the exact same defined-over-$\mathbb Q$ reasons, the image of the inertia group has order $1$, $2$, $3$, $4$, or $6$. If it's $1$ or $2$, the representation is a $1$-dimensional character of the inertia group tensor a two-dimensional unramified representation. Because all irreducible unramified representations over a local field are one-dimensional, the Galois representation is not irreducible, so does not correspond to a supercuspidal representation. So the order possibilities are $3$, $4$, and $6$. You might guess that the order of the inertia group corresponds exactly to the order of that character. As far as I know, this is correct, and is true more generally for tamely ramified Galois representations / automorphic representations that arise from induction of representation $GL_n(\mathbb F_p)$ in the manner you describe, but I don't actually know anything about the local Langlands correspondence. Assuming it's correct, we can see that every possible order occurs. The curve $y^2=x^3-p$ has inertia of order $6$ and is supercuspidal when $p \equiv -1$ mod $3$, $y^2=x^3-px$ is similar for order $4$, and $y^2=x^3- p^2$ for order $3$. To prove this, first check that $y^2=x^3-1$ and $y^2=x^3-x$ are unramified outside $2$ and $3$ by computing the discriminants of the polynomials: $27$ and $4$. The curves I wrote down are all twists of those two that are trivialized over $\mathbb Q(p^{1/6})$, $\mathbb Q(p^{1/4})$, and $\mathbb Q(p^{1/3})$ respectively. These are Galois extensions whose inertia group at $p$ has order $n=6$, $4$, or $3$. The isomorphism is by multiplying $x$ by the square of the $n$th root of $p$ and multiplying $y$ by the third power. 
Using this we can see how the inertia group acts: It acts on the $n$th root by multiplying by an $n$th root of unity, so it acts on the curve by multiplying $x$ by the second power of that $n$th root of unity and $y$ by a third power of the $n$th root of unity. This is a CM automorphism of the curve of order $n$ and acts faithfully on the Tate module, so $n$ is the order of the inertia group acting on the Tate module. To tell whether the representation is irreducible or reducible you look at the Frobenius action by conjugation on the inertia group. If it's trivial, then the Tate module splits into two distinct characters of the inertia group. If it's nontrivial, then the two characters are Galois conjugate to each other and cannot be separated. The conjugation action is exactly raising to the power of $p$ mod $n$, so is nontrivial when $p \equiv -1$ mod $n$. Another way to get these answers would be to apply Tate's algorithm to compute the Neron model type (II, III, IV respectively) and then use the formula that for $p>5$ determines the Galois representation from the Neron model type. This would let you construct many more examples. So indeed all occur when $p \equiv -1$ mod $12$, assuming my claim about the local Langlands correspondence is correct. Here's how to relate the order of inertia to the $p$-adic valuation of the discriminant, when a curve has potentially good reduction. Observe that for a curve with semistable reduction, the discriminant is naturally a section of the $12$th power of the relative canonical bundle - in other words, it's a modular form of weight $12$. For a curve with good reduction, the discriminant is nonvanishing. So for an elliptic curve with potentially good reduction, if the discriminant has $p$-adic valuation $v$, then over a field extension with good reduction, the relative canonical bundle is shifted from the relative canonical bundle of the original curve by $p^{v/12}$.
I mean the natural map from one to the other is multiplication by $p^{v/12}$. This gives the Galois action on the relative canonical bundle of a smooth model - it's multiplication by an $n$th root of unity, where $n$ is the denominator of $v/12$. Then the Galois action on the relative de Rham cohomology is the sum of that Galois character and its dual. By comparing the relative de Rham cohomology to the Tate module, we get that the order of the inertia group is also $n$. One way to check for potentially good reduction is to check that the $p$-adic valuation of the $j$ invariant is nonnegative. • Thanks Will for your answer. Do you have a reference for determining the Galois representation from the Kodaira type for $p>5$? May 8, 2015 at 16:52 • @FrançoisBrunault No, I don't remember where I heard it. Let me flesh out my other argument instead, which I realized is simpler. May 8, 2015 at 19:19 • @FrançoisBrunault: This is e.g. in Serre's paper "Propriétés galoisiennes des points d'ordre fini des courbes elliptiques", p.312 (and he refers to Néron, I think) May 9, 2015 at 17:34 Jared Weinstein and I worked out an algorithm which explicitly computes the character $\phi: \mathbf{F}_{p^2}^\times \to \mathbf{C}^\times$ (more precisely, a conjugate pair of characters) using modular symbols. You can read about it in our paper. It's implemented in both Sage and Magma.
Here's the computation in Sage for the elliptic curve with Cremona label 121a:

    sage: f = Newform('121a')
    sage: Pi = LocalComponent(f, 11)
    sage: Pi.species()
    'Supercuspidal'
    sage: Pi.characters()
    [
    Character of unramified extension Q_11(s)* (s^2 + 7*s + 2 = 0), of level 1, mapping s |--> d, 11 |--> 1,
    Character of unramified extension Q_11(s)* (s^2 + 7*s + 2 = 0), of level 1, mapping s |--> -d + 1, 11 |--> 1
    ]
    sage: xi1, xi2 = _
    sage: xi1.base_ring()
    Number Field in d with defining polynomial x^2 - x + 1
    sage: xi1.multiplicative_order()
    6

• Thanks David, I'm indeed using your algorithm to compute the examples in my comment to @GuestPoster above. I will see if I can find more patterns. May 9, 2015 at 10:00
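The discriminant-valuation formula quoted in the comments above, order $= 12/\gcd(v_p(\Delta),12)$, is easy to tabulate. The snippet below is my own illustration (the function name is hypothetical), checked against the three twist families from the first answer:

```python
from math import gcd

def inertia_order(v_delta):
    """Order of the (tame) inertia action on the Tate module, from the
    p-adic valuation of the discriminant, via the formula quoted in the
    comments: 12 / gcd(v_p(Delta), 12).  Intended for p >= 5 and a curve
    with potentially good reduction."""
    return 12 // gcd(v_delta, 12)

# The three twist families from the answer, for p >= 5:
#   y^2 = x^3 - p    has v_p(Delta) = 2  ->  order 6
#   y^2 = x^3 - p*x  has v_p(Delta) = 3  ->  order 4
#   y^2 = x^3 - p^2  has v_p(Delta) = 4  ->  order 3
print([inertia_order(v) for v in (2, 3, 4)])
```

The valuations come from $\Delta = -16(4a^3 + 27b^2)$ for $y^2 = x^3 + ax + b$, and the resulting orders 6, 4, 3 agree with the inertia orders asserted in the answer.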
# Biexponential scale manual

## Why biexponential scaling?

Flow cytometric immunofluorescence measurements are usually displayed on a decade log scale, which allows cells with a wide range of intensities to be visualized on the same plot. A log scale, however, cannot display zero or negative values, and compensated data routinely contains both. The affected events accumulate against the low x- and y-axes: they cannot be seen on a conventional plot, although they are still recorded when a quadrant region is applied, and the pile-up on the axis makes it impossible to judge visually whether compensation is set correctly. Populations on the left of the plot can be lost entirely.

The biexponential scale addresses this. It is a combination of linear and log scaling on a single axis, using a hyperbolic-sine-like (arcsinh) function as its backbone: close to log at the upper end and close to linear at the low end, so events at or below zero can still be displayed. In effect, a section of linear scale is added to log-acquired data, which makes "squished" data easy to view. A biexponential transformation therefore provides a more precise visualization tool when comparing populations with low fluorescence against those with high fluorescence, as opposed to a standard log scale, and the display artifacts that log plots commonly exhibit in flow cytometry are largely avoided.

Various alternative scales have been devised to allow the off-axis events to be visualized (Novo and Wood). The "logicle" implementation of the biexponential is used in many popular software packages, such as BD FACSDiva and FlowJo; other types of biexponential scaling exist, including Hyperlog, and the hyperbolic arcsine (arcsinh) serves a similar purpose in Cytobank. For intuition, imagine using a ruler to measure the distance between fluorescence intensity values on a plot: on a log scale, values one decade apart are evenly spaced all the way down, whereas on a biexponential or arcsinh scale, values near zero are pulled much closer together than values high on the scale.
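To make the "linear at the low end, log at the upper end" behaviour concrete, here is a small Python sketch using the hyperbolic arcsine, the backbone of biexponential-style scales. The cofactor of 5 is an arbitrary illustrative choice, not a value taken from any instrument manual.

```python
import math

# arcsinh-based display scale: nearly linear around zero (so zero and
# negative events stay on scale), asymptotically logarithmic for large values.
COFACTOR = 5.0  # illustrative choice; real tools expose this as a setting

for v in (-5, 0, 5, 50, 500, 5000):
    print(f"{v:>6} -> {math.asinh(v / COFACTOR):7.3f}")

# At the high end, successive decades become equally wide, as on a log scale:
decade = math.asinh(5000 / COFACTOR) - math.asinh(500 / COFACTOR)
print(round(decade, 3), round(math.log(10), 3))  # nearly equal
```

Note how the negative value maps to a finite on-scale position, which a pure log axis cannot do.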
## Working with biexponential scales in BD FACSDiva

Biexponential display is enabled by default in BD FACSDiva software but can be disabled in user preferences, allowing more events to be recorded per experiment. To apply it to a plot, select the plot and check the Biexponential boxes in the Inspector window. During acquisition, optimize the FSC-A and SSC-A voltages to place the population of interest on scale; if populations are unclear, change the FSC-A scale from linear to log and increase the scale until you see clear populations, and switch the axis to a biexponential scale if you are losing populations on the left.

Scaling is automatic by default. When automatic scaling is in effect, the scale-to-population feature resets the range of the negative scale for a selected population. Advanced users can switch to manual scaling and use the Biexponential Editor to adjust biexponential scales, export and import scale values, and apply values to other elements in an experiment. Quadrant gate segments can also be offset or rotated on a pivot point. For details, see Using Biexponential Scaling in the BD FACSDiva Software Reference Manual; for a tutorial highlighting this feature, see Using Tethering, Batch Analysis, and Biexponential Display in the Getting Started guide. A separate instruction manual describes how to adjust axis scaling factors in FlowJo to optimally visualize FCS files generated with the BD Accuri C6 flow cytometer.

Scale values travel with the data as FCS keywords: one keyword indicates the automatically set biexponential scale value for an instrument parameter, and *PnMS* indicates the manually set value (-1 indicates an invalid scale value). When importing an FCS file that carries valid biexponential scales, those scales are applied to the data; if the file doesn't have valid scales, they are calculated from the entire data set. If custom keywords of the same name are defined for more than one level in an experiment, the lower-level definition overwrites the one at a higher level. To batch-analyze with consistent axes, set up the batch analysis (found by right-clicking the experiment in the Browser window), select the checkboxes next to Statistics and Freeze Biexponential Scales, then select the Auto option, enter 0 for View Time, and specify a name and location.

## The biexponential display function

The 'biexponential' used for display is an over-parameterized inverse of the hyperbolic sine. The function to be inverted takes the form

biexp(x) = a*exp(b*(x-w)) - c*exp(-d*(x-w)) + f

with default parameters selected to correspond to the hyperbolic sine. Once the two exponentials have been computed, the function value and the first two derivatives can be computed with only a few more multiplications and additions. As the inverse cannot be given in closed form, computing scale values for arbitrary data requires solving it numerically. The biexponential transformation with full parameterization is weakly identifiable, so fitted parameter values should be interpreted with care.
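Since the display function has no closed-form inverse, scale values must be found numerically. A minimal Python sketch of this idea follows; the parameter names match the a, b, c, d, f, w form above, while the bisection inverter is an illustrative choice, not any vendor's implementation.

```python
import math

def biexp(x, a=0.5, b=1.0, c=0.5, d=1.0, f=0.0, w=0.0):
    # a*exp(b*(x-w)) - c*exp(-d*(x-w)) + f; the defaults reduce to sinh(x).
    return a * math.exp(b * (x - w)) - c * math.exp(-d * (x - w)) + f

def inv_biexp(y, lo=-40.0, hi=40.0, **params):
    # biexp is strictly increasing (both terms have positive derivative),
    # so a simple bisection recovers the scale position x for a value y.
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if biexp(mid, **params) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

x = 2.0
y = biexp(x)                    # equals sinh(2) with default parameters
print(round(y, 6))              # 3.62686
print(round(inv_biexp(y), 6))   # 2.0
```

With non-default parameters the same inverter still works, because monotonicity is preserved for positive a, b, c, d.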
## The biexponential model

A biexponential equation (function, curve, model, distribution) is the sum of two exponentials. Specifically, a biexponential model has the form

$y = a_1 e^{-b_1 t} + a_2 e^{-b_2 t}$

Biexponential time-series models commonly find use in pharmacokinetics, and biexponential versus monoexponential models are compared in other fields as well, for example in diffusion-weighted MR imaging (Ding et al., Comparison of Biexponential and Monoexponential Model of Diffusion-Weighted Imaging for Distinguishing between Common Renal Cell Carcinoma and Fat Poor Angiomyolipoma).

## Fitting a biexponential model in R

SSbiexp is a self-starting nonlinear model in R: it evaluates the biexponential model function and its gradient, and it has an initial attribute that creates initial estimates of the parameters A1, lrc1, A2, and lrc2.

Usage: SSbiexp(input, A1, lrc1, A2, lrc2)

It returns a numeric vector of the same length as the input, the value of the expression A1*exp(-exp(lrc1)*input) + A2*exp(-exp(lrc2)*input). If the arguments A1, lrc1, A2, and lrc2 are the names of objects, the gradient (Jacobian) matrix with respect to these names, evaluated at the values of those names, is attached as an attribute named gradient. The amplitudes A1 and A2 may be negative, but the automatic initial estimates can fail on data containing negative response values, which is a common source of errors when fitting such data.

When fitting with a general-purpose optimizer (for example MATLAB's fminsearch), keep in mind that the objective function has large flat regions where the optimizer can get stuck if the rate constants (lambdas) get too large. The rates must match the scale of the data: if the x data are mostly much greater than 10, the true lambdas must be much less than 1/10; otherwise the fitted curve will be essentially zero almost everywhere.

## Transformations in automated analysis and plotting

Carefully chosen data transformations and corresponding parameters have been suggested to overcome some of the problems surrounding manual FCM analysis and gating [12,13]; data transformation plays an even more important role in an automated, high-throughput setting. In one automated gating strategy, parameters such as scales, coordinate values of fixed gates, rotation angles, and biexponential scale parameters were initialized per cytometer; for leukocyte selection, singlets were automatically selected from FSC and SSC plots after anticlockwise rotation, with x and y values transformed accordingly, and the width and area values of the FSC parameter were recovered and plotted on linear two-dimensional graphs.

With the dedicated fortify method implemented for the flowSet, ncdfFlowSet and GatingSet classes, both raw and gated flow cytometry data can be plotted directly with ggplot, with different biexponential scales. The ggcyto wrapper and some custom layers simplify the plotting: they add a default scale_fill_gradientn, fuzzy-match in aes by either detector or fluorochrome names, determine the parent population automatically, and extract and plot the gate object by simply referring to the child population name.

## A note on manual compensation

One of the most common rumors passed down incorrectly by word of mouth is manual compensation, that is, adjusting the compensation based on how the data visually looks. Background fluorescence should be the same between the negative and the positive population, and this is far easier to judge on a biexponential display, which makes compensation errors easier to detect (see Why You Should Never Manually Compensate Your Data, figure courtesy of Pratip K. Chattopadhyay, Ph.D.). If you have manually compensated data in your lab notebook, strike it out now.
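To illustrate the SSbiexp parameterization, where the log rate constants lrc keep the rates exp(lrc) strictly positive, here is a Python sketch of the mean function and the gradient that the R selfStart model attaches. It is a re-implementation for illustration, not the R code itself.

```python
import math

def ssbiexp(x, A1, lrc1, A2, lrc2):
    # Mean function A1*exp(-exp(lrc1)*x) + A2*exp(-exp(lrc2)*x) and its
    # analytic gradient with respect to (A1, lrc1, A2, lrc2).
    k1, k2 = math.exp(lrc1), math.exp(lrc2)  # rate constants, always positive
    e1, e2 = math.exp(-k1 * x), math.exp(-k2 * x)
    value = A1 * e1 + A2 * e2
    grad = (e1, -A1 * x * k1 * e1, e2, -A2 * x * k2 * e2)
    return value, grad

# Evaluate at x = 1 with illustrative parameters.
v, g = ssbiexp(1.0, A1=3.0, lrc1=0.0, A2=1.0, lrc2=-1.0)
print(round(v, 6))
```

The gradient components follow by the chain rule; for instance d(value)/d(lrc1) = -A1 * x * exp(lrc1) * exp(-exp(lrc1) * x).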
https://d12frosted.io/tags/environment.html
# Boris Buliga

Welcome to my personal site, where I irregularly post technical notes, usually Emacs or Haskell-related. I am a developer at Wix during the daytime, and a developer at home before and after the daytime. When I am not writing code, I am either drinking wine or drinking tea (with these little cups). Cheese is my bread and tracking everything in Emacs is my cheese. So welcome!

# (tagged 'environment)

### Emacs: reusing window for helpful buffers

June 26, 2019

Ironically, I find the helpful package quite helpful. It boosts the Emacs help buffer with much more contextual information. If you haven’t tried it out yet, I advise you to do so.

However, by default, it doesn’t play nicely with windows. Usually when I write some Elisp and I want to read the documentation of some function or variable, I hit C-h f or C-h v respectively and the help buffer is shown in a separate window. Which is convenient in my opinion, because I can see the code and the help. Sometimes the help contains links to other entries that I need to navigate. And when I hit <RET>, the window containing code shows another help buffer. Which might be good for some people, but I hate this behaviour, because usually I want to see the code that I am editing. This is also annoying if you set the value of helpful-max-buffers to 1: the help window and the window with code are swapped on every navigation.

The good thing, it’s configurable (as almost everything in Emacs land).

    (setq helpful-switch-buffer-function #'+helpful-switch-to-buffer)

    (defun +helpful-switch-to-buffer (buffer-or-name)
      "Switch to helpful BUFFER-OR-NAME.

    The logic is simple, if we are currently in the helpful buffer,
    reuse its window, otherwise create new one."
      (if (eq major-mode 'helpful-mode)
          (switch-to-buffer buffer-or-name)
        (pop-to-buffer buffer-or-name)))

### Revisiting Eru

November 4, 2018

As you might know, Eru is the supreme deity of Arda. The first things that Eru created were the Ainur. He then bade the Ainur to sing to him. Each Ainu had a particular theme given by Eru. Sure enough, Eru makes the ‘World and All That Is’.

So when I get a new clean system there is nothing yet. And so I call upon the wisdom and power of Eru.sh - the one who creates Ainur and the ‘World and All That Is’.

    $ curl https://raw.githubusercontent.com/d12frosted/environment/master/bootstrap/eru.sh | bash

I just have to wait patiently, while everything is being downloaded and installed, while all configuration cogs are being placed on the right spot.

### High quality GIF from video

October 13, 2018

When it comes to converting video to GIF, one usually gets a huge file and questionable quality. Most of the guides suggest using FFmpeg to do the conversion, but usually they don’t bother with the quality of the result. As it turns out, folks from FFmpeg made some huge steps in improving the GIF output.

### Fish: notify me when you finish

June 13, 2017

Have you ever been in a situation when you called git fetch, stared at the screen for several seconds and then switched to the browser to read something ‘useful’ while git fetches updates? And in five minutes you’re like ‘Oh wait, I was doing something important, no?’. Rings a bell, doesn’t it?
https://www.doubtnut.com/question-answer/in-each-of-the-following-determine-whether-the-statement-is-true-or-false-if-it-is-true-prove-it-if--516947187
# In each of the following, determine whether the statement is true or false. If it is true, prove it. If it is false, give an example.

If A ⊄ B and B ⊄ C, then A ⊄ C.

Updated On: 17-04-2022
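The statement is false. One counterexample (these particular sets are chosen here for illustration, not given on the page) can be checked directly:

```python
# A ⊄ B and B ⊄ C, and yet A ⊆ C — so "A ⊄ C" does not follow.
A, B, C = {1, 2}, {2, 3}, {1, 2}

print(A <= B)  # False: A is not a subset of B (1 is not in B)
print(B <= C)  # False: B is not a subset of C (3 is not in C)
print(A <= C)  # True:  A IS a subset of C (indeed A == C)
```

Any example where A equals C while neither is contained in B works the same way.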
http://www.mi.fu-berlin.de/w/CompMolBio/TransitionMatrix
# Transition Matrix ## Properties Transition matrices are so called stochastic matrices, which means, that all entries are real values between zero and one, meaning, that each entry is a probability and a row sum equal one, meaning that the probability to jump to an arbitrary state is one From the sum-statement follows easlily, that there is a right eigenvector with eigenvalue of one, which is constant From the Perron-Frobenius Theorem follow immediately for stochastic matrices, that 1. there exists a positive eigenvalue of one, which is called Perron Eigenvalue 2. All other eigenvalues lie within the spectral radius defined by the Perron Eigenvalue 3. the associated left and right eigenvector are also non-negative, so also the left eigenvalue is non negativ. 4. if all entries are strictly positive, then, the dimension of the eigenspace associated with the Perron-Eigenvalue is one. A transition matrix can be used to propagate a distribution of states over time. We take a state distribution, which is a vector of dimension and sum one. That means that the probability over all states is one. The probability in each state is then moved to other states, when we apply this vector to the left-hand side of the transition matrix. For the left Perron-Eigenvalue can be shown, that it resembles the stationary distribution. That means, that this distribution applied to the transition matrix does not change, i.e. it is an eigenvector to an eigenvalue of one. Important to know is, that the existance of the stationary distribution is not connected to the detailed balance property Detailed balance is defined as which is equivalent to a symmetry over the stationary distribution. In some cases it is possible to postulate a generator, which taken to the exponent can construct the transition matrix for arbitrary timesteps. 
This is only possible (in a unique and intuitive way) if the transition matrix is positive definite. This is due to the logarithm of the eigenvalues, which is only uniquely defined if the eigenvalues are positive, as positive definiteness guarantees. From the rate matrix K we get the transition matrix back using T(t) = exp(tK).

Topic revision: r2 - 29 Oct 2007, StefanBernhard
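The properties above are easy to verify numerically. A minimal sketch in Python/NumPy (the 3×3 matrix is an illustrative choice, not taken from this page; the generator is built via the eigendecomposition rather than a general matrix logarithm):

```python
import numpy as np

# An illustrative row-stochastic matrix: entries in [0, 1], each row sums to one.
T = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
assert np.allclose(T.sum(axis=1), 1.0)

# The constant vector is a right eigenvector with eigenvalue one: T 1 = 1.
assert np.allclose(T @ np.ones(3), np.ones(3))

# Stationary distribution: the left eigenvector for the Perron eigenvalue 1,
# normalized so its entries sum to one.
w, V = np.linalg.eig(T.T)
pi = np.real(V[:, np.argmax(np.real(w))])
pi = pi / pi.sum()
assert np.allclose(pi @ T, pi)  # pi T = pi: applying T leaves pi unchanged

# Propagating an arbitrary start distribution converges towards pi here.
p = np.array([1.0, 0.0, 0.0])
for _ in range(500):
    p = p @ T
assert np.allclose(p, pi)

# Detailed balance: pi_i T_ij == pi_j T_ji, i.e. the flux matrix is symmetric.
flux = pi[:, None] * T
detailed_balance = bool(np.allclose(flux, flux.T))

# Generator K with T = exp(K), via T = V diag(w) V^-1: the eigenvalue
# logarithms are uniquely defined here because all eigenvalues are real > 0.
w2, V2 = np.linalg.eig(T)
log_w = np.log(w2.astype(complex))
K = np.real(V2 @ np.diag(log_w) @ np.linalg.inv(V2))
# Exponentiating the generator's eigenvalues reconstructs the transition matrix.
T_back = np.real(V2 @ np.diag(np.exp(log_w)) @ np.linalg.inv(V2))
assert np.allclose(T_back, T)
```

For this particular chain the stationary distribution is (0.4, 0.4, 0.2) and detailed balance happens to hold; a matrix with a directed probability cycle would keep a stationary distribution while failing the detailed balance check.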
https://www3.math.tu-berlin.de/disco/research/projects/impartial-mechanisms/
## Optimal Impartial Mechanisms

#### Project Publications

• Impartial Selection with Additive Guarantees via Iterated Deletion (Javier Cembrano, Felix A. Fischer, David Hannon and Max Klimm) EC 2022 – Proc. 23rd ACM Conference on Economics and Computation, pp. 1104–1105. @inproceedings{CembranoFischerHannon+2022, author = {Javier Cembrano and Felix A. Fischer and David Hannon and Max Klimm}, booktitle = {EC 2022 – Proc. 23rd ACM Conference on Economics and Computation}, doi = {10.1145/3490486.3538294}, pages = {1104--1105}, title = {Impartial Selection with Additive Guarantees via Iterated Deletion}, year = {2022}, } Impartial selection is the selection of an individual from a group based on nominations by other members of the group, in such a way that individuals cannot influence their own chance of selection. We give a deterministic mechanism with an additive performance guarantee of $\mathcal{O}(n^{(1+\kappa)/2})$ in a setting with $n$ individuals where each individual casts $\mathcal{O}(n^\kappa)$ nominations, where $\kappa \in [0,1]$. For $\kappa =0$, i.e. when each individual casts at most a constant number of nominations, this bound is $\mathcal{O}(\sqrt{n})$. This matches the best-known guarantee for randomized mechanisms and a single nomination. For $\kappa=1$ the bound is $\mathcal{O}(n)$. This is trivial, as even a mechanism that never selects provides an additive guarantee of $n-1$. We show, however, that it is also best possible: for every deterministic impartial mechanism there exists a situation in which some individual is nominated by every other individual and the mechanism either does not select or selects an individual not nominated by anyone. • Optimal Impartial Correspondences (Javier Cembrano, Felix A. Fischer and Max Klimm) WINE 2022 – Proc.
18th Conference on Web and Internet Economics}, title = {Optimal Impartial Correspondences}, year = {2022}, accepted = {07.09.2022}, doi = {10.1007/978-3-031-22832-2_11}, pages = {187--203}, } We study mechanisms that select a subset of the vertex set of a directed graph in order to maximize the minimum indegree of any selected vertex, subject to an impartiality constraint that the selection of a particular vertex is independent of the outgoing edges of that vertex. For graphs with maximum outdegree $d$, we give a mechanism that selects at most $d+1$ vertices and only selects vertices whose indegree is at least the maximum indegree in the graph minus one. We then show that this is best possible in the sense that no impartial mechanism can only select vertices with maximum degree, even without any restriction on the number of selected vertices. We finally obtain the following trade-off between the maximum number of vertices selected and the minimum indegree of any selected vertex: when selecting at most $k$ vertices out of $n$, it is possible to only select vertices whose indegree is at least the maximum indegree minus $\lfloor(n-2)/(k-1)\rfloor+1$.
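The impartiality constraint in these abstracts can be stated operationally: whether a vertex is selected must not depend on that vertex's own outgoing nominations. A toy brute-force check of this property (not the mechanisms from the papers; the plurality rule and the one-nomination-per-vertex setting are illustrative assumptions) shows that a naive "most nominations wins" rule violates impartiality, while a constant mechanism trivially satisfies it:

```python
from itertools import product

def plurality(profile, n):
    # profile[v] is the single vertex nominated by v; select the vertex with
    # the most nominations, breaking ties in favour of the lowest index.
    indeg = [0] * n
    for u in profile:
        indeg[u] += 1
    return max(range(n), key=lambda v: (indeg[v], -v))

def is_impartial(mechanism, n):
    # Impartiality: for every vertex v and every fixed profile of nominations,
    # changing v's own nomination must never change whether v is selected.
    choices = [[u for u in range(n) if u != v] for v in range(n)]
    for profile in product(*choices):
        for v in range(n):
            selected = mechanism(list(profile), n) == v
            for alt in choices[v]:
                changed = list(profile)
                changed[v] = alt
                if (mechanism(changed, n) == v) != selected:
                    return False
    return True

# Plurality fails: in a 3-cycle of nominations a vertex can break the tie in
# its own favour by redirecting its nomination.
plurality_impartial = is_impartial(plurality, 3)
# A mechanism that ignores nominations entirely is impartial (but useless).
constant_impartial = is_impartial(lambda p, n: 0, 3)
```

This is exactly the tension the papers quantify: ignoring nominations gives impartiality with a trivial additive guarantee of $n-1$, and the cited results show how close an impartial mechanism can get to the nomination counts.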
https://www.zenodo.org/record/6801130/export/csl
Conference paper Open Access # Value-driven Model-Based Optimization coupling Design-Manufacturing-Supply Chain in the Early Stages of Aircraft Development: Strategy and Preliminary Results Donelli, G.; Ciampa, P.D.; Lefebvre, T.; Bartoli, N.; Mello, J.; Odaguil, F.; van der Laan, T. ### Citation Style Language JSON Export { "publisher": "Zenodo", "DOI": "10.5281/zenodo.6801130", "title": "Value-driven Model-Based Optimization coupling Design-Manufacturing-Supply Chain in the Early Stages of Aircraft Development: Strategy and Preliminary Results", "issued": { "date-parts": [ [ 2022, 6, 30 ] ] }, "abstract": "<p>A value-driven model-based approach concurrently coupling design, manufacturing and supply chain in the early development stage of aircraft design has been developed within the European project AGILE4.0. The benefits of using this methodology have been highlighted by the aeronautical application case focused on the design, manufacturing and supply chain of an horizontal tail plane. Finding a Pareto-front simultaneously optimizing the design, manufacturing and supply chain domains is the next challenge to face. The research activity proposed in this paper represents the first step of this ambitious goal. The objective is to identify the optimization strategy to use for the global optimization campaign by exploring, first, simple and representative Multidisciplinary Design and Optimization (MDO) problems related to the supply chain domain. In the first MDO problem, a 4-objective optimization is executed and then the optimized attributes are aggregated in a single measure named <em>value</em>. In the second MDO problem instead, attributes are first aggregated in a value and then a bi- objective value-cost optimization is executed. Thus, two optimization strategies are investigated, but both lead to the value-cost Pareto-front investigation. 
The application case addressed in this research activity provides interesting insights for the value-driven optimization strategy to use for future-complex optimization problems involving design, manufacturing and supply chain domains.</p>\n\n<p>&nbsp;</p>", "author": [ { "family": "Donelli, G." }, { "family": "Ciampa, P.D." }, { "family": "Lefebvre, T." }, { "family": "Bartoli, N." }, { "family": "Mello, J." }, { "family": "Odaguil, F." }, { "family": "van der Laan, T." } ], "id": "6801130", "event-place": "Chicago", "type": "paper-conference", "event": "AIAA Aviation 2022" }
http://sci4um.com/about278-How-real-are-the---Virtual--partticles-.html
Page 1 of 16 [234 Posts] Author Message Anton Deitmar science forum beginner Joined: 25 Jul 2005 Posts: 24 Posted: Thu Jul 20, 2006 2:00 pm    Post subject: Re: A Combinatorics/Graph Theory Question It seems to me that p equals C(n-1,k)+r. The number of elements of Y to which a given x in X is not connected equals the number of k-element subsets of X which do not contain x, that is C(n-1,k). Hence, to make sure you pick at least r sets which do contain x, you must pick C(n-1,k)+r sets. Did I miss anything? Anton Philippe Flajolet science forum beginner Joined: 18 Jun 2006 Posts: 1 Posted: Sun Jun 18, 2006 10:45 am    Post subject: Re: Referee's query Quote: I am considering differential equations of the form: y'*x*(1+y*(d/dz){log(F_k(z))}_{z=y}) = y I hope I interpret the statement correctly, though I don't see the exact meaning of "IDENTICALLY complex functions" in the original specification. Also, the z,y notation business seems to me to be a bit obscuring the simplicity of the problem. Instead of a very specific F_k, substitute an arbitrary function F(y). Then the differential equation reads y'*x*(1+y*F'(y)/F(y)) = y, that is, y'/y + y'*F'(y)/F(y) = 1/x, which integrates to give log(y)+log(F(y)) = log(x), that is, y*F(y) = x. Thus, a generalized form of the original equation is plainly solved via inversion of an explicit function, no matter what the structure of F is. This is a type of equation encountered when analysing the Lambert function (y*exp(y)=x) and, in the variant form y=x*G(y), is related to tree enumerations in combinatorial analysis as well as to Lagrange inversion.
On another register, the particular class of functions F_k that you introduce seems to be composed of some sort of "towers of exponentials": depending on the way you specify F_0, they might be described as the class of functions defined as corresponding to terms expressed using the variable "z" [or better "y"?], the unary construction exp(.), the binary operation "*", and perhaps "+" and/or some other operations. There has been quite a bit of research in mathematical logic for deciding equality or dominance of functions of this sort [eg, exp(2*y)=exp(y)*exp(y), etc]. A starting point to this literature might be Some Applications of Nevanlinna Theory to Mathematical Logic: Identities of Exponential Functions. C. Ward Henson, Lee A. Rubel, Transactions of the American Mathematical Society, Vol. 282, No. 1 (Mar., 1984), pp. 1-32, and references therein. Philippe Swiatoslaw Gal science forum beginner Joined: 02 Jun 2006 Posts: 1 Posted: Fri Jun 02, 2006 12:38 pm    Post subject: Re: This Week's Finds in Mathematical Physics (Week 233) Quote: f: SL(2,R)/SL(2,Z) -> R^3 - {trefoil} That would be amazing really- because then we could compose with modular forms and maybe obtain something interesting! I would also definitely like the answer to this. Not really. In fact the isomorphism is a part of the modular theory: Looking for f: Gl(2,R)/Sl(2,Z)\to C^2-{x^2=y^3} (there is an obvious action of R_+ on both sides: M\to tM (M\in Gl(2,R)), x\to t^6 x, y\to t^4 y, and the quotient is what we want). Gl(2,R)/Sl(2,Z) is a space of lattices in C. Such a lattice L has classical invariants g_2(L) = 60 \sum_{z\in L'} z^{-4}, and g_3(L) = 140 \sum_{z\in L'} z^{-6}, where L'=L-{0}. The modular theory asserts that 1. For every pair (g_2,g_3) there exists a lattice L, such that g_2(L)=g_2, and g_3(L)=g_3, provided g_2^3 \neq 27 g_3^2. 2. Such a lattice is unique. Best, S. R.
Gal Joe Christy science forum beginner Joined: 01 Jun 2006 Posts: 2 Posted: Thu Jun 01, 2006 3:00 pm    Post subject: Re: This Week's Finds in Mathematical Physics (Week 233) Vis-a-vis John's note of 05/30/2006 03:06 PM: Quote: In article <1148262406.418804.233710@i39g2000cwa.googlegroups.com>, Daniel Moskovich wrote: And, R^3 minus the trefoil knot is secretly the same as SL(2,R)/SL(2,Z)! This is actually incredibly interesting for me- what is a reference for this? (I couldn't find it in either cited paper, and Gannon gives no source). As usual, I gave all the references I know. I too find this fact incredibly interesting. I first heard of it from Chris Hillman: http://www.lns.cornell.edu/spr/2002-04/msg0040885.html ... I wouldn't be surprised if this was known to Seifert in the 30's, though I can't lay my hands on Seifert & Threlfall at the moment to check. Likewise for Hirzebruch, Brieskorn, Pham & Milnor in the 60's in relation to singularities of complex hypersurfaces and exotic spheres. When I was learning topology in the 80's it was considered a warm up case of Thurston's Geometrization Program - the trefoil knot complement has PSL_2(R) geometric structure. In any case, peruse Milnor's Annals of Math Studies for concrete references. There is a (typically) elegant proof on p.84 of "Introduction to Algebraic K-theory" [study 72], which Milnor credits to Quillen. It contains the missing piece of John's argument: introducing the Weierstrass P-function and remarking that the differential equation that it satisfies gives the diffeomorphism to S^3-trefoil as the boundary of the pair (discriminant of diff-eq, C^2 = (P,P')-space). This point of view grows out of some observations of Zariski, fleshed out in "Singular Points of Complex Hypersurfaces" [study 61]. The geometric viewpoint is made explicit in the paper "On the Brieskorn Manifolds M(p,q,r)" in "Knots, Groups, and 3-manifolds" [study 84].
It is also related to the intermediate case between the classical Platonic solids and John's favorite Platonic surface - the Klein quartic http://www.math.ucr.edu/home/baez/klein.html. By way of a hint, look to relate the trefoil, qua torus knot, the seven-vertex triangulation of the torus, and the dual hexagonal tiling of a (flat) Clifford torus in S^3. Joe -- ============================= Joe Christy ============================== ------------------ http://xri.net/=joe.christy ------------------ == If I can save you any time, give it to me, I'll keep it with mine. == Bruce Ikenaga science forum beginner Joined: 04 Nov 2005 Posts: 3 Posted: Thu Jun 01, 2006 6:05 am    Post subject: Re: This Week's Finds in Mathematical Physics (Week 233) On Tue, 30 May 2006 22:06:24 +0000, John Baez wrote: Quote: In article <1148262406.418804.233710@i39g2000cwa.googlegroups.com>, Daniel Moskovich wrote: And, R^3 minus the trefoil knot is secretly the same as SL(2,R)/SL(2,Z)! This is actually incredibly interesting for me- what is a reference for this? (I couldn't find it in either cited paper, and Gannon gives no source). As usual, I gave all the references I know. I too find this fact incredibly interesting. I first heard of it from Chris Hillman: http://www.lns.cornell.edu/spr/2002-04/msg0040885.html I feel I can *almost* prove it, but not quite. SL(2,R)/SL(2,Z) is the space of unit-area lattices in the plane.
If we take the hexagonal lattice * * * * * * * * * * * and gradually rotate it 60 degrees, we get back to the same lattice. So, we have traced out a certain loop A in SL(2,R)/SL(2,Z). If we take the square lattice: * * * * * * * * * * * * and rotate it 90 degrees, we get another loop B. I believe that these define elements of pi_1(SL(2,R)/SL(2,Z)) satisfying A^3 = B^2. This is the usual presentation for the fundamental group of the complement of the trefoil knot: http://en.wikipedia.org/wiki/Trefoil_knot I think the loop A corresponds to going around a "meridian" and B corresponds to going around a "longitude" - or maybe vice versa, since I can never remember the difference between the "meridian" and the "longitude" of a knot. But, there should be some more direct way to see what's going on! Since the Wikipedia article gives an analytic formula for the trefoil knot, maybe someone come up with an analytic formula for a diffeomorphism f: SL(2,R)/SL(2,Z) -> R^3 - {trefoil} Help, anyone? Quillen's proof is on pages 84-85 of Milnor's "Introduction to Algebraic K-Theory". Bruce Ikenaga John Baez science forum Guru Wannabe Joined: 01 May 2005 Posts: 220 Posted: Tue May 30, 2006 10:06 pm    Post subject: Re: This Week's Finds in Mathematical Physics (Week 233) Daniel Moskovich <dmoskovich@gmail.com> wrote: Quote: And, R^3 minus the trefoil knot is secretly the same as SL(2,R)/SL(2,Z)! This is actually incredibly interesting for me- what is a reference for this? (I couldn't find it in either cited paper, and Gannon gives no source). As usual, I gave all the references I know. I too find this fact incredibly interesting. I first heard of it from Chris Hillman: http://www.lns.cornell.edu/spr/2002-04/msg0040885.html I feel I can *almost* prove it, but not quite. SL(2,R)/SL(2,Z) is the space of unit-area lattices in the plane. If we take the hexagonal lattice * * * * * * * * * * * and gradually rotate it 60 degrees, we get back to the same lattice. 
So, we have traced out a certain loop A in SL(2,R)/SL(2,Z). If we take the square lattice: * * * * * * * * * * * * and rotate it 90 degrees, we get another loop B. I believe that these define elements of pi_1(SL(2,R)/SL(2,Z)) satisfying A^3 = B^2. This is the usual presentation for the fundamental group of the complement of the trefoil knot: http://en.wikipedia.org/wiki/Trefoil_knot I think the loop A corresponds to going around a "meridian" and B corresponds to going around a "longitude" - or maybe vice versa, since I can never remember the difference between the "meridian" and the "longitude" of a knot. But, there should be some more direct way to see what's going on! Since the Wikipedia article gives an analytic formula for the trefoil knot, maybe someone come up with an analytic formula for a diffeomorphism f: SL(2,R)/SL(2,Z) -> R^3 - {trefoil} Help, anyone? Matt Heath science forum beginner Joined: 04 Apr 2006 Posts: 3 Posted: Fri May 05, 2006 2:21 pm    Post subject: Re: Absolutely continuous functions on the circle and disc algebra function You are quite right. A false argument lead me to think it was the same as case with f anti-analytic, which is really what I am trying to solve. Bernice Barnett science forum beginner Joined: 08 May 2005 Posts: 5 Posted: Wed Apr 12, 2006 12:30 pm    Post subject: Re: A conditional random number generation problem (please help me!) Giovanni Resta wrote: Quote: rjmachado3 wrote: I need to know the formula for the random function that return random numbers in a range of a and b integers [a,b] but that obey on a custom probability (possibly different!) for each integer number on this [a,b] range (of course the sum of all integer number probabilities are = 1!). Finally, what i want is the general function formula that simulate the random behavior (based on a custom probability value for each integer number between the [a,b] range. confuse? i hope not! please help me!!!! 
what i know so far is that the function formula for generating a "pure" random number between [a,b] range is: rand()*(b-a)+a where rand() return a random number between 0 and 1. I'm not completely sure to have correctly understood you question. Anyway... Here is a very naive apprach that can work, at least if the interval is small. Maybe if the interval is large one can think about something more efficient. I make an example with 4 values, each with a custom prob., that can be easily generalized. Let the integer values be a,b,c,d (they do not need to be consecutive numbers) and let p_a, p_b, p_c, p_d the probability to extract respectively a,b,c, and d. Clearly p_a+p_b+p_c+p_d = 1. First you build these constants: P_a = p_a P_b = p_a+p_b P_c = p_a+p_b+p_c then you extract a random number U between 0 and 1, and if U <= P_a you select a, else if U <= P_b you select b else if U <= P_c you select c else you select d. I hope this example helps you giovanni There is an improvement if the range is large: Make a table of the distribution (your P_x above). Then generate a uniform random as above. Finally, do a binary search on the table - complexity O(log(n)) where n is the number of table entries. I think this is the best you can do for distributions where you don't have an inverse of the distribution function. -- Jeff Barnett Giovanni Resta science forum beginner Joined: 11 Apr 2006 Posts: 1 Posted: Tue Apr 11, 2006 4:30 pm    Post subject: Re: A conditional random number generation problem (please help me!) Quote: I need to know the formula for the random function that return random numbers in a range of a and b integers [a,b] but that obey on a custom probability (possibly different!) for each integer number on this [a,b] range (of course the sum of all integer number probabilities are = 1!). Finally, what i want is the general function formula that simulate the random behavior (based on a custom probability value for each integer number between the [a,b] range. 
confuse? i hope not! please help me!!!! what i know so far is that the function formula for generating a "pure" random number between [a,b] range is: rand()*(b-a)+a where rand() return a random number between 0 and 1. I'm not completely sure to have correctly understood you question. Anyway... Here is a very naive apprach that can work, at least if the interval is small. Maybe if the interval is large one can think about something more efficient. I make an example with 4 values, each with a custom prob., that can be easily generalized. Let the integer values be a,b,c,d (they do not need to be consecutive numbers) and let p_a, p_b, p_c, p_d the probability to extract respectively a,b,c, and d. Clearly p_a+p_b+p_c+p_d = 1. First you build these constants: P_a = p_a P_b = p_a+p_b P_c = p_a+p_b+p_c then you extract a random number U between 0 and 1, and if U <= P_a you select a, else if U <= P_b you select b else if U <= P_c you select c else you select d. I hope this example helps you giovanni Matt Heath science forum beginner Joined: 04 Apr 2006 Posts: 3 Posted: Tue Apr 04, 2006 4:57 pm    Post subject: Re: Results for: 'From Commutative Algebra to Functional Analysis' For semi-simple algebras there is a theorem of Johnson that a Banach algebra norm is unique - and hence that being a semi-simple Banach algebra is an algebraic property. DariushA science forum beginner Joined: 15 Mar 2006 Posts: 5 Posted: Sat Mar 18, 2006 11:15 am    Post subject: Re: Subset Vector Sum I think the algorithms Victor directed me to will keep me happily busy for a while. I will report any possible progress. Much Regards, Dariush. "Victor S. Miller" <victor@algebraic.org> wrote in message news:dveofd$2s6$1@dizzy.math.ohio-state.edu... Quote: "Gerhard" == Woeginger Gerhard This is a 2-dimensional variant of SUBSET-SUM (Given n Gerhard> integers a_1,...,a_n and a goal-value b, does there exists a Gerhard> subset of the a_i that adds up to b?). Gerhard> SUBSET-SUM is NP-hard. 
The special case of SUBSET-SUM with Gerhard> b=0 is also NP-hard. Gerhard> So you should not expect a fast solution algorithm for your Gerhard> 2-dimensional generalization. On the contrary -- just because the general problem is NP complete doesn't mean that your specific instance might not be solved quickly. This is a special case of the integer relation algorithm. For example, look at the following page for a good overview. The Ferguson-Forcade, or PSLQ algorithm might be a good one to use. You can also set this up for lattice reduction and use the LLL algorithm. Victor http://mathworld.wolfram.com/IntegerRelation.html tchow@lsa.umich.edu Joined: 15 Sep 2005 Posts: 53 Posted: Tue Feb 14, 2006 1:18 am    Post subject: Re: This Week's Finds in Mathematical Physics (Week 226) In article <dsn2cd$som$1@glue.ucr.edu>, John Baez <baez@galaxy.ucr.edu> wrote: Quote: Alexander A. Razborov and Steven Rudich, Natural proofs, in Journal of Computer and System Sciences, Vol. 55, No 1, 1997, pages 24-35. Available at http://www-2.cs.cmu.edu/~rudich/papers/natural.ps and http://genesis.mi.ras.ru/~razborov/int.ps Aaronson says it was written in 1993 even though the date of publication and the date on the paper itself (1996 or 1999, depending on which copy you look at) are later. I believe him; you have to be careful when using the /today command in LaTeX, since if you LaTeX the same paper 6 years later, you'll get a new date. It probably has less to do with the "today" command than with the fact that papers in computer science tend to exist in more versions than in, say, math. The two "official" versions of the Razborov-Rudich paper are STOC 1994 (conference version) and JCCS 1997 (journal version). However, I wouldn't be surprised if it circulated in preprint form in 1993, and if the authors continued to revise the paper after the "final" journal version, since this isn't uncommon practice. -- Tim Chow tchow-at-alum-dot-mit-dot-edu The range of our projectiles---even ... 
the artillery---however great, will never exceed four of those miles of which as many thousand separate us from the center of the earth. ---Galileo, Dialogues Concerning Two New Sciences baez@galaxy.ucr.edu Joined: 21 Oct 2005 Posts: 53 Posted: Mon Feb 13, 2006 8:08 pm    Post subject: Re: This Week's Finds in Mathematical Physics (Week 226) In article <Pine.LNX.4.61.0602102025360.19201@zeno1.math.washington.edu>, <tessel@um.bot> wrote: Quote: On Sat, 11 Feb 2006, John Baez mentioned that md5sum was "broken" about a year ago. I just wanted to add: 1. If I am not mistaken, sha-1 and md5sum are different algorithms (IIRC, both are known to be insecure). Yeah. Here's a nice review of the situation: Arjen K. Lenstra, Further progress in hashing cryptanalysis, February 26, 2005, http://cm.bell-labs.com/who/akl/hash.pdf Quote: These are huge and wonderful philosophico-physico-mathematical questions with serious practical implications. You mean the Weyl curvature hypothesis? :-/ Heh, no - I mean stuff like whether there's such a thing as a provably good cryptographic hash code function, or cipher. Quote: Joel Spencer, The Strange Logic of Random Graphs, Springer 2001 Here's a thought: "Everyone knows" that if on day D, mathematician M is studying an example of size S in class C, he is more likely to be studying a "secretly special" representative R than a generic representative G of size S. Why? Because the secretly special reps show up in disguise in other areas, and M was probably hacking through the jungle from one of those places when he got lost and ate a poisoned cache. Interesting. Here's some more stuff, from my email correspondence. I wrote: Quote: Allan Erskine wrote: I enjoyed week 226! Algorithmic complexity was the area I studied in... Your readers might find "The Tale of One-way Functions" by Leonid Levin an enjoyable read: http://arxiv.org/abs/cs.CR/0012023 Hey, that's great! I'm printing it out now... 
Levin and I have argued against Greg Kuperberg and others on sci.physics.research: we tend to think that quantum computers are infeasible *in principle*. As for your "shortest proof of this statement has n lines" question, you may have noticed that Chaitin asks a very similar question about the shortest proofs that a LISP program is "elegant" (most short) and proves a strong incompleteness result with an actual 410 + n character LISP program! Crazy... http://www.cs.auckland.ac.nz/CDMTCS/chaitin/lisp.html Yes. You might like the following related article below. Best, jb It's an old one... From: b...@math.ucr.edu (john baez) Subject: Re: compression, complexity, and the universe Date: 1997/11/20 Message-ID: <652c5t$62g$1@agate.berkeley.edu>#1/1 X-Deja-AN: 291100089 References: <64nsqo$8rg$1@agate.berkeley.edu> <346fd86d.1059260@news.demon.co.uk> <64t2ar$qcs@charity.ucr.edu> Originator: bunn@pac2 Organization: University of California, Riverside Newsgroups: sci.physics.research,comp.compression.research In article <651lm1$q3...@agate.berkeley.edu>, Aaron Bergman <aaron.berg...@yale.edu> wrote: Quote: The smallest number not expressable in under ten words Hah! This, by the way, is the key to that puzzle I laid out: prove that there's a constant K such that no bitstring can be proved to have algorithmic entropy greater than K. I won't give away the answer to the puzzle; anyone who gets stuck can find the answer in Peter Gacs' nice survey, "Lecture notes on descriptional complexity and randomness", available at http://www.cs.bu.edu/faculty/gacs/ In my more rhapsodic moments, I like to think of K as the "complexity barrier". The world *seems* to be full of incredibly complicated structures --- but the constant K sets a limit on our ability to *prove* this. Given any string of bits, we can't rule out the possibility that there's some clever way of printing it out using a computer program less than K bits long. 
The Encyclopedia Britannica, the human genome, the detailed atom-by-atom recipe for constructing a blue whale, or for that matter the entire solar system --- we are unable to prove that a computer program less than K bits long couldn't print these out. So we can't firmly rule out the reductionistic dream that the whole universe evolved mechanistically starting from a small "seed", a bitstring less than K bits long. (Maybe it did!) So this raises the question, how big is K? It depends on one's axioms for mathematics. Recall that the algorithmic entropy of a bitstring is defined as the length of the shortest program that prints it out. For any finite consistent first-order axiom system A extending the usual axioms of arithmetic, let K(A) be the constant such that no bitstring can be proved, using A, to have algorithmic entropy greater than K(A). We can't compute K(A) exactly, but there's a simple upper bound for it. As Gacs explains, for some constant c we have: K(A) < L(A) + 2 log_2 L(A) + c where L(A) denotes the length of the axiom system A, encoded as bits as efficiently as possible. I believe the constant c is computable, though of course it depends on details like what universal Turing machine you're using as your computer. What I want to know is, how big in practice is this upper bound on K(A)? I think it's not very big! The main problem is to work out a bound on c.
baez@galaxy.ucr.edu Joined: 21 Oct 2005 Posts: 53 Posted: Sun Feb 12, 2006 9:59 pm    Post subject: Re: This Week's Finds in Mathematical Physics (Week 226) Here are some corrections and clarifications, mostly thanks to a friend who usually prefers to remain anonymous: In article <dsirq1$pjg$1@glue.ucr.edu>, John Baez <baez@math.removethis.ucr.andthis.edu> wrote: Quote: MD5 is a popular hash function invented by Ron Rivest in 1991.
This is what it says in Wikipedia: http://en.wikipedia.org/wiki/MD5 with a big picture of Rivest right on top of the article, but my friend says "I think it's usually credited to a small set of coinventors, and I think Ralph Merkle is a coinventor either of MD5 or one of its immediate ancestors." Quote: People use it for checking the integrity of files: first you compute the digest of a file, and then, when you send the file to someone, you also send the digest. If they're worried that the file has been corrupted or tampered with, they compute its digest and compare it to what you sent them. Of course, if deliberate tampering is what you fear, you have to send the digest by a different channel than the original file, or use some other trick. Quote: But if you prove that P *does* equal NP, you might make more money by breaking cryptographic hash codes and setting yourself up as the Napoleon of crime. Or, you could make lots of money by solving problems that nobody else can solve. This could be a more sustainable lifestyle... but I wanted to work in a reference to that Sherlock Holmes quote, to play off against the von Neumann quote. Quote: We can define a "random sequence" to be one that no algorithm can guess with a success rate better than chance would dictate. Here "generate" would be clearer than "guess", since I was trying to allude to the usual notion of randomness from algorithmic information theory (in an informal sort of way). I don't know if there are sequences that no algorithm can generate with a success rate beating chance, but where an algorithm can do well guessing the (n+1)st digit after having seen the first n. Does anyone know? Quote: Chaitin has given a marvelous definition of a particular random sequence of bits called Omega using the fact that no algorithm can decide which Turing machines halt... 
but this random sequence is uncomputable, so you can't really "exhibit" it: On the other hand, Wolfgang Brand points out this paper: http://www.cs.auckland.ac.nz/~cristian/Calude361_370.pdf where the first 64 bits of Omega have been computed. (There's no contradiction, as the paper explains.) Quote: Then Aaronson gets to the heart of the subject: a history of the P vs. NP question. This leads up to the amazing 1993 paper of Razborov and Rudich, which I'll now summarize. Here's the paper: Alexander A. Razborov and Steven Rudich, Natural proofs, in Journal of Computer and System Sciences, Vol. 55, No 1, 1997, pages 24-35. Available at http://www-2.cs.cmu.edu/~rudich/papers/natural.ps and http://genesis.mi.ras.ru/~razborov/int.ps Aaronson says it was written in 1993 even though the date of publication and the date on the paper itself (1996 or 1999, depending on which copy you look at) are later. I believe him; you have to be careful when using the /today command in LaTeX, since if you LaTeX the same paper 6 years later, you'll get a new date. Quote: The P versus NP question can be formulated as a question about the size of Boolean circuits - but Razborov and Rudich show that, under certain assumptions, there is no "natural" proof that P is not equal to NP. What are these assumptions? They concern the existence of good pseudorandom number generators. However, the existence of these pseudorandom number generators would follow from P = NP! So, if P = NP is true, it has no natural proof. Aargh! In both cases here when I wrote P = NP, I meant P *not* equal to NP. 
https://philpeople.org/profiles/alfredo-fernandez-1/publications?app=174&order=viewings
##### On axiom schemes for T-provably $${\Delta_1}$$ formulas
with A. Cordón-Franco and F. F. Lara-Martín
Archive for Mathematical Logic 53 (3-4): 327-349. 2014.
This paper investigates the status of the fragments of Peano Arithmetic obtained by restricting induction, collection and least number axiom schemes to formulas which are $${\Delta_1}$$ provably in an arithmetic theory T. In particular, we determine the pr…
https://walkccc.github.io/CLRS/Chap22/22.2/
## 22.2-1

Show the $d$ and $\pi$ values that result from running breadth-first search on the directed graph of Figure 22.2(a), using vertex $3$ as the source.

$$\begin{array}{c|cccccc} \text{vertex} & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline d & \infty & 3 & 0 & 2 & 1 & 1 \\ \pi & \text{NIL} & 4 & \text{NIL} & 5 & 3 & 3 \end{array}$$

## 22.2-2

Show the $d$ and $\pi$ values that result from running breadth-first search on the undirected graph of Figure 22.3, using vertex $u$ as the source.

$$\begin{array}{c|cccccccc} \text{vertex} & r & s & t & u & v & w & x & y \\ \hline d & 4 & 3 & 1 & 0 & 5 & 2 & 1 & 1 \\ \pi & s & w & u & \text{NIL} & r & t & u & u \end{array}$$

## 22.2-3

Show that using a single bit to store each vertex color suffices by arguing that the $\text{BFS}$ procedure would produce the same result if lines 5 and 14 were removed.

$\textit{Note:}$ This exercise changed in the third printing. This solution reflects the change.

The $\text{BFS}$ procedure cares only whether a vertex is white or not. A vertex $v$ must become non-white at the same time that $v.d$ is assigned a finite value, so that we do not attempt to assign to $v.d$ again, and so we need to change vertex colors in lines 5 and 14. Once we have changed a vertex's color to non-white, we do not need to change it again.

## 22.2-4

What is the running time of $\text{BFS}$ if we represent its input graph by an adjacency matrix and modify the algorithm to handle this form of input?

The time of iterating all edges becomes $O(V^2)$ from $O(E)$. Therefore, the running time is $O(V + V^2) = O(V^2)$.

## 22.2-5

Argue that in a breadth-first search, the value $u.d$ assigned to a vertex $u$ is independent of the order in which the vertices appear in each adjacency list. Using Figure 22.3 as an example, show that the breadth-first tree computed by $\text{BFS}$ can depend on the ordering within adjacency lists.
The correctness proof for the $\text{BFS}$ algorithm shows that $u.d = \delta(s, u)$, and the algorithm doesn't assume that the adjacency lists are in any particular order. In Figure 22.3, if $t$ precedes $x$ in $Adj[w]$, we can get the breadth-first tree shown in the figure. But if $x$ precedes $t$ in $Adj[w]$ and $u$ precedes $y$ in $Adj[x]$, we can get edge $(x, u)$ in the breadth-first tree.

## 22.2-6

Give an example of a directed graph $G = (V, E)$, a source vertex $s \in V$, and a set of tree edges $E_\pi \subseteq E$ such that for each vertex $v \in V$, the unique simple path in the graph $(V, E_\pi)$ from $s$ to $v$ is a shortest path in $G$, yet the set of edges $E_\pi$ cannot be produced by running $\text{BFS}$ on $G$, no matter how the vertices are ordered in each adjacency list.

The edges in $E_\pi$ are shaded in the following graph:

To see that $E_\pi$ cannot be a breadth-first tree, let's suppose that $Adj[s]$ contains $u$ before $v$. $\text{BFS}$ adds edges $(s, u)$ and $(s, v)$ to the breadth-first tree. Since $u$ is enqueued before $v$, $\text{BFS}$ then adds edges $(u, w)$ and $(u, x)$. (The order of $w$ and $x$ in $Adj[u]$ doesn't matter.) Symmetrically, if $Adj[s]$ contains $v$ before $u$, then $\text{BFS}$ adds edges $(s, v)$ and $(s, u)$ to the breadth-first tree, $v$ is enqueued before $u$, and $\text{BFS}$ adds edges $(v, w)$ and $(v, x)$. (Again, the order of $w$ and $x$ in $Adj[v]$ doesn't matter.) $\text{BFS}$ will never put both edges $(u, w)$ and $(v, x)$ into the breadth-first tree. In fact, it will also never put both edges $(u, x)$ and $(v, w)$ into the breadth-first tree.

## 22.2-7

There are two types of professional wrestlers: "babyfaces" ("good guys") and "heels" ("bad guys"). Between any pair of professional wrestlers, there may or may not be a rivalry. Suppose we have $n$ professional wrestlers and we have a list of $r$ pairs of wrestlers for which there are rivalries.
Give an $O(n + r)$-time algorithm that determines whether it is possible to designate some of the wrestlers as babyfaces and the remainder as heels such that each rivalry is between a babyface and a heel. If it is possible to perform such a designation, your algorithm should produce it.

Create a graph $G$ where each vertex represents a wrestler and each edge represents a rivalry. The graph will contain $n$ vertices and $r$ edges. Perform as many $\text{BFS}$'s as needed to visit all vertices. Assign all wrestlers whose distance is even to be babyfaces and all wrestlers whose distance is odd to be heels. Then check each edge to verify that it goes between a babyface and a heel. This solution would take $O(n + r)$ time for the $\text{BFS}$, $O(n)$ time to designate each wrestler as a babyface or heel, and $O(r)$ time to check edges, which is $O(n + r)$ time overall.

## 22.2-8 $\star$

The diameter of a tree $T = (V, E)$ is defined as $\max_{u,v \in V} \delta(u, v)$, that is, the largest of all shortest-path distances in the tree. Give an efficient algorithm to compute the diameter of a tree, and analyze the running time of your algorithm.

Suppose that $a$ and $b$ are the endpoints of the path in the tree which achieves the diameter, and without loss of generality assume that $a$ and $b$ are the unique pair which do so. Let $s$ be any vertex in $T$. We claim that a single $\text{BFS}$ will return either $a$ or $b$ (or both) as the vertex whose distance from $s$ is greatest. To see this, suppose to the contrary that some other vertex $x$ is shown to be furthest from $s$. (Note that $x$ cannot be on the path from $a$ to $b$, otherwise we could extend it.) Then we have

$$d(s, a) < d(s, x)$$ and $$d(s, b) < d(s, x).$$

Let $c$ denote the vertex on the path from $a$ to $b$ which minimizes $d(s, c)$. Since the graph is in fact a tree, we must have

$$d(s, a) = d(s, c) + d(c, a)$$ and $$d(s, b) = d(s, c) + d(c, b).$$

(If there were another path, we could form a cycle.)
Using the triangle inequality together with the equalities and inequalities above, we must have

$$\begin{aligned} d(a, b) + 2d(s, c) & = d(s, c) + d(c, b) + d(s, c) + d(c, a) \\ & < d(s, x) + d(s, c) + d(c, b). \end{aligned}$$

I claim that $d(x, b) = d(s, x) + d(s, b)$. If not, then by the triangle inequality we must have a strict less-than. In other words, there is some path from $x$ to $b$ which does not go through $c$. This gives the contradiction, because it implies there is a cycle formed by concatenating these paths. Then we have

$$d(a, b) < d(a, b) + 2d(s, c) < d(x, b).$$

Since it is assumed that $d(a, b)$ is maximal among all pairs, we have a contradiction. Therefore, since trees have $|V| - 1$ edges, we can run $\text{BFS}$ a single time in $O(V)$ to obtain one of the vertices which is an endpoint of the longest simple path contained in the graph. Running $\text{BFS}$ again will show us where the other one is, so we can solve the diameter problem for trees in $O(V)$.

## 22.2-9

Let $G = (V, E)$ be a connected, undirected graph. Give an $O(V + E)$-time algorithm to compute a path in $G$ that traverses each edge in $E$ exactly once in each direction. Describe how you can find your way out of a maze if you are given a large supply of pennies.

First, the algorithm computes a spanning tree of the graph. Note that this can be done using the procedures of Chapter 23. It can also be done by performing a breadth-first search, and restricting to the edges between $v$ and $v.\pi$ for every $v$. To aid in not double-counting edges, fix any ordering $\le$ on the vertices beforehand. Then, we will construct the sequence of steps by calling $\text{MAKE-PATH}(s)$, where $s$ was the root used for the $\text{BFS}$.

    MAKE-PATH(u)
        for each v ∈ Adj[u] but not in the tree such that u ≤ v
            go to v and back to u
        for each v ∈ Adj[u] in the tree but not equal to u.π
            go to v
            perform the path prescribed by MAKE-PATH(v)
        go to u.π
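As a cross-check on the exercises above, here is a sketch of BFS computing the $d$ and $\pi$ values, together with the even/odd-level designation from Exercise 22.2-7. The adjacency-dict representation and vertex names are illustrative, not from CLRS.

```python
from collections import deque

def bfs(adj, s):
    """Return (d, pi): BFS distances and predecessors from source s.
    Vertices absent from d are unreachable (d = infinity in the text)."""
    d, pi = {s: 0}, {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:          # a "white" vertex: first discovery
                d[v] = d[u] + 1
                pi[v] = u
                q.append(v)
    return d, pi

def babyface_heel(adj):
    """Exercise 22.2-7 sketch: 2-color wrestlers by BFS-level parity,
    then verify every rivalry crosses sides; return None if impossible."""
    side = {}
    for s in adj:                   # one BFS per connected component
        if s not in side:
            d, _ = bfs(adj, s)
            for v, dist in d.items():
                side[v] = dist % 2
    for u in adj:                   # each undirected edge is seen twice
        for v in adj[u]:
            if side[u] == side[v]:
                return None
    return side
```

This is exactly the $O(n + r)$ scheme in the solution: the BFS's cost $O(n + r)$, the parity assignment $O(n)$, and the edge check $O(r)$.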
https://math.tecnico.ulisboa.pt/seminars/cam/?action=show&id=4656
# Analysis, Geometry, and Dynamical Systems Seminar ### A random particle system and nonentropy solutions of the Burgers equation on the circle We consider a particle system which is equivalent to a process valued on the space of nonentropy solutions of the inviscid Burgers equation. Such solutions are conjectured to be relevant for the study of the KPZ fixed point. We prove ergodicity and obtain some properties of the stationary measure. Joint work with C.-E. Bréhier (Lyon) and M. Mariani (Rome).
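For reference (this form is standard and not quoted from the abstract), the inviscid Burgers equation on the circle is the scalar conservation law

```latex
u_t + \partial_x\!\left(\frac{u^2}{2}\right) = 0,
\qquad u = u(t, x), \quad x \in \mathbb{T} = \mathbb{R}/\mathbb{Z}.
```

Weak solutions of this equation are non-unique; the entropy condition is the usual criterion that singles one out, and "nonentropy solutions" are weak solutions that need not satisfy it.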
https://astarmathsandphysics.com/ib-physics-notes/astrophysics/4404-the-boundary-of-space.html
## The Boundary of Space

The boundary of space is not a fixed idea. As altitude increases, the atmosphere gets thinner and thinner, but we do not define space as starting at the height at which there is no atmosphere. In fact, the height at which space starts is drawn from aeronautics. As a plane flies higher, its speed must increase so that the plane can be controlled, since a thinning atmosphere results in less air flowing over the control surfaces. At a certain altitude the required speed exceeds the theoretical orbital speed of a satellite at that altitude, ignoring air resistance. This height, called the 'Karman line', is taken by most scientists to be the height at which space begins, and is calculated to be about 100 km. Any object reaching an altitude of 100 km will not in fact orbit the Earth: air resistance is still significant at this altitude, and any object orbiting the Earth at this altitude will soon fall back to Earth.
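The crossover can be illustrated numerically: the circular orbital speed at altitude h is v = sqrt(GM / (R + h)), which comes to roughly 7.8 km/s at 100 km. The sketch below uses standard rounded values for Earth's gravitational parameter and mean radius; the figures are approximations, not taken from the text above.

```python
import math

GM = 3.986e14       # m^3/s^2, Earth's standard gravitational parameter (approx.)
R_EARTH = 6.371e6   # m, mean Earth radius (approx.)
h = 100e3           # m, the Karman line altitude

# Circular orbital speed at radius R + h, ignoring air resistance.
v_orbit = math.sqrt(GM / (R_EARTH + h))
print(f"circular orbital speed at 100 km: {v_orbit / 1000:.1f} km/s")
```

A plane at this altitude would need roughly this speed just to keep air flowing over its control surfaces, which is why aerodynamic flight stops making sense there.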
https://zbmath.org/?q=an:1197.45004
# zbMATH — the first resource for mathematics

The solvability and explicit solutions of two integral equations via generalized convolutions. (English) Zbl 1197.45004

Authors' abstract: This paper presents the necessary and sufficient conditions for the solvability of two integral equations of convolution type; the first equation generalizes from integral equations with the Gaussian kernel, and the second one contains the Toeplitz plus Hankel kernels. Furthermore, the paper shows that the normed rings on $$L^1(\mathbb R^d)$$ are constructed by using the obtained convolutions, and an arbitrary Hermite function and appropriate linear combination of those functions are the weight-function of four generalized convolutions associating $$F$$ and $$\check F$$. The open question about Hermitian weight-function of generalized convolution is posed at the end of the paper.

##### MSC:

45E10 Integral equations of the convolution type (Abel, Picard, Toeplitz and Wiener-Hopf type)
44A35 Convolution as an integral transform
46H05 General theory of topological algebras
https://math.stackexchange.com/questions/17816/what-is-the-geometric-interpretation-behind-the-method-of-exact-differential-equ
What is the geometric interpretation behind the method of exact differential equations? Given an equation in the form $M(x)dx + N(y)dy = 0$, we test that the partial derivative of $M$ with respect to $y$ is equal to the partial derivative of $N$ with respect to $x$. If they are equal, then the equation is exact. What is the geometric interpretation of this? Furthermore, to solve the equation we may integrate $M(x) dx$ or $N(y)dy$, whichever we like better, and then add a constant as a function of the other variable. e.g. If $f(x) = 3x^2$ then $F(x) = x^3 + g(y)$. After we have our integral, we set its partial derivative with respect to the other variable equal to our other given function and solve for $g(y)$. I have done the entire homework assignment correctly, but I have no clue why I am doing these steps. What is the geometric interpretation behind this method, and how does it work? Great question. The idea is that $(M(x), N(y))$ defines a vector field, and the condition you're checking is equivalent (on $\mathbb{R}^2$) to the vector field being conservative, i.e. being the gradient of some scalar function $p$ called the potential. Common physical examples of conservative vector fields include gravitational and electric fields, where $p$ is the gravitational or electric potential. The differential equation $M(x) \, dx + N(y) \, dy = 0$ is then equivalent to the condition that $p$ is a constant, and since this is not a differential equation it is a much easier condition to work with. The analogous one-variable statement is that $M(x) \, dx = 0$ is equivalent to $\int M(x) \, dx = \text{const}$. Geometrically, the solutions to $M(x) \, dx + N(y) \, dy = 0$ are therefore the level curves of the potential, which are always orthogonal to its gradient. The most well-known example of this is probably the diagram of the electric field and the level curves of the electrostatic potential around a dipole.
This is one way to interpret the expression $M(x) \, dx + N(y) \, dy = 0$; it is precisely equivalent to the "dot product" of $(M(x), N(y))$ and $(dx, dy)$ being zero, where you should think of $(dx, dy)$ as being an infinitesimal displacement along a level curve.
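The answer's claim can be sanity-checked numerically. Taking $M(x) = 3x^2$ from the question and a made-up $N(y) = 2y$ purely for illustration, the potential is $p(x, y) = x^3 + y^2$, and a finite-difference gradient recovers the vector field $(M, N)$ (a sketch; the test point is arbitrary):

```python
def potential(x, y):
    # Integrate M(x) = 3x^2 in x, then fix the "constant" g(y) so that
    # dp/dy matches N(y) = 2y: the potential is p(x, y) = x^3 + y^2.
    return x**3 + y**2

def grad(f, x, y, h=1e-6):
    # Central finite differences approximate the gradient of f at (x, y).
    dfx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    dfy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return dfx, dfy

# The gradient of the potential is (M(x), N(y)), so the solution curves
# x^3 + y^2 = C are level curves orthogonal to the field, as described.
dpx, dpy = grad(potential, 2.0, 3.0)
assert abs(dpx - 3 * 2.0**2) < 1e-4 and abs(dpy - 2 * 3.0) < 1e-4
```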
https://dukespace.lib.duke.edu/dspace/browse?type=subject&value=KNOTS
Now showing items 1-3 of 3 • A slicing obstruction from the $\frac{10}{8}$ theorem (Proceedings of the American Mathematical Society, 2016-08-29) © 2016 American Mathematical Society. From Furuta's 10/8 theorem, we derive a smooth slicing obstruction for knots in $S^3$ using a spin 4-manifold whose boundary is 0-surgery on a knot. We show that this obstruction is able ... • Seifert surfaces distinguished by sutured Floer homology but not its Euler characteristic (Topology and its Applications, 2015-04) © 2015 Elsevier B.V. In this paper we find a family of knots with trivial Alexander polynomial, and construct two non-isotopic Seifert surfaces for each member in our family. In order to distinguish the surfaces we study ... • The cardinality of the augmentation category of a Legendrian link (Mathematical Research Letters, 2017) We introduce a notion of cardinality for the augmentation category associated to a Legendrian knot or link in standard contact $R^3$. This "ℓ-homotopy cardinality" is an invariant of the category and allows for a weighted count ...
http://mathoverflow.net/questions/45646/trace-vs-state-in-von-neumann-algebras/45791
# trace vs state in von Neumann algebras

The question may not be appropriate for the title, since I do not know how to name it. I apologize. Let $M$ be a finite $II_1$ factor and $\tau$ its canonical trace. Let $p, q$ be two projections in $M$. If $\tau(p+q)>1$, we know that there exists a nonzero projection $r$ such that $r < p$ and $r < q$ ($r=p\wedge q$, for example). If we are given an arbitrary state, but not a trace, is the statement also true? - Since there is a counterexample if you replace the II$_1$ factor by $M_2$, there is also a counterexample in any II$_1$ factor. –  Makoto Yamashita Nov 11 '10 at 4:55 I think that still requires an argument (not a very difficult one, though), because it is not immediately obvious (to me, at least) that two projections with zero intersection in $M_2(\mathbb{C})$ will have zero intersection when viewed in a II$_1$ factor. –  Martin Argerami Nov 11 '10 at 22:48 Thanks for your answers. I am still not clear about that... –  Paul Z Nov 12 '10 at 3:27 No problem. I wrote most of the details in the answer below. –  Martin Argerami Nov 12 '10 at 5:25 Thank you, Martin –  Paul Z Nov 13 '10 at 5:44

First, let us do the $M_2(\mathbb{C})$ case. Let $t\in(0,1)$, and define $p=\begin{bmatrix}1&0\\0&0\end{bmatrix},\ \ \ q=\begin{bmatrix}t&\sqrt{t-t^2}\\ \sqrt{t-t^2}&1-t\end{bmatrix}.$ Note that $p\wedge q=0$, since their ranges are two distinct lines through the origin. Define a (faithful) state $\varphi$ by $\varphi\left(\begin{bmatrix}a&b\\ c&d\end{bmatrix}\right)=\frac{2a+d}3$ (any convex combination $ra+sd$ with $r>s$ will do). Now $\varphi\left(p+q\right)=\frac{2(1+t)+1-t}3=1+\frac{t}3>1.$ So that's the counterexample in $M_2(\mathbb{C})$. If now $M$ is a II$_1$ factor, we can use the same idea in the following way: let $p$ be any projection of trace 1/2. Then $p\sim(1-p)$ and there exists a partial isometry $v\in M$ with $v^*v=p$, $vv^*=1-p$.
The four operators $p,v^*,v,1-p$ behave exactly as the matrix units $e_{11},e_{12},e_{21},e_{22}$. So we define $q=tp+\sqrt{t-t^2}(v+v^*)+(1-t)(1-p)$, which is a projection; it is easy to check that $\tau(v)=0$, and that $\tau(q)=1/2$. Let $\varphi$ be the (faithful) state $\varphi(x)=2\tau(2px+(1-p)x)/3$. Then $2p(p+q)+(1-p)(p+q)=2p+2pq+p+q-p-pq=2p+pq+q,$ and $\varphi(p+q)=\frac23\,\tau(2p+pq+q)>\frac23\,\tau(2p+q)=\frac23\,\left(1+\frac12\right)=1.$ It remains to see that $p\wedge q=0$. Represent $M$ faithfully on a Hilbert space $H$. Suppose that $\xi\in pH\cap qH$. Then $\xi=p\xi=q\xi$. In particular, $(1-p)\xi=0$. Then $v^*\xi=0$, and so $\xi=q\xi=tp\xi+\sqrt{t-t^2}v\xi.$ The last piece of information we need is that $v=(1-p)v$. Then $pv\xi=0$, and $\xi=p\xi=pq\xi=tp\xi=t\xi.$ Since $t\ne1$, this forces $\xi=0$. - It's easy to see from von Neumann's bicommutant theorem that $p \wedge q$ always lives in the von Neumann algebra generated by $p$ and $q$. So $p \wedge q$ does not change if you consider a larger von Neumann algebra. –  Jesse Peterson Apr 3 '11 at 3:59
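The $M_2(\mathbb{C})$ counterexample above is easy to sanity-check numerically. Here is a Python sketch with plain 2×2 matrix arithmetic (the value $t = 0.4$ is an arbitrary choice of mine; any $t \in (0,1)$ works): $q$ is indeed a projection, $\varphi$ is a state ($\varphi(1)=1$), and $\varphi(p+q) = 1 + t/3 > 1$ even though the ranges of $p$ and $q$ are two distinct lines, so $p \wedge q = 0$.

```python
import math

t = 0.4                      # any t in (0, 1); 0.4 is an arbitrary choice
s = math.sqrt(t - t * t)

p = [[1.0, 0.0], [0.0, 0.0]]   # projection onto the x-axis
q = [[t, s], [s, 1.0 - t]]     # rank-one projection onto a different line

def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def phi(A):
    # the state phi([[a, b], [c, d]]) = (2a + d) / 3 from the answer
    return (2 * A[0][0] + A[1][1]) / 3

# q is idempotent (and symmetric), hence an orthogonal projection
q_squared = matmul(q, q)

# phi is a state but not a trace, and phi(p + q) exceeds 1
p_plus_q = [[p[i][j] + q[i][j] for j in range(2)] for i in range(2)]
```

Since the only vector in both ranges is $0$, no nonzero projection $r$ with $r \le p$ and $r \le q$ exists, confirming that the trace hypothesis cannot be dropped.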
https://chemistry.stackexchange.com/questions/81921/what-is-the-grey-film-black-sludge-from-mercury-how-to-remove-it
# What is the grey film/black sludge from mercury? How to remove it?

How to remove it? Any other simple method apart from distillation? I have a small container of mercury that I've collected over the past decade. I opened the container after a few years and noticed that there is a grey coating on the surface of the mercury. There is also a black powdery deposit at the bottom. It sticks to the bottom and sides of the plastic container. • There is also a chance that the mercury you stored was contaminated with other metals. In this case they might react with air, forming oxide powder. If you are lucky, they might dissolve in acid. However, the resulting solution must be assumed to contain mercury, i.e. considered toxic waste. Sep 3 '17 at 7:40 • Out of idle curiosity, what did you decide to do with the mercury? No need to answer if you prefer not! – Ed V Apr 29 '20 at 18:47 • @EdV No problem, sure I can tell you. I eventually figured out what the issue was. About 2 months after I had posted this, I got another small batch of mercury, in the same type of glass bottle container. The bottle used to store both the contaminated one and the new one was the same, a laboratory type. The new one is exactly how I had stored it. Well, maybe a little less shiny due to mild oxidation. But the other one is still sticky and it coats the glass walls and the base of the container...contd Apr 29 '20 at 19:31 • The reason was my friend, who was playing around with it while I was showing him; he had transferred it to a watch glass to take a better look. But unknown to me he had used an aluminium rod to touch its surface, and then later we transferred it back to the bottle. He didn't know about the amalgamation process. I guess I might distil it off someday to get pure Hg. Apr 29 '20 at 19:32 • Ah, aluminum reacts vigorously with mercury, once the oxide coating on the aluminum is breached.
In WW II, the Allies actually considered doing a covert operation to sabotage German aircraft by putting mercury amalgam on the fuselages. So that explains the stuff that you are seeing. Thanks for the information! – Ed V Apr 29 '20 at 19:40 ## 1 Answer First, I would advise against using mercury in a home lab due to the ease with which scattered droplets fall into cracks and evaporate. That said, if distillation is out, then try bulb-pipetting from the center of the mass of mercury to avoid some of the surface contamination. Avoid dropping the mercury when lifting the pipette, since $\ce{Hg}$ is so dense it tends to run out as soon as it is lifted. Ideally, work in a hood with a large tray with raised sides underneath the apparatus. If purity is important, electrolysis will yield a higher-grade product... at the risk of working with even more toxic compounds! Better, get some gallium if you want to play with a liquid metal and don't need $\ce{Hg}$ to make a Grignard reagent or another esoteric substance.
https://proxies-free.com/tag/substitution/
## Solve for the expression by substitution

If $$x=a(b-c),\quad y=b(c-a),\quad z=c(a-b),$$ find $$\left(\frac{x}{a}\right)^3+\left(\frac{y}{b}\right)^3+\left(\frac{z}{c}\right)^3$$ Options: $$\text{1: }\frac{xyz}{abc}\qquad\text{2: }\frac{xyz}{3abc}\qquad\text{3: }\frac{abc}{xyz}\qquad\text{4: }\frac{3abc}{xyz}$$ What I tried: Try 1. $$\left(\frac{a(b-c)}{a}\right)^3+\left(\frac{b(c-a)}{b}\right)^3+\left(\frac{c(a-b)}{c}\right)^3$$ But that leaves us with $$(b-c)^3+(c-a)^3+(a-b)^3$$ That doesn't match any of the options given above. Try 2. Since $$\frac{x}{(b-c)}=a,\quad \frac{y}{(c-a)}=b,\quad \frac{z}{(a-b)}=c,$$ $$\left(\frac{a(b-c)(b-c)}{x}\right)^3+\left(\frac{b(c-a)(c-a)}{y}\right)^3+\left(\frac{c(a-b)(a-b)}{z}\right)^3$$ But that leaves us with: $$\left(\frac{a\cdot (b^2-2bc+c^2)}{x}\right)^3+\left(\frac{b\cdot (c^2-2ac+a^2)}{y}\right)^3+\left(\frac{c\cdot (a^2-2ab+b^2)}{z}\right)^3$$ So is there any way I can get the correct answer? If I have missed something obvious, please be gentle.

## complex analysis – Computing a substitution with matrices

Consider the system $$\begin{pmatrix}\frac{d}{dx} & -q\\ -q & -\frac{d}{dx}\end{pmatrix}\Psi=-i\xi\Psi.$$ Now, it is said that substituting $$\Psi=e^{i\xi x}\Phi$$ with $$\Phi=\begin{pmatrix}\Phi_1\\\Phi_2\end{pmatrix}$$ results in $$\begin{pmatrix}\frac{\partial}{\partial x} & -q\\ -q & -\frac{\partial}{\partial x}\end{pmatrix}\Phi=-2i\xi\begin{pmatrix}\Phi_1\\0\end{pmatrix}.$$ I do not see it. If I substitute, I get, on the left hand side, $$\begin{pmatrix}\frac{d}{dx} & -q\\ -q & -\frac{d}{dx}\end{pmatrix}\begin{pmatrix}e^{i\xi x}\Phi_1\\e^{i\xi x}\Phi_2\end{pmatrix}=\begin{pmatrix}\frac{d}{dx}e^{i\xi x}\Phi_1-qe^{i\xi x}\Phi_2\\ -qe^{i\xi x}\Phi_1-\frac{d}{dx}e^{i\xi x}\Phi_2\end{pmatrix}$$

## equation solving – Write expressions in simpler form with substitution for part of the expression

I have faced this situation many times where I get values like this as solutions:

```
x = (a1 + a2 + a3 - a4 - a5*a6 + a7) * (a1 - a2 - a3 - a4 - a5*a6 + a7) * m
y = (a1 + a2 + a3 - a4 - a5*a6 + a7) * (a1 - a2 - a3 - a4 - a5*a6 - a7) * n
```

Now, the values for `x` and `y` have `(a1 + a2 + a3 - a4 - a5*a6 + a7)` and `a1 - a2 - a3 - a4 - a5*a6` as common parts.
Is there any built-in way to detect these common parts in Mathematica and write solutions like:

```
x = p * (q + a7) * m
y = p * (q - a7) * n

where p = (a1 + a2 + a3 - a4 - a5*a6 + a7)
and   q = a1 - a2 - a3 - a4 - a5*a6
```

If there is no automatic way of doing this, can I tell Mathematica to substitute these values manually?

## induction – Substitution Method to Solve Recurrences

One approach to solving recurrences is the so-called substitution method. While practicing I encountered some recurrences where non-integer arguments can occur, e.g. T(n) = 2*T(n/2) + n, if n is not a power of 2. My understanding of the substitution method is that it only works if all arguments are integers. Is this correct? Is it reasonable to apply the substitution method to lower and upper bounds instead, e.g. T(n) ≤ T1(n), where T1(n) = 2*T1(ceil(n/2)) + n, in order to draw conclusions regarding T?

## undecidability – Is this string substitution problem decidable?

Take as input a finite set of string pairs. Each pair represents a substitution. Replace exactly one instance of the left with the right. A substitution can only be performed on x if the left is a substring of x. For example $$01\rightarrow 10$$ means replace one 01 with 10, and can only be applied if 01 is in the string. The algorithm should decide if for the given set there exists a string such that applying a non-zero number of the substitutions yields the initial string. I am wondering if this is a decidable task. It seems like it should be possible to establish an upper bound on the length of the string and number of substitutions, but I haven't been able to. And from the other end I tried to build a test based on invariants, since invariants can be used to show a lot of sets can't loop. For example $$\{011\rightarrow 101,\ 101\rightarrow 110\}$$ can never produce a loop. We can show this since each rule decreases the average position of 1s in the string when used.
Thus it can never produce a string with the same average position as the start, and thus it can never produce the start. But there are some cases where I can't think of a clever invariant. For example $$\{1001\rightarrow 0110\}$$ clearly can't form a loop, but I can't think of a property which it always increases. Is this problem decidable?

## Closure of context-sensitive languages under inverse language substitution

Fix a universal Turing machine $$T$$. Let $$L_1$$ be the language of all sequences of configurations $$c_1 \# c_2 \# \cdots \# c_\ell$$ which describe a valid computation of $$T$$, starting with an arbitrary initial configuration $$c_1 \neq \epsilon$$, and ending with a halting state; we do not allow any further configurations to appear. This language is clearly context-sensitive (it can be accepted in linear space even deterministically). Let $$L_2$$ be the language of all sequences of configurations $$c_1 \# c_1 \# c_2 \# \cdots \# c_\ell$$ such that $$c_1 \# c_2 \# \cdots \# c_\ell \in L_1$$ (notice the initial configuration is repeated twice). This language is also context-sensitive. Define a function $$f$$ from $$\Sigma$$ to CSLs by $$f(\sigma) = \{ \sigma \}$$ for $$\sigma \neq \#$$ and $$f(\#) = \{ \#w : w \in L_1 \}$$. Notice that $$f^{-1}(L_2)$$ consists of the strings $$c_1\#$$, where $$c_1$$ is an initial configuration of $$T$$ on which $$T$$ eventually halts. Thus $$f^{-1}(L_2)$$ is not computable.

## Context-Sensitive Grammars are closed under language substitution, and string homomorphism, but are they closed under inverse language substitution?

Wikipedia note on closure properties. We define language substitution for a Context-Sensitive Language (CSL) $$S$$ over an alphabet $$\Sigma$$ as a map from $$\Sigma$$ into CSLs, for example: $$f(abc) = L_1(a) L_2(b) L_3(c)$$ such that (I guess) the union of all $$L(s)$$ for $$s \in f(S)$$ is defined to be $$L(f(S))$$, and $$L(f(S))$$ is known to be a CSL itself. That is my interpretation of language substitution for CSL's.
Well, languages are also closed under inverses of string homomorphisms, a homomorphism $$f$$ being a special case of language substitution in which each $$a \in \Sigma$$ gets mapped to a singleton language $$L_1(a) = \{f(a)\}$$. So my question is simple, yet probably hard or interesting to prove. That is, are CSL's closed under inverse of language substitution? Let $$f$$ be a language substitution taking $$S$$ to $$f(S)$$. Then $$f^{-1}(S) := \bigcup f^{-1}(s)$$, I'm assuming. Is that a CSL?

## How to integrate without using trigonometric substitution: $\int \frac{x}{\sqrt{1-x^4}}\,dx$?

How can I integrate the following without using trigonometric substitution? $$\int \frac{x}{\sqrt{1-x^4}}\,dx$$ I tried substituting $$t = 1 - x^4,$$ but that didn't work. The solution according to my book is $$\frac{1}{2}\arcsin\left(x^2\right)+C$$

## natural language processing – Compute the edit distance between two words in which substitution is not allowed

This is a straightforward modification of the classical dynamic programming algorithm. Let the first string be $$s = s_1 s_2 \dots s_n$$ and the second string be $$t = t_1 t_2 \dots t_m$$. In the dynamic programming algorithm you define $$D[i][j]$$ as the edit distance between $$s_1 s_2 \dots s_i$$ and $$t_1 t_2 \dots t_j$$. The base cases are $$D[0][j]=j$$ and $$D[i][0]=i$$ (for any $$i = 0, \dots, n$$ and $$j=0, \dots, m$$), and the recursive formula (for $$i>0$$ and $$j>0$$) is: $$D[i][j] = \min \begin{cases} 1 + D[i-1][j] & \text{deletion of } s_i;\\ 1 + D[i][j-1] & \text{insertion of } t_j;\\ 1 + D[i-1][j-1] & \text{substitution of } s_i \text{ with } t_j. \end{cases}$$ You can simply modify the above formula by not contemplating the last case, i.e.: $$D[i][j] = \min \begin{cases} 1 + D[i-1][j] & \text{deletion of } s_i;\\ 1 + D[i][j-1] & \text{insertion of } t_j. \end{cases}$$ By definition of $$D[\cdot][\cdot]$$, the edit distance between $$s$$ and $$t$$ is $$D[n][m]$$.
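The insertion/deletion-only recurrence can be sketched in Python as follows (my transcription; note that the zero-cost case for equal characters, which the recurrence above leaves implicit, must be included for the table to compute a distance rather than $n+m$):

```python
def edit_distance_no_sub(s, t):
    """Edit distance when only insertions and deletions are allowed."""
    n, m = len(s), len(t)
    # D[i][j] = distance between the prefixes s[:i] and t[:j]
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        D[i][0] = i                      # delete all of s[:i]
    for j in range(m + 1):
        D[0][j] = j                      # insert all of t[:j]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if s[i - 1] == t[j - 1]:
                # matching characters cost nothing (no substitution case)
                D[i][j] = D[i - 1][j - 1]
            else:
                D[i][j] = min(1 + D[i - 1][j],   # delete s[i-1]
                              1 + D[i][j - 1])   # insert t[j-1]
    return D[n][m]
```

Without substitutions the distance equals $n + m - 2\,\mathrm{LCS}(s, t)$, where $\mathrm{LCS}$ is the length of the longest common subsequence, which gives an independent way to check the table.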
## calculus and analysis – Algorithmically imposing a substitution in a difficult integral

Consider the following integral: $$I = \frac{1}{\pi c^2} \int_{r=0}^{c} 2 \pi r\, e^{-\frac{\sqrt{a^2 - r^2} - \sqrt{b^2 - r^2}}{\lambda}}\, dr$$ under the conditions that $$a>b>c>0$$ and $$\lambda > 0$$ are all in $$\mathbb{R}$$. This problem is too difficult for Mathematica (v. 11.3) to solve directly:

```
Assuming[a > b > c > 0 && \[Lambda] > 0,
 1/(\[Pi] c^2) Integrate[
   2 \[Pi] r Exp[-(Sqrt[a^2 - r^2] - Sqrt[b^2 - r^2])/\[Lambda]],
   {r, 0, c}]]
```

However, if one makes the substitution $$k = \sqrt{a^2 - r^2} - \sqrt{b^2 - r^2},$$ then one gets the following integral: $$\frac{2}{c^2} \int_{k = a - b}^{\sqrt{a^2 - c^2} - \sqrt{b^2 - c^2}} \left( \frac{(a^2 - b^2)^2}{k^3} - k \right) e^{-k/\lambda}\, dk$$ This integral can be broken up and solved analytically, where Mathematica employs the exponential integral $$E_q(x) = \int_1^\infty \frac{e^{-x t}}{t^q}\, dt,$$ which Mathematica implements as `ExpIntegralE[q, x]`. I accept that finding this $$k$$ substitution requires "intelligence" that Mathematica does not yet have. But assume the user has this insight and wants to give it as a hint or condition to Mathematica. Hence the core of my question:

Question: In the integral for $$I$$, defined above, how would the user impose the $$k$$ substitution as a "hint" and have Mathematica perform all the substitutions (including differentials and limits) and produce an analytic solution for $$I$$?
https://papers.nips.cc/paper/2011/hash/7e889fb76e0e07c11733550f2a6c7a5a-Abstract.html
Rémi Munos

Abstract: We consider a global optimization problem of a deterministic function f in a semi-metric space, given a finite budget of n evaluations. The function f is assumed to be locally smooth (around one of its global maxima) with respect to a semi-metric. We describe two algorithms based on optimistic exploration that use a hierarchical partitioning of the space at all scales. A first contribution is an algorithm, DOO, that requires the knowledge of this semi-metric. We report a finite-sample performance bound in terms of a measure of the quantity of near-optimal states. We then define a second algorithm, SOO, which does not require the knowledge of the semi-metric under which f is smooth, and whose performance is almost as good as DOO optimally fitted.
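To make the optimistic-partitioning idea concrete, here is a minimal one-dimensional sketch in the spirit of SOO (my own simplified reading of the abstract, not the authors' algorithm or code): keep a tree of subintervals, and in each round expand, at every depth, the leaf with the best midpoint value, provided that value beats every shallower leaf expanded in the same round.

```python
import math

def soo_maximize(f, lo, hi, n_evals=400):
    """Simplified SOO-style optimizer on [lo, hi]; illustrative sketch only."""
    mid = (lo + hi) / 2
    leaves = [(0, lo, hi, f(mid))]       # (depth, left, right, f(midpoint))
    best_x, best_v = mid, leaves[0][3]
    evals = 1
    while evals < n_evals:
        vmax = -math.inf                 # best value expanded this round
        max_depth = max(leaf[0] for leaf in leaves)
        for h in range(max_depth + 1):
            at_h = [leaf for leaf in leaves if leaf[0] == h]
            if not at_h:
                continue
            leaf = max(at_h, key=lambda l: l[3])
            if leaf[3] <= vmax:
                continue                 # dominated by a shallower cell
            vmax = leaf[3]
            leaves.remove(leaf)          # expand into three children
            d, a, b, _ = leaf
            w = (b - a) / 3
            for i in range(3):
                ca, cb = a + i * w, a + (i + 1) * w
                cm = (ca + cb) / 2
                v = f(cm)
                evals += 1
                if v > best_v:
                    best_x, best_v = cm, v
                leaves.append((d + 1, ca, cb, v))
            if evals >= n_evals:
                break
    return best_x, best_v
```

The key point matching the abstract: no smoothness constant or semi-metric appears anywhere in the code; only comparisons between evaluated midpoints drive the exploration.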
http://blog.mikael.johanssons.org/archive/2007/02/more-silly-random-text/
## More silly random text

• February 9th, 2007 • 5:34 pm

Syntaxfree writes over at his blog about a silly little toy he wrote, using the PFP library, to generate random text. Now, his text is unreadable. I mean, it's even unpronounceable. Why? Because he's looking at bigram distributions of letters. Great, I thought, I'll do him one better. Random text using bigram distributions on words must surely be a LOT better than random text using bigram distributions on letters. At least the words come out readable, and they may even come out in a decent order. So I sat down with his code, and hacked, tweaked, and monadized it to this

    module Test where

    import Probability
    import Data.Char
    import Control.Monad

    filename = "kjv.genesis"

    bigram t = zip ws (tail ws)
      where ws = (words . map toLower . filter (\x -> isAlpha x || isSpace x)) t

    distro = uniform . bigram

    goal  = readFile filename
    goalD = fmap distro goal

    one = do
      gD <- goalD
      (a,b) <- pick gD
      return [a,b]

    many n = fmap unwords $ fmap concat $ sequence $ take n $ repeat one

So, I have some corpus — I just pulled the King James Genesis to have some sort of body of text to work with, and saved it into the kjv.genesis file. Then, I can pop over to my beloved GHCi and execute

    Prelude> :l Test
    [1 of 4] Compiling ListUtils        ( ListUtils.hs, interpreted )
    [2 of 4] Compiling Show             ( Show.hs, interpreted )
    [3 of 4] Compiling Probability      ( Probability.hs, interpreted )
    [4 of 4] Compiling Test             ( Test.hs, interpreted )
    Ok, modules loaded: Show, Test, Probability, ListUtils.
    *Test> many 30
    Loading package haskell98 … linking … done.
"house and it to circumcised all cool of these things daughters from ye go dreamed for stead and in unto god of for wives done to i give god shall bowed himself and shem tower of small and said lord he was called his the days these are thither therefore cainan and and he rachel and hear my sons born" The first execution will take a while, since it has to, y’know, digest the actual text, calculate distributions, and set everything up. Subsequent executions also take quite some time, and I’m not at all certain why. An explanation would be nice, if someone has it. And for some sample poetry, I give you house and it to circumcised all cool of these things daughters from ye go dreamed for stead and in unto god of for wives done to i give god shall bowed himself and shem tower of small and said lord he was called his the days these are thither therefore cainan and and he rachel and hear my sons born when isaac dry land unto him isaac and have i children struggled lord hath had eaten goods which morning was ye shall by the all the me this naphtali and years old was her to see of the his brother out of names after they feed thy mothers of ephron god said he put him into after his the tent unto us i will to him i will the presence hands to his right name was possession of days journey down in that thou perizzites and and he they for god made that is of the twentys sake jacob so itself after thou and of anah years and in isaac pillar of and the builded a of canaan noah and ### 4 People had this to say... • Dan P • February 9th, 2007 • 22:22 When I was a lad we didn’t have this Internets thing so I had to type in the text myself when I wrote my first Markov chain generator. Instead of the Bible I used this book which I typed in in its entirety. The output made about the same amount of sense however. • Moira • February 28th, 2007 • 18:57 William S. Burroughs would have loved this. He did it by hand, with pages of text and ribbons of reel-to-reel audio tape and scissors. 
• Johan • March 9th, 2007 • 21:35 Or he would have felt it obsolete. Random text is no fun when there's no challenge to it. //JJ • kris • June 1st, 2007 • 2:53 how long did it take to generate the text in subsequent runs?
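For comparison, the same word-bigram trick is only a few lines in Python (my sketch, independent of the PFP machinery above; the inline corpus is a tiny stand-in for kjv.genesis):

```python
import random
import re

def bigrams(text):
    """Lowercase, keep only letter runs, and return adjacent word pairs."""
    words = re.findall(r"[a-z]+", text.lower())
    return list(zip(words, words[1:]))

def babble(text, n, seed=None):
    """Sample n bigrams uniformly and glue them together, like `many n` above."""
    rng = random.Random(seed)
    pairs = bigrams(text)
    picks = [rng.choice(pairs) for _ in range(n)]
    return " ".join(w for pair in picks for w in pair)

# Example (stand-in corpus):
# corpus = "in the beginning god created the heaven and the earth"
# print(babble(corpus, 5, seed=1))
```

Each sampled pair is locally coherent, but consecutive pairs are independent, which is exactly why the output reads like the disjointed "poetry" quoted above.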
https://mersenneforum.org/showthread.php?s=2cdcb882cf479af0850c929aa982f360&t=18029&goto=nextnewest
mersenneforum.org Eurocrackpot

2015-05-16, 09:46 #1 ATH Einyen Dec 2003 Denmark C67₁₆ Posts

Eurocrackpot

The lottery Eurojackpot, which runs in 16 European countries, just gave its maximum jackpot of €90 million ($101 million or £64 million) to a single person in the Czech Republic last night: http://news.newsdirectory1.com/euro-...zech-republic/ http://en.wikipedia.org/wiki/Eurojackpot It is crazy that so many people (including me sometimes) pay €2 ($2.24 ~ £1.43) for each lottery ticket with a chance of just 1:95.344.200 of winning the biggest pot. They post the number of winners of each prize category each week, and from the chance of 1:37.33 of the smallest prize you know the approximate number of tickets bought each week. This time it took 12 weeks and a total of ~214 million tickets to beat the 1:95.344.200 odds. Even though it is those insane amounts of money that draw people to this, €90 million is way too much for 1 person to win. It would be much better if 90 people won €1 million each. Last fiddled with by ATH on 2015-05-16 at 09:49

2015-05-18, 00:03 #2 Uncwilly 6809 > 6502 Aug 2003 10,007 Posts

Quote: Originally Posted by ATH It is crazy that so many people (including me sometimes) pay €2 ($2.24 ~ £1.43) for each lottery ticket with a chance of just 1:95.344.200 of winning the biggest pot.

When our local jackpot gets large, some of our co-workers gang up and purchase a number of tickets. They then abuse the company photostat to make duplicates, so that all contributors can see the numbers and be disappointed at the same time. Every so often I repeat my offer to them. They pay me the same amount as they contributed to the lottery. Then, if they win the big prize (and only it), I will double their winnings (I will pay them the same as the lottery does.)
They look at me like I am crazy (and mumble similar thoughts under their breath) and refuse to pay me.

2015-05-18, 01:16 #3 ATH Einyen Dec 2003 Denmark 5²·127 Posts

I think of lotteries in different "quirky" ways: If the many-worlds interpretation of quantum mechanics is correct, then there is a universe, if not many, where I won those €90 million or where I won other lottery millions. So maybe I exist in one of those where I eventually will win big. I do not believe in any god, but sometimes I wonder if there is a "fate" that has determined what will happen. I mean, according to physics there is no moving "now" moment dividing past and present; there is only one big fixed 4-dimensional space-time (read Brian Greene's books or watch the TV shows made from them). So somehow the future already exists somewhere ahead of us in the space-time (so free will is an illusion?) and maybe I'm "fated" to win the lottery, so I better buy a ticket (which is stupid, since if I'm fated to win, then fate should make sure I get a ticket... but maybe that's exactly what fate is doing by making me think these thoughts). If all there is is math and statistics and no god, fate or many worlds, then I should never ever buy those tickets. I will never hit 1:95 million. I have been playing the local state lottery almost every week for over 20 years without winning anything big (odds 1:8.3 million). I should definitely be playing games with better odds (but then also smaller winnings). But on the other hand *some* people do win millions every week around the world (unless all lotteries are scams and no one actually ever receives any money). What it comes down to is hope. If I do not play then I'm sure I will not win, but IF I play there is always hope. Hope that I live in the right universe or that I'm fated to win, and hope I think is a good thing. (Just don't spend *too* much money gambling.) Last fiddled with by ATH on 2015-05-18 at 01:20

2015-05-18, 13:52 #4 chappy "Jeff" Feb 2012 St.
Louis, Missouri, USA. 13·89 Posts

There are three kinds of people in the world: Those who are good at math and those who aren't.

2015-05-18, 17:25 #5 davar55 May 2004, New York City. 5×7×11² Posts

Quote: Originally Posted by chappy
There are three kinds of people in the world: Those who are good at math and those who aren't.

There are two kinds of people in the world. Those who think ninety mil is a lot of money, and those who don't.

2015-05-19, 02:23 #6 LaurV (Romulan Interpreter) Jun 2011, Thailand. 23060₈ Posts

Quote: Originally Posted by davar55
There are two kinds of people in the world. Those who think ninety mil is a lot of money, and those who don't.

Well, actually, there are still three. The third group is the PCB layouters (or at least Altium/Protel users). All my colleagues in the office think that 90 mil is a freaking thick PCB track... What do you want to put through it? 300 amperes?

Last fiddled with by LaurV on 2015-05-19 at 02:25

2015-05-19, 18:30 #7 jyb Aug 2005, Seattle, WA. 1782₁₀ Posts

Quote: Originally Posted by Uncwilly
When our local jackpot gets large, some of our co-workers gang up and purchase a number of tickets. They then abuse the company photostat to make duplicates, so that all contributors can see the numbers and be disappointed at the same time. Every so often I repeat my offer to them: they pay me the same amount as they contributed to the lottery. Then, if they win the big prize (and only it), I will double their winnings (I will pay them the same as the lottery does.) They look at me like I am crazy (mumble under their breath similar thoughts) and refuse to pay me.

Perhaps they refuse because they are making a judgment as to the likelihood that you would actually hold up your end of the bargain if they won.

2015-05-19, 20:17 #8 wblipp "William" May 2003, New Haven. 100101000000₂ Posts

Quote: Originally Posted by Uncwilly
They pay me the same amount as they contributed to the lottery.
Then, if they win the big prize (and only it), I will double their winnings (I will pay them the same as the lottery does.)

I'm reminded of the history of insurance. In particular, the young industry was fraught with fraud and scandal. These ranged from issuing companies that did not actually have the capital to pay claims, running instead like fragile Ponzi schemes...

2015-05-19, 23:50 #9 Uncwilly (6809 > 6502) Aug 2003, 101×103 Posts. 10,007 Posts

Quote: Originally Posted by jyb
Perhaps they refuse because they are making a judgment as to the likelihood that you would actually hold up your end of the bargain if they won.

If anyone asked, I told them that I would take out an insurance policy to make the payment.
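The 1:95,344,200 odds quoted in the thread follow directly from the Eurojackpot format at the time (5 main numbers from 50 plus 2 "Euro" numbers from 10). A quick sketch in Python, treating the ~214 million tickets as independent random picks (they aren't quite, since players choose their own numbers):

```python
from math import comb

# Eurojackpot (2015 format): match 5 of 50 main numbers AND 2 of 10 Euro numbers.
main = comb(50, 5)        # 2,118,760 ways to pick the main numbers
euro = comb(10, 2)        # 45 ways to pick the Euro numbers
jackpot_odds = main * euro
print(jackpot_odds)       # 95344200, i.e. the 1:95,344,200 quoted above

# Probability that at least one of ~214 million tickets hits the jackpot:
p_hit = 1 - (1 - 1 / jackpot_odds) ** 214_000_000
print(round(p_hit, 3))    # ~0.894, so a winner within those 12 weeks was quite likely
```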
https://www.deepdyve.com/lp/ou_press/erratum-zdW7DqZ6Cn
# Erratum

In the version of "De Facto Seniority, Credit Risk, and Corporate Bond Prices" by Jack Bao and Kewei Hou (https://doi.org/10.1093/rfs/hhx082) that printed in issue 30(11), the figures were printed in grayscale. The figures were meant to be printed in color, in order to align with the captions. Please find the color figures and their related captions:

Figure 1. Yield spreads and hedge ratios for the Merton and extended Merton models. The cyan surfaces represent the case in which a bond is late in its firm's maturity structure and the red surfaces with point markers represent the case in which a bond is early in its firm's maturity structure (B and D).

Figure A.1. Yield spreads and hedge ratios for the Geske (1977) model. Yield spreads are reported in basis points and hedge ratios in percentage. The cyan surfaces represent the case in which a bond is late in its issuer's maturity structure and the red surfaces with point markers represent the case in which a bond is early in its issuer's maturity structure.

Figure A.2. The dashed line is the amount that equityholders need to pay in to continue the firm for different levels of $$\alpha$$ in the extended Geske model. The solid curve is the value of the call option that equityholders would hold if they choose to continue the firm.

Figure A.3. Yield spreads for the extended Geske model. The panels plot differences in yield spreads between bonds due late in a firm's maturity structure and bonds due early in a firm's maturity structure. $$\alpha$$ represents the proportion of maturing debt that is paid by liquidating firm assets.

Figure A.4. Hedge ratios for the extended Geske model. The panels plot differences in hedge ratios between bonds due late in a firm's maturity structure and bonds due early in a firm's maturity structure. $$\alpha$$ represents the proportion of maturing debt that is paid by liquidating firm assets.

Figure A.5. Yield spreads and hedge ratios for the Leland and Toft (1996) model. Yield spreads are reported in basis points and hedge ratios in percentage. The cyan surfaces with circular markers represent cases where a bond is due late in its issuer's maturity structure and the white surfaces with point markers represent cases where a bond is due early in its issuer's maturity structure.

The color figures that appear in the online publication are correct. The publisher regrets this error.

Published by Oxford University Press on behalf of The Society for Financial Studies 2018. This work is written by US Government employees and is in the public domain in the US. This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/about_us/legal/notices).

The Review of Financial Studies, Oxford University Press, Advance Article, May 10, 2018, 5 pages. ISSN 0893-9454, eISSN 1465-7368, DOI 10.1093/rfs/hhy035.
http://www.emathematics.net/posrelrectaplano.php
# Line-Plane Intersection

In analytic geometry, the intersection of a line and a plane can be the empty set, a point or a line: No Intersection, Point Intersection, or Line Intersection.

How to find the relationship between a line and a plane. If the line is

$\begin{cases}A_1x+B_1y+C_1z+D_1=0\\A_2x+B_2y+C_2z+D_2=0\end{cases}$

and the plane is $\pi\equiv A_3x+B_3y+C_3z+D_3=0$, form a system with the equations and calculate the ranks:

$\begin{cases}A_1x+B_1y+C_1z+D_1=0\\A_2x+B_2y+C_2z+D_2=0\\A_3x+B_3y+C_3z+D_3=0\end{cases}$

r = rank of the coefficient matrix
r' = rank of the augmented matrix

The relationship between the line and the plane can be described as follows:

Case 1. Point Intersection: r=3 and r'=3
Case 2. No Intersection: r=2 and r'=3
Case 3. Line Intersection: r=2 and r'=2

State the relationship between the line $r\equiv\frac{x+1}{2}=\frac{y}{1}=\frac{z}{-1}$ and the plane $\pi\equiv x-2y+3z+1=0$.

Solution: Form the system of equations and calculate the ranks.
$\begin{cases}x-2y=-1\\y+z=0\\x-2y+3z=-1\end{cases}$

$M_1=\begin{pmatrix}1&-2&0\\0&1&1\\1&-2&3\end{pmatrix}$, and $\begin{vmatrix}1&-2&0\\0&1&1\\1&-2&3\end{vmatrix}=3\neq 0$, so r=3.

$M_2=\begin{pmatrix}1&-2&0&-1\\0&1&1&0\\1&-2&3&-1\end{pmatrix}$ contains the same nonzero 3×3 minor, so r'=3.

Point Intersection.

State the relationship between the line $r\equiv\frac{x-1}{5}=\frac{y}{1}=\frac{z+2}{1}$ and the plane $\pi\equiv -x+3y+2z+5=0$.

Solution: Form the system of equations and calculate the ranks.

$\begin{cases}x-5y=1\\y-z=2\\-x+3y+2z=-5\end{cases}$

$M_1=\begin{pmatrix}1&-5&0\\0&1&-1\\-1&3&2\end{pmatrix}$, and $\begin{vmatrix}1&-5&0\\0&1&-1\\-1&3&2\end{vmatrix}=0$, so r=2.

$M_2=\begin{pmatrix}1&-5&0&1\\0&1&-1&2\\-1&3&2&-5\end{pmatrix}$. Here the third row equals $-R_1-2R_2$, so every 3×3 minor of the augmented matrix also vanishes, and r'=2. (A quick check: the point $(1,0,-2)$ of the line satisfies the plane equation, and the direction vector $(5,1,1)$ is orthogonal to the normal $(-1,3,2)$.)

r=2 and r'=2: Line Intersection. The line is contained in the plane.

State the relationship between the plane 3x-3y-9z=6 and the line

$\begin{cases}2x-y+3z=1\\x-y-3z=2\end{cases}$

1. Point Intersection
2. No Intersection
3. Line Intersection
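The rank test can also be carried out mechanically. A sketch using NumPy (the function name and the encoding of each equation as [A, B, C, D] for Ax+By+Cz+D=0 are my own):

```python
import numpy as np

def classify(line_eqs, plane):
    """Classify a line (two planes, each [A, B, C, D] for Ax+By+Cz+D=0)
    against a plane via r (coefficient rank) and r' (augmented rank)."""
    M = np.array(line_eqs + [plane], dtype=float)
    r = np.linalg.matrix_rank(M[:, :3])   # coefficient matrix
    r_aug = np.linalg.matrix_rank(M)      # augmented matrix (sign of D is irrelevant to rank)
    if r == 3:
        return "Point Intersection"
    return "Line Intersection" if r_aug == 2 else "No Intersection"

# First worked example: x-2y+1=0, y+z=0 against x-2y+3z+1=0
print(classify([[1, -2, 0, 1], [0, 1, 1, 0]], [1, -2, 3, 1]))     # Point Intersection

# Second worked example: x-5y-1=0, y-z-2=0 against -x+3y+2z+5=0
print(classify([[1, -5, 0, -1], [0, 1, -1, -2]], [-1, 3, 2, 5]))  # Line Intersection
```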
https://socratic.org/precalculus/polar-coordinates/limacon-curves
# Limacon Curves

## Key Questions

• Limaçon curves look like distorted circles. They come in various types depending on the values in their equations. In the image below, for example, the types of limaçon curves are: dimpled, cardioid, and looped (respectively).

• It's easiest to watch a video of this.

#### Explanation:

${r}^{2} = {a}^{2} \cos 2 \theta$. The graph is a fallen figure eight, looking like the symbol $\infty$...

#### Explanation:

For the double loop of ${r}^{2} = {a}^{2} \cos 2 \theta$, we need $\cos 2 \theta \ge 0$. So the range for $\theta \in \left[0 , 2 \pi\right]$ excludes $\left(\frac{\pi}{4} , \frac{3 \pi}{4}\right)$ and $\left(\frac{5 \pi}{4} , \frac{7 \pi}{4}\right)$; there, $\cos 2 \theta < 0$...
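The excluded $\theta$-intervals can be verified numerically. A small sketch (the function name is mine; only the sign of $\cos 2\theta$ matters, so no value of $a$ is needed):

```python
import math

def on_curve(theta):
    """r^2 = a^2 cos(2*theta) has a real solution r exactly when cos(2*theta) >= 0."""
    return math.cos(2 * theta) >= 0

# Midpoints of the excluded intervals (pi/4, 3pi/4) and (5pi/4, 7pi/4):
print(on_curve(math.pi / 2))      # False: cos(pi) = -1
print(on_curve(3 * math.pi / 2))  # False: cos(3*pi) = -1
# Points inside the allowed ranges, where the two loops live:
print(on_curve(0.0))              # True: cos(0) = 1
print(on_curve(math.pi))          # True: cos(2*pi) = 1
```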
https://brilliant.org/problems/waves-2/
Waves

A string of length $l$ is fixed at both ends. It is vibrating in its third overtone with maximum amplitude $a$. The amplitude at a distance $\frac{l}{5}$ from an end is equal to $\frac{\sqrt{x} \cdot a}{y}$. Find $x + y$.
http://eprints.maths.ox.ac.uk/48/
Laplace transforms, non-analytic growth bounds and $C_0$-semigroups

Srivastava, S. (2002) Laplace transforms, non-analytic growth bounds and $C_0$-semigroups. PhD thesis, University of Oxford.

Abstract

In this thesis, we study a non-analytic growth bound associated with an exponentially bounded measurable function $f$, which measures the extent to which $f$ can be approximated by holomorphic functions. This growth bound is related to the location of the domain of holomorphy of the Laplace transform of $f$ far from the real axis. We study the properties of this bound as well as two associated abscissas, namely the non-analytic abscissa of convergence and the non-analytic abscissa of absolute convergence. These new bounds may be considered as non-analytic analogues of the exponential growth bound and of the abscissas of convergence and absolute convergence of the Laplace transform of $f$. Analogues of several well-known relations involving the growth bound and abscissas of convergence associated with $f$, and the abscissas of holomorphy of the Laplace transform of $f$, are established. We examine the behaviour of the non-analytic growth bound under regularisation of $f$ by convolution and obtain, in particular, estimates for the non-analytic growth bound of the classical fractional integrals of $f$. The definitions extend to the operator-valued case also. For a $C_0$-semigroup of operators, the non-analytic growth bound is closely related to the critical growth bound of the semigroup. We obtain a characterisation of the non-analytic growth bound of a semigroup in terms of Fourier multiplier properties of the resolvent of the generator. Yet another characterisation is obtained in terms of the existence of unique mild solutions of inhomogeneous Cauchy problems for which a non-resonance condition holds. We apply our theory of non-analytic growth bounds to prove some results in which these bounds do not appear explicitly; for example, we show that all the growth bounds of a $C_0$-semigroup coincide with the spectral bound, provided the pseudo-spectrum is of a particular shape. Lastly, we shift our focus from non-analytic bounds to sun-reflexivity of a Banach space with respect to $C_0$-semigroups. In particular, we study the relations between the existence of certain approximations of the identity on the Banach space and the existence of $C_0$-semigroups on it which make it sun-reflexive.
http://www.ams.org/mathscinet-getitem?mr=1116160
MathSciNet bibliographic data: MR1116160, 33D80 (17B37). Manocha, H. L. On models of irreducible $q$-representations of ${\rm sl}(2,{\bf C})$. Appl. Anal. 37 (1990), no. 1-4, 19–47.
https://faq.i3wm.org/question/1701/how-does-emacs-work-in-i3/index.html%3Fanswer=1702.html
The i3 FAQ has migrated to https://github.com/i3/i3/discussions. All content here is read-only.

How does emacs work in i3?

I was considering trying out i3. After seeing the docs and videos, I am wondering how emacs works in i3. Since $mod is bound to the Alt key, will it interfere with applications that also bind to the Alt key?

2 answers

You can choose another key; in fact many people prefer 'Super' to 'Alt'. When you launch i3 for the first time, you will be asked what key to use as $mod.

It just works awesome. I used the Win key (Mod4) as my mod key, and everything is fine ;-)
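For the curious, the same choice can be made later by editing the config directly. A minimal sketch of the relevant lines in ~/.config/i3/config (assuming an otherwise default config):

```
# Use Mod4 (the Super/Windows key) instead of Mod1 (Alt) as i3's modifier,
# so Alt-based Emacs chords (M-x, M-w, ...) pass through to Emacs untouched:
set $mod Mod4
bindsym $mod+Return exec i3-sensible-terminal
```

Reload i3 (by default $mod+Shift+c) after changing the config.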
https://planetmath.org/IsotopeOfAGroupoid
# isotope of a groupoid Let $G,H$ be groupoids (http://planetmath.org/Groupoid). An isotopy $\phi$ from $G$ to $H$ is an ordered triple: $\phi=(f,g,h)$, of bijections from $G$ to $H$, such that $f(a)g(b)=h(ab)\qquad\mbox{for all }a,b\in G.$ $H$ is called an isotope of $G$ (or $H$ is isotopic to $G$) if there is an isotopy $\phi:G\to H$. Some easy examples of isotopies: 1. 1. If $f:G\to H$ is an isomorphism, $(f,f,f):G\to H$ is an isotopy. By abuse of language, we write $f=(f,f,f)$. In particular, $(1_{G},1_{G},1_{G}):G\to G$ is an isotopy. 2. 2. If $\phi=(f,g,h):G\to H$ is an isotopy, then so is $\phi^{-1}:=(f^{-1},g^{-1},h^{-1}):H\to G,$ for if $f^{-1}(a)=c$ and $g^{-1}(b)=d$, then $ab=f(c)g(d)=h(cd)$, so that $f^{-1}(a)g^{-1}(b)=cd=h^{-1}(ab)$ 3. 3. If $\phi=(f,g,h):G\to H$ and $\gamma=(r,s,t):H\to K$ are isotopies, then so is $\gamma\circ\phi:=(r\circ f,s\circ g,t\circ h):G\to K,$ for $(r\circ f)(a)(s\circ g)(b)=r(f(a))s(g(b))=t(f(a)g(b))=t(h(ab))=(t\circ h)(ab)$. From the examples above, it is easy to see that “groupoids being isotopic” on the class of groupoids is an equivalence relation, and that an isomorphism class is contained in an isotopic class. In fact, the containment is strict. For an example of non-isomorphic isotopic groupoids, see the reference below. However, if $G$ is a groupoid with unity and $G$ is isotopic to a semigroup $S$, then it is isomorphic to $S$. Other conditions making isotopic groupoids isomorphic can be found in the reference below. An isotopy of the form $(f,g,1_{H}):G\to H$ is called a principal isotopy, where $1_{H}$ is the identity function on $H$. $H$ is called a principal isotope of $G$. If $H$ is isotopic to $G$, then $H$ is isomorphic to a principal isotope $K$ of $G$. ###### Proof. Suppose $(f,g,h):G\to H$ is an isotopy. To construct $K$, start with elements of $G$, which will form the underlying set of $K$. 
The binary operation on $K$ is defined by $a\cdot b:=(f^{-1}\circ h)(a)(g^{-1}\circ h)(b).$ Since $f,g$ are bijective, $\cdot$ is well-defined for all pairs of elements of $G$; hence $K$ is a groupoid. Furthermore, $(f^{-1}\circ h,g^{-1}\circ h,1_{K}):G\to K$ is an isotopy by definition, so that $K$ is a principal isotope of $G$. Finally, $h(a\cdot b)=h(f^{-1}(h(a))g^{-1}(h(b)))=f(f^{-1}(h(a)))g(g^{-1}(h(b)))=h(a)h(b)$, showing that $h:K\to H$ is a bijective homomorphism, and hence an isomorphism. ∎

Remark. In the literature, the definition of an isotope is sometimes limited to quasigroups. However, this is not necessary, as the following proposition suggests:

###### Proposition 1.

Any isotope of a quasigroup is a quasigroup.

###### Proof.

Suppose $(f,g,h):G\to H$ is an isotopy, and $G$ a quasigroup. Pick $x,z\in H$. Let $a,c\in G$ be such that $f(a)=x$ and $h(c)=z$. Let $b\in G$ be such that $ab=c$. Set $y=g(b)\in H$. Then $xy=f(a)g(b)=h(ab)=h(c)=z$. Similarly, there is $t\in H$ such that $tx=z$. Hence $H$ is a quasigroup. ∎

On the other hand, an isotope of a loop may not be a loop. Nevertheless, we sometimes refer to an isotope of a loop $L$ as a loop isotopic to $L$.

## References

• 1 R. H. Bruck: A Survey of Binary Systems. Springer-Verlag, New York (1966).

Title: isotope of a groupoid. Canonical name: IsotopeOfAGroupoid. Date of creation: 2013-03-22 18:35:54. Last modified on: 2013-03-22 18:35:54. Owner: CWoo (3771). Last modified by: CWoo (3771). Numerical id: 8. Author: CWoo (3771). Entry type: Definition. Classification: msc 20N02; msc 20N05. Synonyms: isotopism, homotopism. Defines: isotopy, isotope, homotopy, homotope, isotopic, homotopic, principal isotopy, principal isotope.
2020-08-14 17:44:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 71, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9583163261413574, "perplexity": 355.83413582600144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739347.81/warc/CC-MAIN-20200814160701-20200814190701-00393.warc.gz"}
https://math.stackexchange.com/questions/626204/inequality-for-expected-value-of-product
# Inequality for Expected Value of Product Let $(\Omega, \mathbb{P}, \mathcal{F})$ be a probability space, and let $\mathbb{E}$ denote the expected value operator. Consider the random variables $f: \Omega \rightarrow \{0,1,2\}$ and $g: \Omega \rightarrow [0,1]$, where $\Omega \subseteq \mathbb{R}^n$. Consider the expected value of the product, namely $$\mathbb{E}\left[ f(\cdot) g(\cdot) \right] := \int_{\Omega} f(\omega) g(\omega) \mathbb{P}(d \omega).$$ Now if $f$ and $g$ are independent random variables, then $\mathbb{E}[f(\cdot) g(\cdot)] = \mathbb{E}[f(\cdot)] \mathbb{E}[g(\cdot)]$. I am wondering about conditions under which $\mathbb{E}[f(\cdot) g(\cdot)] \geq \mathbb{E}[f(\cdot)] \mathbb{E}[g(\cdot)]$. Additional Assumption: $\mathbb{E}[g(\cdot)] = \frac{1}{2}\mathbb{E}[f(\cdot)]$. • What about covariance being positive? – Dilip Sarwate Jan 3 '14 at 19:05 • Of course, because $\mathbb{E}[ f g ] = \mathbb{E}[f] \mathbb{E}[g] + \text{Cov}(x,y)$. But this is just a definition, not an additional condition on $f$ and $g$. – user693 Jan 3 '14 at 19:09 • And what are $x$ and $y$ in your response to my comment? – Dilip Sarwate Jan 3 '14 at 19:39 • Sorry, I meant $f$ and $g$. – user693 Jan 3 '14 at 20:35 A sufficient condition is that, for every $\omega$ and $\omega'$ in $\Omega$, $$(f(\omega)-f(\omega'))\cdot(g(\omega)-g(\omega'))\geqslant0.$$ • Thanks for the answer. Do you think that assuming $\mathbb{E}[g] = \frac{1}{2}\mathbb{E}[f]$ may be useful? – user693 Jan 4 '14 at 21:02 • This is quite unrelated since one can always shift $f$ or $g$ by a constant without changing the direction of the inequality. – Did Jan 4 '14 at 21:35
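The sufficient condition in the answer (that $f$ and $g$ "move together", i.e. are comonotone) can be verified numerically on a small discrete space. The four-point distribution below is an illustration of mine, not from the question; it also happens to satisfy the additional assumption $\mathbb{E}[g]=\frac{1}{2}\mathbb{E}[f]$:

```python
import itertools

# A small discrete probability space: Omega = {0, 1, 2, 3} with equal weights.
omega = [0, 1, 2, 3]
p = [0.25] * 4

# f takes values in {0, 1, 2}, g in [0, 1]; both are non-decreasing in omega,
# so (f(w) - f(w'))*(g(w) - g(w')) >= 0 for all pairs -- the sufficient condition.
f = {0: 0, 1: 1, 2: 1, 3: 2}
g = {0: 0.1, 1: 0.3, 2: 0.6, 3: 1.0}

def E(h):
    """Expected value of a random variable given as a dict on omega."""
    return sum(h[w] * pw for w, pw in zip(omega, p))

# Check the comonotonicity condition for every pair of sample points.
comonotone = all((f[w] - f[v]) * (g[w] - g[v]) >= 0
                 for w, v in itertools.product(omega, omega))

E_fg = sum(f[w] * g[w] * pw for w, pw in zip(omega, p))
print(comonotone, E_fg, E(f) * E(g))  # True 0.725 0.5
```

Here $\mathbb{E}[f]=1$, $\mathbb{E}[g]=0.5$, and $\mathbb{E}[fg]=0.725\geq 0.5$, as the condition guarantees.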
2019-05-21 00:51:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9746246933937073, "perplexity": 261.2617953009512}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256184.17/warc/CC-MAIN-20190521002106-20190521024106-00402.warc.gz"}
https://www.physicsforums.com/threads/relativity-using-the-bondi-k-calculus-comments.910850/
# Featured Insights: Relativity using the Bondi k-Calculus - Comments

1. Apr 10, 2017

### robphy

2. Apr 10, 2017

### houlahound

Great insight.

3. Apr 11, 2017

### houlahound

To the author, you have exposed different approaches to this body of knowledge. What approach or ways would you use and sequence, say, for undergraduate instruction?

4. Apr 11, 2017

### robphy

Any approach I use must use the spacetime diagram because I think it is difficult to represent the relativity-of-simultaneity using boxcars as "moving frames of reference". Any approach I use must use radar methods to motivate measurements and the assignment of coordinates. I think radar methods are more straightforward than lattices of "clocks" and "rods". (For inertial motions in special relativity, they are equivalent. However, for more general motions in special and general relativity, they may differ....

In my opinion, the Bondi k-calculus method (with its emphasis on radar measurements) is the best starting point, especially for algebra-based physics. With the k-calculus methods, the standard textbook formulas are straightforward to derive and fall out naturally. A related but even less well known approach by Geroch (in his General Relativity from A to B) is also a good starting point. Geroch uses radar methods to emphasize the square-interval and give operational interpretations of the geometry of spacetime (e.g., what simultaneity means to an observer) in both Special Relativity and Galilean Relativity. My AJP article (which inspired the Insight https://www.physicsforums.com/insights/relativity-rotated-graph-paper/ ) was my attempt to combine Bondi's and Geroch's approaches. From here, I would go on to develop the geometry of Minkowski spacetime, while comparing and contrasting with Euclidean geometry, using the [unappreciated] geometry of Galilean spacetime (e.g., https://www.desmos.com/calculator/ti58l2sair ...
play with the E-slider) ...something I call "Spacetime Trigonometry", a large ongoing project with many aspects which generates lots of posters for me at AAPT meetings. (I should really write this up soon... but it would have to be broken into a series of AJP articles.) These are examples of Cayley-Klein geometries, which include the de Sitter spacetimes. This "unification" can help formalize the numerous analogies mentioned in the literature. In addition, I can develop vector and tensorial methods (algebraically, graphically, and geometrically) in order to make contact with traditional intermediate and advanced presentations of relativity.

5. Apr 11, 2017

### houlahound

6. Apr 12, 2017

### pervect

Staff Emeritus

Thanks for posting this. A lot of times I want to refer people to Bondi's approach, as I also feel it's one of the best elementary treatments for the person new to relativity. I can and do refer interested people to his book, but it's nice to have a more accessible source.

7. Apr 12, 2017

### robphy

Thanks. I was torn between making it as elementary as possible for a beginner (which would only be a tweak on Bondi or just the equations already provided by Wikipedia) or making clarifications and connections to geometry (the logical next step).

8. May 3, 2017

### skanskan

Why does he assume the velocity of light is the same for all inertial observers?

9. May 3, 2017

### Ibix

About forty years before Einstein, Maxwell published equations describing electromagnetism. One solution to the equations was a wave, which turned out to have the properties of light. One weird thing was that the speed of the wave always came out the same. Naturally everyone assumed that the equations weren't quite right and the hunt was on to find the problem. The next forty years were a bit confusing as no one could find anything wrong. Experiments that were expected to help (e.g. Michelson and Morley) didn't work as predicted, but did provide some ad hoc patches.
Einstein had the insight that if the (apparently daft) prediction that light always travels at the same speed for all inertial observers was correct, then he could explain all of the confusion. So he made the assumption.

10. May 3, 2017

### bahamagreen

Alice's movie is seen by Bob to be in slow motion, and Bob's movie is seen by Alice to be in slow motion. That is similar to SR, in which Alice's and Bob's clocks would appear to each other to run slow ... but all your diagrams are presenting the case of increasing separation of the inertial travelers. To the degree that the diagram tends to suggest that movie duration is a proxy for time dilation... it looks like it only works with cases of increasing separation, not cases of approach. Students would notice this... If these movies were youtube videos, there would be a time indicator rolling at the bottom of the screen, so for example, both Alice and Bob could see that Alice's movie indicates that it starts at 00:00:00 and increments to 00:60:00 at the end. Although Bob can't necessarily "view Alice's clock", he can see by the video time index that in comparison to his own clock her video is running slow... suggesting that her time is slower relative to his (and likewise his to hers when he sends video to her). When Alice and Bob approach each other, it looks like Bob is going to see Alice's movie running faster (shorter time), and Bob's movie will be seen by Alice to be running faster... so this is not similar to SR, which would maintain that each measures the other's clock running slow.

11. May 4, 2017

### Mister T

That's the difference between "see" and "observe". We see Doppler shifted light as it enters our eyes in the same way as we see the movie running slow as its images enter our eyes. But if you want to observe what is really happening you have to allow for the light travel time. That will lead you to time dilation.
Note that even for the case of increasing separation the time dilation factor is not the same as the Doppler factor. If the relative speed is $\beta$ then the time dilation factor is $(1-\beta^2)^{\frac{1}{2}}$ whereas the Doppler factor is $\big(\frac{1+\beta}{1-\beta}\big)^{\pm\frac{1}{2}}$. 12. May 4, 2017 ### robphy These viewings of movies are not proxies for time-dilation... they are descriptions of the Doppler effect for light. For observers receding from each other, each observes a "redshift" (or, in the case for sound, a lowering of frequency). For observers approaching each other, each observes a "blueshift" (or, in the case for sound, a raising of frequency). In some sense, the Doppler Effect needs the time-dilation factor in order to satisfy the principles of relativity. Indeed, in the derivation of receding sources and receding receivers, one gets expressions involving the Galilean-Doppler factor and the time-dilation factor: $\gamma(1+\beta)=\left(\frac{1}{\sqrt{(1-\beta)(1+\beta)}}\right)(1+\beta)=\sqrt{\frac{1+\beta}{1-\beta}}=k$ and $\frac{1}{\gamma}\left(\frac{1}{1-\beta}\right)=\left(\sqrt{(1-\beta)(1+\beta)}\right)\left(\frac{1}{1-\beta}\right)=\sqrt{\frac{1+\beta}{1-\beta}}=k$. It might be useful to point out a distinction between time-dilation and the Doppler effect for light. For two inertial observers Alice and Bob that met at event O, • time-dilation involves two spacelike-related events, say "event P on Alice's worldline" and "event Q on Bob's worldline that Alice says is simultaneous with P" (so, $\vec{PQ}$ is a purely-spatial displacement vector according to Alice... it is Minkowski-perpendicular to $\vec{OP}$). The time-dilation factor measured by Alice is $\gamma=\frac{OP}{OQ}$. • Doppler-effect involves two lightlike-related events, say "event P on Alice's worldline" and "event S on Bob's worldline which is in the lightlike-future of P" (so, $\vec{PS}$ is a future-lightlike displacement vector). 
The Doppler factor measured by Alice is $k=\frac{OS}{OP}$.

In the case of approaching, one has a diagram like this [based on reflecting the original diagram from the Insight]: where I have used a "factor" $\kappa$ (kappa). So, as you said, Bob would view Alice's T-hour broadcast "sped up", in only $\kappa T$ hours (where $\kappa<1$). By similar triangles, $\displaystyle\frac{\kappa T}{T}=\frac{kT}{k^2T}$, which implies that $\kappa=\frac{1}{k}$. Note that since $k=\sqrt{\frac{1+\beta}{1-\beta}}$, we have $\kappa=\frac{1}{k}=\sqrt{\frac{1-\beta}{1+\beta}}$, which is the original expression for "$k$" with "velocity $-\beta$". Thus, there's no need to use $\kappa$... "receding and approaching" are handled by $k$.

Last edited: May 4, 2017

13. May 19, 2017

The theory of special relativity can be derived from a simple fact based on the right triangle, as follows: Imagine a light signal is sent from a point to an observer moving with a velocity "v". The signal will be received after a time delay, and the signal path with respect to the initial position of the observer forms the hypotenuse of a right triangle, where one leg corresponds to the velocity of light "c" and the other leg to the velocity of the observer "v". If you multiply these velocities by the same time difference "dt", the sum of the squares of the two legs would be greater than the square of the hypotenuse, violating the Pythagorean theorem; it therefore becomes necessary to label the time intervals with different indices, as: (cdt*)^2 = (vdt*)^2 + (cdt)^2, which after some simple algebra becomes dt* = dt / [ 1 - (v/c)^2 ]^1/2
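The k-calculus identities quoted in the thread — $\gamma(1+\beta)=k$, $\frac{1}{\gamma}\cdot\frac{1}{1-\beta}=k$, and $\kappa=1/k$ for approach — are easy to check numerically; the sample speed $\beta=0.6$ below is arbitrary:

```python
import math

def k_factor(beta):
    """Relativistic Doppler factor k = sqrt((1+beta)/(1-beta)) for recession speed beta = v/c."""
    return math.sqrt((1 + beta) / (1 - beta))

beta = 0.6
gamma = 1 / math.sqrt(1 - beta**2)   # time-dilation factor
k = k_factor(beta)                   # beta = 0.6 gives k = 2 exactly

# Both expressions from the derivation reduce to the same k.
lhs1 = gamma * (1 + beta)
lhs2 = (1 / gamma) * (1 / (1 - beta))

# Approaching observers: replace beta by -beta, giving kappa = 1/k.
kappa = k_factor(-beta)
print(k, lhs1, lhs2, kappa)
```

With $\beta=0.6$: $\gamma=1.25$, $k=2$, and $\kappa=0.5=1/k$, confirming that receding and approaching are both handled by the single factor $k(\beta)$.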
2017-05-27 06:31:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5800126194953918, "perplexity": 1217.5926166288862}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608870.18/warc/CC-MAIN-20170527055922-20170527075922-00199.warc.gz"}
https://www.physicsforums.com/threads/finding-current-of-a-circuit.245381/
# Homework Help: Finding Current of a Circuit 1. Jul 16, 2008 ### kdrobey 1. The problem statement, all variables and given/known data A circuit consists of a 215 resistor and a 0.200 H inductor. These two elements are connected in series across a generator that has a frequency of 120 Hz and a voltage of 235 V. (a) What is the current in the circuit? (b) Determine the phase angle between the current and the voltage of the generator. 2. Relevant equations Xl=2(pi)fL Irms=Vrms/Xl 3. The attempt at a solution I used the equation to get 150.796 for Xl, then i plugged that into the equation Irms=Vrms/Xl to find current, but that gave me 1.558, which was not the right answer 2. Jul 16, 2008 ### Staff: Mentor Your second equation in #2 above is incomplete. The resistor and inductor are in series, so you must use their total impedance in the V = I * Z equation. Does that fix it for you? 3. Jul 16, 2008 ### kdrobey I'm still not getting it. I used V=IZ. I have V, which is 235 volts right? still, i did not have I or Z. So i used Z=(square root of)R^2 +(Xl-Xc)^2, and I got .89 for Z. Then plugging back into V=IZ, (235v)=I(.89)=262.49A for current? 4. Jul 17, 2008 ### alphysicist Hi kdrobey, For these series RLC problems the impedance is $$Z=\sqrt{R^2 + (X_L - X_C)^2}$$ and so the impedance cannot be smaller than the resistance, so something is wrong there. What were the actual numbers you used to calculate Z?
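Following alphysicist's impedance formula with the thread's numbers (R = 215 Ω, L = 0.200 H, f = 120 Hz, V = 235 V, and X_C = 0 since there is no capacitor), a quick numerical check gives both requested answers:

```python
import math

R = 215.0   # resistance, ohms
L = 0.200   # inductance, henries
f = 120.0   # generator frequency, hertz
V = 235.0   # generator rms voltage, volts

X_L = 2 * math.pi * f * L        # inductive reactance, ~150.8 ohms
Z = math.sqrt(R**2 + X_L**2)     # series RL impedance (X_C = 0), ~262.6 ohms
I = V / Z                        # rms current, ~0.895 A
phi = math.degrees(math.atan2(X_L, R))  # phase angle (voltage leads current), ~35 degrees

print(X_L, Z, I, phi)
```

Note that Z ≈ 262.6 Ω is larger than R = 215 Ω, as it must be, and the current comes out just under 0.9 A rather than the 262 A obtained from the mistaken Z ≈ 0.89 in post 3.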
2018-12-19 04:14:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7277621030807495, "perplexity": 1076.8073136117516}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376830479.82/warc/CC-MAIN-20181219025453-20181219051453-00503.warc.gz"}
https://math.stackexchange.com/questions/4369123/about-the-commutatvity-of-a-bounded-self-adjoint-operator-with-an-unbounded-symm
# About the commutativity of a bounded self-adjoint operator with an unbounded symmetric one? Let $$B\in B(H)$$ be self-adjoint and let $$A$$ be a densely defined symmetric (and closed if needed) operator such that $$A^2$$ is densely defined. If $$BA^2\subset A^2B$$ say, is there a result which gives $$BA\subset AB$$? Notice that I already have a counterexample when $$A^2$$ is not densely defined. Cheers, Hichem • I don't know what kind of result you are looking for, but without further conditions, this can already fail for bounded operators ($2\times 2$ matrices even). Jan 29 at 18:28 • you may add a positive $A$ to avoid trivialities. Jan 29 at 19:10
2022-10-03 14:34:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8584738373756409, "perplexity": 202.0193241333667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00009.warc.gz"}
http://perry.alexander.name/eecs662/project/2017/02/14/Project-1-Predicting-Failure.html
# Project 1 - Predicting Failure

Mini Project 1 - Predicting Failure

EECS 662 - Programming Languages

The objective of this miniproject is to develop your first type checker. You will start with the ABE language presented in class and develop techniques for predicting failure.

## Exercise 1

Write a parser and interpreter for the ABE language discussed in class and presented in our text, extended to include multiplication and division. Work with the parser that is defined in the example from class.

ABE ::= number | boolean
      | ABE + ABE | ABE - ABE | ABE * ABE | ABE / ABE
      | ABE && ABE | ABE <= ABE
      | isZero ABE
      | if ABE then ABE else ABE

1. Define a type for representing the abstract syntax of the extended ABE language using data.
2. Using Parsec, write a function parseABE :: String -> ABE that accepts the concrete syntax of ABE and generates an ABE data structure representing it.
3. Write a function, eval :: ABE -> (Either String ABE), that takes an ABE data structure, interprets it, and returns an ABE value or an error message. Your eval function should check for divide-by-zero errors at runtime.
4. Write a function, typeof :: ABE -> (Either String TABE), that returns either a String representing an error message or a TABE structure. Your typeof function should return an error message if it encounters a constant 0 in the denominator of a division operator.
5. Write a function, interp, that combines your parser, type checker and evaluator into a single operation that parses, type checks, and evaluates an ABE expression. Take advantage of the Either type to ensure eval is not called when typeof fails.

## Exercise 2

And now, something completely different. Remembering that programs are just data structures, write a new function called optimize :: ABE -> ABE that does two things:

1. If the expression x + 0 appears in an expression, replace it with x.
2. If the expression if true then x else y appears in an expression, replace it with x.
Similarly, if the condition is false, replace it with y. Integrate this new optimize into your ABE interpreter by calling it right before eval.

## Super Cool Optional Exercise 3

Reimplement the ABE interpreter using an Either monad. Both typeof and eval return Either constructs; you simply need to modify the eval and typeof functions. This is not particularly difficult, and you should be able to find plenty of examples both in the class text and online.

## Notes

Most if not all of the code for the ABE eval and typeof functions can be found in our text. Again, I would encourage you to try to write as much of them as possible without referring to the textbook examples. To give you an idea of the effort required for this mini-project, my code is about 150 lines long and took me roughly an hour to write and debug. I view this as a reasonably easy project at this point in the semester. Do not put it off, as many of you are still becoming friends with Haskell. Hopefully the previous project shook out any difficulty with Haskell tools.
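Exercise 2's optimize is specified in Haskell, but the bottom-up rewrite it asks for can be sketched in any language. Here is a hypothetical Python version using a tuple-encoded AST; the constructor names are mine, not the course's:

```python
# Expressions encoded as nested tuples, e.g. ("plus", ("num", 3), ("num", 0)).
def optimize(e):
    if not isinstance(e, tuple):
        return e
    # Rewrite children first, so nested redexes get simplified too.
    e = tuple(optimize(c) for c in e)
    # Rule 1:  x + 0  ==>  x
    if e[0] == "plus" and e[2] == ("num", 0):
        return e[1]
    # Rule 2:  if true then x else y  ==>  x   (and similarly for false)
    if e[0] == "if" and e[1] == ("bool", True):
        return e[2]
    if e[0] == "if" and e[1] == ("bool", False):
        return e[3]
    return e

# if true then (7 + 0) else 1  ==>  7
expr = ("if", ("bool", True), ("plus", ("num", 7), ("num", 0)), ("num", 1))
print(optimize(expr))  # ('num', 7)
```

The Haskell version has the same shape: pattern-match on each constructor, recurse on subterms, and apply the two rewrite rules where they fire.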
2017-09-24 12:02:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4138314723968506, "perplexity": 2392.348172716416}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690016.68/warc/CC-MAIN-20170924115333-20170924135333-00381.warc.gz"}
https://www.eduzip.com/ask/question/by-using-tracing-paper-compare-the-angles-in-the-pairs-given-in-f-521012
Mathematics

# By using tracing paper, compare the angles in the pairs given in the figure.

##### SOLUTION

$\angle BOA< \angle PQR$

Subjective, Medium. Published on 09th 09, 2020.

#### Related Questions

Q1 (Subjective, Medium). Find the value of $x$.

Q2 (True/False, Medium). State True or False: in the figure, $\angle PQR=\angle PRQ$; then $\angle PQS=\angle PRT$.
• A. False
• B. True

Q3 (Multiple Correct, Medium). In the figure, three lines p, q and r are concurrent at O. If $a=50^o$ and $b=90^o$, find $c, d, e$ and $f$.
• A. $c=40^o$
• B. $d=50^o$
• C. $e=90^o$
• D. $f=40^o$

Q4 (Subjective, Medium). Gloria is walking along the path joining $(-2,3)$ and $(-2,2)$, while Suresh is walking along the path joining $(0,5)$ and $(4,0)$. Represent this situation graphically.

Q5 (Subjective, Medium). The sum of two vertically opposite angles is $166^o$. Find each of the angles.

(All asked in: Mathematics - Lines and Angles.)
2022-01-25 14:55:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8384028077125549, "perplexity": 8943.769580647548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304835.96/warc/CC-MAIN-20220125130117-20220125160117-00267.warc.gz"}
http://www.maa.org/news/on-this-day?qt-most_read_most_recent=1
# On This Day

• ### 12-5-1610

Benedetto Castelli, a former student of Galileo, wrote him that if Copernicus was correct, Venus should sometimes appear "horned" and sometimes not.

• ### 12-5-1825

Abel wrote how delighted he was that Crelle was starting a new mathematics journal, for this meant he would now have a place to publish his research. The first volume contained seven papers by Abel.
2013-12-06 03:18:49
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8111319541931152, "perplexity": 11539.03448204159}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163049340/warc/CC-MAIN-20131204131729-00072-ip-10-33-133-15.ec2.internal.warc.gz"}
https://www.khanacademy.org/computing/computer-programming/programming-games-visualizations/programming-scenes/a/animated-scenes
# Animated scenes

We've seen how to make multiple simple scenes - but our scenes were what we call "static" - they weren't animated, nor did they have any response to user interaction. As we'll see, it requires a bit more finesse to handle fancier scenes. But hey, let's get fancy!

Let's talk about animation first. What if we wanted to show Winston in his rock star phase, drumming hard to the beat? We'd normally do that by defining the draw function to contain code that draws shapes that move position slightly each frame. Here's an example, where the position of the drumming hands is based on the current millis() value, the number of elapsed milliseconds:

What if we add that as scene 4 to our previous example? We'll move the code into a drawScene4() function, and modify our mouseClicked logic.

var drawScene4 = function() {
    currentScene = 4;
    background(194, 255, 222);
    var x = cos(millis()*1);
    var y = cos(millis()+98);
    ...
};

mouseClicked = function() {
    if (currentScene === 1) {
        drawScene2();
    } else if (currentScene === 2) {
        drawScene3();
    } else if (currentScene === 3) {
        drawScene4();
    } else if (currentScene === 4) {
        drawScene1();
    }
};

Try it out below - click through a few times: Notice something? It worked, but only kind of. We could see Winston with his drum set, but his drum sticks weren't moving. How sad! It's hard to make music when you're frozen in time. Perhaps you've already caught onto the issue: we're no longer calling the drumsticks-drawing code from within draw(), so it's only getting called once--not repeatedly--and thus only rendering the sticks at the moment in time at which it's first called. Perhaps you've also already guessed the solution: define a draw() function, and call drawScene4() when appropriate.
draw = function() {
    if (currentScene === 4) {
        drawScene4();
    }
};

Let's just think through that for a bit: whenever we define a draw() function in our code, it will then get called repeatedly (defaulting to 30 FPS), and whenever it's called while the current scene has already been set to 4, it'll call the function to draw scene 4. When it's any other value, it won't attempt to draw anything at all -- keeping whatever was already on the screen. We still need to do the initial scene drawing in mouseClicked; this logic just takes care of animating every frame after.

Some of you might be thinking: why don't we just have logic that calls every scene drawing function inside draw()? Well, you certainly could, and that'd mean that if you added animation to the other scenes, then they would just work immediately. But assuming you don't animate your other scenes, that means you're making the computer re-draw those scenes repeatedly for no reason. From a performance perspective, that's not good. If we know we can easily save the computer unnecessary work, we should. It will make our programs faster and users happier.

Alright, now that we've discussed all that, here's the story in its clickable, animated glory. You can almost hear the beats coming out of scene 4!
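The performance argument above — only redraw the scene that actually animates — is independent of ProcessingJS. A minimal Python sketch of the same guard (hypothetical names; a plain loop stands in for the 30 FPS callback):

```python
# Track which scene is active; only scene 4 needs per-frame redrawing.
frames_drawn = {4: 0}
current_scene = 1

def draw_scene4():
    frames_drawn[4] += 1  # stand-in for the drumming animation

def draw():
    # Called once per frame; a cheap no-op unless the animated scene is up.
    if current_scene == 4:
        draw_scene4()

for _ in range(30):       # simulate one second at 30 FPS on a static scene
    draw()
static_frames = frames_drawn[4]   # no wasted redraws while scene 1 is up

current_scene = 4
for _ in range(30):       # one second on the animated scene
    draw()
print(static_frames, frames_drawn[4])  # 0 30
```

The guard costs one comparison per frame, while the static scenes never get redrawn at all.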
http://biomechanical.asmedigitalcollection.asme.org/article.aspx?articleid=1475495
# Measurement of the Dynamic Shear Modulus of Mouse Brain Tissue In Vivo by Magnetic Resonance Elastography

Stefan M. Atay, Arash Sabet: Department of Mechanical and Aerospace Engineering, Washington University, 1 Brookings Drive, Box 1185, St. Louis, MO 63130

Christopher D. Kroenke: Advanced Imaging Research Center, Oregon Health and Science University, 3181 S. W. Sam Jackson Park Road, Portland, OR 97239-3098

Philip V. Bayly (corresponding author): Department of Mechanical and Aerospace Engineering, Department of Biomedical Engineering, Washington University, 1 Brookings Drive, Box 1185, St. Louis, MO 63130, pvb@me.wustl.edu

J Biomech Eng 130(2), 021013 (Mar 31, 2008) (11 pages), doi:10.1115/1.2899575. History: Received November 08, 2006; Revised June 20, 2007; Published March 31, 2008.

## Abstract

In this study, the magnetic resonance (MR) elastography technique was used to estimate the dynamic shear modulus of mouse brain tissue in vivo. The technique allows visualization and measurement of mechanical shear waves excited by lateral vibration of the skull. Quantitative measurements of displacement in three dimensions during vibration at 1200 Hz were obtained by applying oscillatory magnetic field gradients at the same frequency during an MR imaging sequence. Contrast in the resulting phase images of the mouse brain is proportional to displacement. To obtain estimates of shear modulus, measured displacement fields were fitted to the shear wave equation. Validation of the procedure was performed on gel characterized by independent rheometry tests and on data from finite element simulations. Brain tissue is, in reality, viscoelastic and nonlinear. The current estimates of dynamic shear modulus are strictly relevant only to small oscillations at a specific frequency, but these estimates may be obtained at high frequencies (and thus high deformation rates), noninvasively throughout the brain.
These data complement measurements of nonlinear viscoelastic properties obtained by others at slower rates, either ex vivo or invasively.

## Figures

Figure 1 The MR elastography pulse sequence. A standard spin-echo MR imaging sequence was modified by the addition of sinusoidal motion-sensitizing gradients that oscillate at the frequency of vibration. The basic spin-echo sequence consists of rf excitation in conjunction with gradients in the slice-select (GSS), readout (GRO), and phase-encode (GPE) directions. This figure depicts harmonic motion-sensitizing gradients in the PE direction. The dashed lines indicate that motion-sensitizing gradients could also be applied in the RO and SS directions.

Figure 12 Dynamic shear modulus estimates (mean ± std. dev.) in the cortical gray matter of the anterior, middle, and posterior mouse brain sections for six animals in vivo. Each bar shading represents a single mouse. The average estimates of the shear modulus of all six mice in the anterior, middle, and posterior sections were 14,800 ± 2030 N/m², 13,800 ± 1490 N/m², and 12,600 ± 1990 N/m², respectively.

Figure 13 Dynamic shear modulus estimates (mean ± std. dev.) in the subcortical gray matter in the anterior, middle, and posterior mouse brain sections for six animals in vivo. Each bar shading represents one mouse. The average estimates of the shear modulus of all six mice in the anterior, middle, and posterior sections were 18,700 ± 2080 N/m², 15,300 ± 1480 N/m², and 16,500 ± 3060 N/m², respectively.

Figure 14 Dynamic shear modulus estimates (mean ± std. dev.) in the anterior, middle, and posterior mouse brain sections for two animals postmortem. Each bar represents a single mouse. (a) Cortical gray matter. The average estimates of the shear modulus of both mice in the anterior, middle, and posterior sections were 14,600 ± 50 N/m², 14,100 ± 1290 N/m², and 13,900 ± 70 N/m², respectively. (b) Subcortical gray matter.
The average estimates of the shear modulus of both mice in the anterior, middle, and posterior sections were 15,400 ± 2180 N/m², 14,200 ± 800 N/m², and 15,400 ± 410 N/m², respectively. No estimate of shear modulus was significantly different from the corresponding value observed in the living tissue (Student's t-test).

Figure 2 Illustration of phase accumulation using MRE. The three rows of circles represent three individual "spin packets," and the portion filled in represents the phase of a spin at a particular time. The five columns represent a complete cycle of vibration as well as gradient modulation of period T = 2π/ω, where ω is the frequency of vibration measured in rad/s. The amount of phase that a spin accumulates at a given time is directly proportional to the magnetic field strength at that point. Thus, at t2, the upper and lower spins accrue more phase than the middle spin because they have been displaced by vibration into a higher magnetic field. At t4, the spins are displaced in the opposite direction; however, the gradient field has also switched direction and the upper and lower spins again accrue more phase than the middle spin. The net result is an image whose phase is proportional to displacement at a particular time during one cycle, as seen on the right. An image of the displacements at a different point in the cycle can be obtained by shifting the motion-sensitizing gradients temporally. Time series of periodic displacements (and animations of wave propagation) can be obtained by incrementally varying this temporal delay between the mechanical excitation and the imaging gradients.

Figure 3 (a) Top view of the wave-generating actuator. When a sinusoidal current i is sent through the coil in the longitudinal magnetic field Bo, an electromagnetic torque Tmag is developed, causing the actuator arm to vibrate back and forth. (b) A side view of the shaker apparatus showing the connection between the arm and a plastic machine screw nut glued to the skull.
The coronal imaging plane is perpendicular to both views.

Figure 4 MRE displacement images of a gel phantom (Gel 2) showing four time points in a complete cycle of wave motion at 400 Hz. Waves can be most clearly seen in the PE (lateral) direction, which was the direction of excitation. The maximum amplitude in the PE direction was 33 μm. Each frame is 18 × 17.5 mm². Directions are RO, inferior-superior; PE, lateral; and SS, anterior-posterior.

Figure 11 Dynamic shear modulus in anatomical sections. (a) Spin-echo "scout" images of anterior, middle, and posterior coronal sections of the mouse brain. (b) Representative images of dynamic shear modulus. Areas with residual error higher than 0.5 are masked out (dark blue). (c) Residual errors from the fit to the wave equation. Each frame is 11.25 × 7.5 mm².

Figure 5 Images of displacement from an FE simulation of shear wave propagation in a 3D viscoelastic solid. Parameters: shear modulus μ = 1600 N/m², loss factor η = 0.1, and excitation frequency 400 Hz. Image size is 25 × 25 × 6.25 mm³; displacements were interpolated onto an array of 64 × 64 × 16 "voxels."

Figure 6 Shear modulus estimates and residual error for data from 400 Hz excitation of gel phantom and FE simulation. Panel (a): Shear modulus estimate for Gel 2 (Fig. 4). The mean (± std. dev.) estimate was 1560 ± 70 N/m². Panel (b): Shear modulus estimate for the 400 Hz FE simulation (Fig. 5). The mean (± std. dev.) estimate is 1760 ± 90 N/m². Panel (c): Residual error of wave equation fit for the gel phantom. (d) Residual error of wave equation fit for the FE simulation.

Figure 7 (a) Waves in Gel 2 at 200 Hz; ∼2.5 wavelengths/domain. (b) Map of shear modulus estimates, illustrating edge artifacts. (c) Map of residual error of fit to the wave equation, including edge artifacts. Edge effects are attributable to truncation error in Helmholtz decomposition and Laplacian estimation. With edge voxels masked out as in Fig. 6 above, the mean (± std. dev.)
estimate was 1460 ± 20 N/m² with the Laplacian estimated in the frequency domain (shown); 1680 ± 140 N/m² with the Laplacian estimated by finite differences in space.

Figure 8 Estimates of dynamic shear modulus for (a) gel phantoms and (b) FE simulations as a function of frequency. (a) Filled-in markers represent the shear moduli determined by shear plate rheometry at 80 Hz for three gel materials. Open markers represent the shear modulus estimates determined using MRE at 80 Hz, 200 Hz, 400 Hz, and 800 Hz. The gels exhibit frequency-dependent viscoelastic behavior; the dynamic shear modulus determined using elastography increases with increasing frequency. (b) Elastography estimates of shear modulus from FE simulations. Parameters: μ0 = 1600 N/m² and loss factor η = 0.1. Elastography yielded similar results for all frequencies of the FE model. Estimates are within 10% of each other, and are consistent with approximate values obtained from an estimate of wavelength in the RO direction (see Eq. 11, Table 2).

Figure 9 MRE images of displacement in the PE (lateral) direction in anterior, middle, and posterior mouse brain sections at four points in time during a cycle of wave propagation at 1200 Hz. Excitation was in the PE direction. The maximum amplitude in the PE direction was approximately 10 μm. Each frame is 11.25 × 7.5 mm².

Figure 10 MRE images of displacement in a midcoronal mouse brain section showing four points in time during a cycle of wave propagation at 1200 Hz with motion in all three directions. The maximum amplitude in the PE direction was ∼10 μm. Each frame is 11.25 × 7.5 mm². Directions are RO, inferior-superior; PE, lateral; and SS, anterior-posterior.
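Two of the quantitative relationships behind the paper can be sketched numerically. First, the phase accumulated by a spin packet under a motion-sensitizing gradient oscillating at the vibration frequency is proportional to the displacement amplitude (the mechanism of Fig. 2). Second, for an elastic medium, shear modulus, frequency, and wavelength are linked by the standard shear-wave relation μ = ρ(fλ)²; the paper's Eq. 11 is not reproduced in this extract, so treating it as this standard form is an assumption, as are the density value and the arbitrary gradient/displacement amplitudes below. A JavaScript sketch:

```javascript
// Part 1: phase accrual. The encoded phase is proportional to ∫ G(t) u(t) dt over
// complete gradient cycles, with gradient G(t) = sin(wt) and tissue displacement
// u(t) = A sin(wt + theta). Physical prefactors (gyromagnetic ratio, gradient
// strength) are dropped; A and theta are made-up illustration values.
function phaseIntegral(A, theta, f, cycles, steps) {
    const w = 2 * Math.PI * f, T = cycles / f, dt = T / steps;
    let sum = 0;
    for (let k = 0; k < steps; k++) {
        const t = (k + 0.5) * dt;                       // midpoint rule
        sum += Math.sin(w * t) * A * Math.sin(w * t + theta) * dt;
    }
    return sum;                                          // analytically: A*(T/2)*cos(theta)
}
const f = 1200;                                          // vibration frequency used in vivo, Hz
const inPhase    = phaseIntegral(10e-6, 0, f, 4, 100000);
const quadrature = phaseIntegral(10e-6, Math.PI / 2, f, 4, 100000);
// inPhase ≈ A*T/2, quadrature ≈ 0: the accrued phase picks out the displacement
// component in phase with the gradient, i.e. image phase ∝ displacement.

// Part 2: wavelength implied by a reported modulus, mu = rho*(f*lambda)^2.
// mu ≈ 14,800 N/m² (cortical gray matter average); rho ≈ 1000 kg/m³ is assumed.
const mu = 14800, rho = 1000;
const c = Math.sqrt(mu / rho);                           // shear wave speed ≈ 3.85 m/s
const lambda = c / f;                                    // wavelength ≈ 3.2 mm
console.log(c.toFixed(2), (lambda * 1e3).toFixed(2));
```

A millimeter-scale wavelength at 1200 Hz is consistent with the several-wavelengths-per-frame wave patterns the displacement figures describe.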
http://ronaldconnelly.blogspot.com/2015/06/problem-of-week-161-june-30-2015.html
## Tuesday, June 30, 2015

### Problem of the Week #161 – June 30, 2015

Here is this week's POTW:

Let $n$ be an integer. Show that as $x \to \infty$ on the positive real axis,

$J_n(x) \sim \sqrt{\frac{2}{\pi x}} \cos\left(x - \frac{n\pi}{2} - \frac{\pi}{4}\right).$
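As a numerical sanity check (not part of the original post), the integral representation $J_n(x) = \frac{1}{\pi}\int_0^\pi \cos(n\theta - x\sin\theta)\,d\theta$ can be compared against the standard large-$x$ asymptotic $\sqrt{\frac{2}{\pi x}}\cos\left(x - \frac{n\pi}{2} - \frac{\pi}{4}\right)$; here is a JavaScript sketch using a midpoint-rule quadrature of our own:

```javascript
// J_n(x) via Bessel's integral, evaluated with the composite midpoint rule.
function besselJ(n, x, steps) {
    steps = steps || 20000;
    const h = Math.PI / steps;
    let sum = 0;
    for (let k = 0; k < steps; k++) {
        const t = (k + 0.5) * h;
        sum += Math.cos(n * t - x * Math.sin(t));
    }
    return (sum * h) / Math.PI;
}

// Large-x asymptotic form from the problem statement.
function besselAsym(n, x) {
    return Math.sqrt(2 / (Math.PI * x)) * Math.cos(x - n * Math.PI / 2 - Math.PI / 4);
}

// Already by x = 50 the two agree to a few parts in a thousand.
console.log(besselJ(0, 50), besselAsym(0, 50));
```

The error of the asymptotic form decays like $x^{-3/2}$, so the agreement tightens as $x$ grows, which is exactly the $x \to \infty$ statement of the problem.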
https://labs.tib.eu/arxiv/?author=Yu-Peng%20Yan
• ### PandaX-III: Searching for Neutrinoless Double Beta Decay with High Pressure $^{136}$Xe Gas Time Projection Chambers(1610.08883) Oct. 28, 2016 hep-ex, nucl-ex, physics.ins-det Searching for the Neutrinoless Double Beta Decay (NLDBD) is now regarded as the topmost promising technique to explore the nature of neutrinos after the discovery of neutrino masses in oscillation experiments. PandaX-III (Particle And Astrophysical Xenon Experiment III) will search for the NLDBD of $^{136}$Xe at the China Jin Ping underground Laboratory (CJPL). In the first phase of the experiment, a high pressure gas Time Projection Chamber (TPC) will contain 200 kg, 90% $^{136}$Xe enriched gas operated at 10 bar. Fine pitch micro-pattern gas detector (Microbulk Micromegas) will be used at both ends of the TPC for the charge readout with a cathode in the middle. Charge signals can be used to reconstruct tracks of NLDBD events and provide good energy and spatial resolution. The detector will be immersed in a large water tank to ensure $\sim$5 m of water shielding in all directions. The second phase, a ton-scale experiment, will consist of five TPCs in the same water tank, with improved energy resolution and better control over backgrounds. • ### Systematic study for particle transverse momentum asymmetry in minimum bias pp collisions at LHC energies(1210.4608) April 16, 2013 nucl-ex, nucl-th In PYTHIA6 (PYTHIA8) once the transverse momentum $p_T$ of a generated particle is randomly sampled, $p_x$ and $p_y$ are set on the circle with radius of $p_T$ randomly. This may largely suppress the development of the final hadronic state transverse momentum anisotropy from the initial state spatial asymmetry. We modify PYTHIA6.4 by randomly setting $p_x$ and $p_y$ on the circumference of an ellipse with the half major and minor axes being $p_T(1+\delta_p)$ and $p_T(1-\delta_p)$, respectively. 
The modified PYTHIA6.4 is then employed to systematically study the charged particle transverse momentum asymmetry in the minimum bias pp collisions at $\sqrt s$=0.9, 7, and 14 TeV. The ALICE data on the transverse sphericity as a function of charged multiplicity, $\langle S_T \rangle (N_{ch})$, are well reproduced with the modified PYTHIA6.4. It is found that the predicted charged particle $v_2$ upper limit is a measurable value, $\sim 0.12$, in the minimum bias pp collisions at $\sqrt s$=7 TeV. We suggest a systematic measurement for the particle transverse momentum sphericity, eccentricity (ellipticity), and elliptic flow parameter. • ### Comparative study for non-statistical fluctuation of net-proton, net-baryon, and net-charge multiplicities(1212.2283) Dec. 11, 2012 nucl-th We calculate the real and non-statistical higher moment excitation functions ($\sqrt{s_{NN}}$ = 11.5 to 200 GeV) for the net-proton, net-baryon, and the net-charge number event distributions in the relativistic Au+Au collisions with the parton and hadron cascade model PACIAE. It turned out that because of the statistical fluctuation dominance it is very hard to see a signature of the CP singularity in the real higher moment excitation functions. It is found that the properties of higher moment excitation functions are significantly dependent on the window size, and hence the CP signatures may show only in a definite window for a given conserved observable's non-statistical higher moments. But for a given window size, the CP singularity may show only in the non-statistical higher moment excitation functions of a definite conserved observable. • ### Heavy quark transport at RHIC and LHC(1212.0696) Dec. 4, 2012 hep-ph, nucl-th We calculate the heavy quark evolution in heavy ion collisions and show results for the elliptic flow $v_2$ as well as the nuclear modification factor $R_{AA}$ at RHIC and LHC energies.
For the calculation we implement a Langevin approach for the transport of heavy quarks in the UrQMD (hydrodynamics + Boltzmann) hybrid model. As drag and diffusion coefficients we use a resonance approach for elastic heavy-quark scattering and assume a decoupling temperature of the charm quarks from the hot medium of $130\,\text{MeV}$. At RHIC energies we use a coalescence approach at the decoupling temperature for the hadronization of the heavy quarks to D-mesons and B-mesons and a subsequent decay to heavy flavor electrons using PYTHIA. At LHC we use an additional fragmentation mechanism to account for the higher transverse momenta reached at higher collision energies. • ### Higher moment singularities explored by the net proton non-statistical fluctuations(1205.5634) May 25, 2012 nucl-th We use the non-statistical fluctuation instead of the full one to explore the higher moment singularities of net proton event distributions in the relativistic Au+Au collisions at $\sqrt{s_{NN}}$ from 11.5 to 200 GeV calculated by the parton and hadron cascade model PACIAE. The PACIAE results of mean ($M$), variance ($\sigma^2$), skewness ($S$), and kurtosis ($\kappa$) are consistent with the corresponding STAR data. Non-statistical moments are calculated as the difference between the moments derived from real events and the ones from mixed events, which are constructed by combining particles randomly selected from different real events. Evidence of a singularity at $\sqrt{s_{NN}}\sim$ 60 GeV is first seen in the energy-dependent non-statistical $S$ and $S\sigma$. • ### Covariant kaon dynamics and kaon flow in heavy ion collisions(nucl-th/0211088) May 10, 2004 nucl-th The influence of the chiral mean field on the $K^+$ transverse flow in heavy ion collisions at SIS energy is investigated within covariant kaon dynamics.
For the kaon mesons inside the nuclear medium a quasi-particle picture including scalar and vector fields is adopted and compared to the standard treatment with a static potential. It is confirmed that a Lorentz force from spatial component of the vector field provides an important contribution to the in-medium kaon dynamics and strongly counterbalances the influence of the vector potential on the $K^+$ in-plane flow. The FOPI data can be reasonably described using in-medium kaon potentials based on effective chiral models. The information on the in-medium $K^+$ potential extracted from kaon flow is consistent with the knowledge from other sources. • ### Energy dependence of string fragmentation function and $\phi$ meson production(nucl-th/0205078) May 30, 2002 nucl-th The $\phi$ meson productions in $Au+Au$ and/or $Pb+Pb$ collisions at AGS, SPS, RHIC, and LHC energies have been studied systematically with a hadron and string cascade model LUCIAE. After considering the energy dependence of the model parameter $\alpha$ in string fragmentation function and adjusting it to the experimental data of charged multiplicity to a certain extent, the model predictions for $\phi$ meson yield, rapidity, and/or transverse mass distributions are compatible with the experimental data at AGS, SPS and RHIC energies. A calculation for $Pb+Pb$ collisions at LHC energy is given as well. The obtained fractional variable in string fragmentation function shows a saturation in energy dependence. It is discussed that the saturation of fractional variable in string fragmentation function might be a qualitative representation of the energy dependence of nuclear transparency.
https://hal-cea.archives-ouvertes.fr/cea-02569266
## A synthesis of worldwide sediment source tracing research including fallout radiocesium (Cs-137)

O. Evrard, Anthony Foucher, J. Patrick Laceby

#### Abstract

Quantifying the main sources delivering harmful sediment loads to river systems is required to improve our knowledge of soil erosion processes. Among these potential sources, quantifying the contributions of surface (e.g. cultivated topsoil) and subsurface (e.g. channel bank, gully, landslide) material to sediment transiting river systems is of particular interest. Radiocesium ($^{137}$Cs), which was emitted during the atmospheric bomb tests that took place mainly in the 1960s and during nuclear accidents, provides an effective tracer to distinguish between topsoil material exposed to the fallout and subsoil sheltered from this fallout. A global synthesis of research articles (n=123) that used radiocesium to fingerprint sediment sources indicated that the largest number of publications ($\sim$55% of the total) were found in the United Kingdom, Australia and the United States. On the contrary, very few studies ($\sim$9% of the total) were published for catchments located in Africa or South America. Given the low proportion of fallout recorded in regions located between 0-20°N and 0-20°S, the potential of this technique for quantifying sediment source contributions may be limited in this part of the world. A similar conclusion may be drawn for applying this method in agricultural areas exposed to severe soil erosion during the last several decades, such as the Chinese Loess Plateau and South Africa. Overall, 94% of studies incorporating $^{137}$Cs as a potential tracer included this property in mixing models.
In the future, given the continuous decay of the initial radiocesium fallout that peaked in the 1960s, access to ultra-low background gamma-ray spectrometry facilities will be increasingly necessary to measure this important sediment tracing property. In addition, more research should be devoted to developing surrogate tracers providing discrimination between surface and subsurface material. Based on this extensive study review, researchers are also recommended to systematically include basic catchment information, details on the soil/sediment sampling design and access to raw data to facilitate the dissemination of this information among the communities of scientists and catchment managers.

### Cite

O. Evrard, Pierre-Alexis Chaboche, Rafael Ramon, Anthony Foucher, J. Patrick Laceby. A synthesis of worldwide sediment source tracing research including fallout radiocesium (Cs-137). EGU General Assembly 2020, May 2020, Vienna, Austria. HAL Id: cea-02569266, version 1 (11-05-2020). DOI: 10.5194/egusphere-egu2020-8312.
https://www.gradesaver.com/textbooks/math/algebra/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition/chapter-r-elementary-algebra-review-r-5-polynomials-and-factoring-r-5-exercise-set-page-970/41
# Chapter R - Elementary Algebra Review - R.5 Polynomials and Factoring - R.5 Exercise Set: 41

$a=\left\{ -2,6 \right\}$

#### Work Step by Step

Expressing the given equation in the form $ax^2+bx+c=0,$ the given equation is equivalent to \begin{array}{l}\require{cancel} (a+1)(a-5)=7 \\\\ a(a)+a(-5)+1(a)+1(-5)=7 \\\\ a^2-5a+a-5=7 \\\\ a^2-5a+a-5-7=0 \\\\ a^2-4a-12=0 .\end{array} Using the factoring of trinomials in the form $x^2+bx+c,$ the equation $a^2-4a-12=0$ has $c=-12$ and $b=-4.$ The two numbers with a product of $c$ and a sum of $b$ are $\left\{ -6,2 \right\}.$ Using these two numbers, the equation above is equivalent to $(a-6)(a+2)=0.$ Equating each factor to zero (Zero Product Property), and then isolating the variable, the solutions are $a=\left\{ -2,6 \right\}.$
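The result is easy to verify mechanically. This short JavaScript check (ours, not part of the solution above) confirms that both roots satisfy the original equation and that the claimed factorization matches the expanded trinomial:

```javascript
// Original equation: (a+1)(a-5) = 7
function lhs(a) { return (a + 1) * (a - 5); }
// Expanded form moved to one side: a^2 - 4a - 12
function trinomial(a) { return a * a - 4 * a - 12; }
// Claimed factorization: (a-6)(a+2)
function factored(a) { return (a - 6) * (a + 2); }

var roots = [-2, 6];
roots.forEach(function(a) {
    console.log(a, lhs(a), trinomial(a));   // → -2 7 0, then 6 7 0
});
```

Both roots give 7 on the left-hand side of the original equation and 0 in the rearranged trinomial, and `factored(a)` agrees with `trinomial(a)` for every integer, confirming the factorization.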
https://kishoreathrasseri.wordpress.com/category/reports/
## Nothing Short of a Miracle!

That's what happened today. The occasion was a cultural night arranged by the Staff Club of NITC (no one knew one even existed, till today!). Some of the faculty got together and wrote a drama to be staged tonight. I learnt about it from Deepak sir's blog a few days ago, and that in itself seemed like a miracle. But the actual performance was nothing short of unbelievable, to say the least! It was fantastic. About the drama itself, I keep that for another day. I'm just too dazed by that performance to analyze it critically. Besides, Deepak sir has promised to make the script and video available. I always knew that the faculty members were fantastic people outside the classroom, but hats off to them for this wonderful performance!

## DSP Lab – Week 1

### Constructing the Complex Plane

Suppose we have a sampled signal defined by the sequence $h(n)$, $n=0,1,2,...,N-1$. Its Z-transform is given by $H(z) = \sum_{n=0}^{N-1} h(n)z^{-n}$. It maps the original sequence into a new domain: the complex plane $z=e^{sT}$, where $s=\sigma+j\omega$ is the parameter in the Laplace domain and $T$ is the sampling period. The $j\omega$ axis in the $s$-plane maps onto the unit circle with centre at the origin in the $z$-plane. So the value of $H(z)$ at different points on the unit circle actually gives the contribution of the frequency component given by $\angle z$ in the original signal. This, in effect, gives the Discrete Fourier Transform of the sequence.
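The claim that sampling $H(z)$ at $N$ equally spaced points on the unit circle yields the $N$-point DFT can also be checked numerically. Here is a small JavaScript sketch of our own (the post's example below uses MATLAB): it evaluates $H(z)$ by Horner's rule in powers of $z^{-1}$ and compares the magnitudes against a direct DFT sum.

```javascript
var h = [1, 2, 3, 4];
var N = 64;

// Direct N-point DFT magnitude: |sum_n h(n) e^{-j 2 pi k n / N}|
function dftMag(h, N, k) {
    var re = 0, im = 0;
    for (var n = 0; n < h.length; n++) {
        var ang = -2 * Math.PI * k * n / N;
        re += h[n] * Math.cos(ang);
        im += h[n] * Math.sin(ang);
    }
    return Math.hypot(re, im);
}

// |H(z)| at z = e^{j 2 pi k / N}, via Horner's rule in w = z^{-1}.
function HzMag(h, N, k) {
    var wRe = Math.cos(-2 * Math.PI * k / N), wIm = Math.sin(-2 * Math.PI * k / N);
    var accRe = h[h.length - 1], accIm = 0;
    for (var n = h.length - 2; n >= 0; n--) {
        var re = accRe * wRe - accIm * wIm + h[n];   // acc = acc*w + h(n)
        accIm = accRe * wIm + accIm * wRe;
        accRe = re;
    }
    return Math.hypot(accRe, accIm);
}

// The two agree (up to rounding) at every one of the N circle points.
var maxDiff = 0;
for (var k = 0; k < N; k++) {
    maxDiff = Math.max(maxDiff, Math.abs(dftMag(h, N, k) - HzMag(h, N, k)));
}
console.log(maxDiff < 1e-9);   // → true
```

The agreement is exact by construction, since $z^{-n} = e^{-j2\pi kn/N}$ at those points; the two functions only differ in evaluation order.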
Consider the following example:

```matlab
%original sequence
h = [1,2,3,4];
%number of chosen points on the unit circle
N = 64;
%define the chosen points
z = complex(cos(2*pi/N*(0:N-1)), sin(2*pi/N*(0:N-1)));
%evaluate H(z) at each point
for i = 1:N
    H(i) = 1 + 2*z(i)^-1 + 3*z(i)^-2 + 4*z(i)^-3;
end
%plot the unit circle
plot(z)
%plot the value of H(z) along the unit circle
figure
plot(abs(H))
%plot the N-point DFT of h(n)
figure
plot(abs(fft(h,64)))
```

This example computes the value of $H(z)$ at 64 uniformly spaced points on the unit circle and compares it with the 64-point DFT. We can see that both (fig. b & c) are identical.

(Figures: the unit circle; the value of H(z) along it; abs(fft(h)).)

## Freedom Walk at NITC

It was almost midnight when the Freedom Walkers arrived here on Wednesday. They were thoroughly worn out from the long long walk from Thamarassery. They had dinner from our mini canteen, and then I led them to the rooms in PG-2 hostel which Sandeep sir from the Electrical department had booked. Next morning, a few S3 guys and I went to meet them. We had a small gathering in their room and discussed what all we could do to spread Free Software here at NITC. Since all of us except one were from Electronics, Jemshid suggested that we could get started on some Embedded GNU/Linux work. We can think of conducting workshops to get people interested in it. Prasad talked about the Freedom Toaster they had made, and suggested that we could try to make one with a vending machine, as a project. They said we would have their support if someone is ready to take it up. It's too bad we couldn't organize a more elaborate meeting with the Freedom Walkers, because of the exams. But we've got some pointers to think about, when we sit down to make a concrete plan regarding the FOSS Cell activities.

## GNU/Linux Install Fest at NITC

As the first activity of the upcoming FOSS Cell NITC, we organized a GNU/Linux install fest on the occasion of Software Freedom Day. Around twenty people turned up during the day.
The only undesirable part was that a couple of laptops, after installing Ubuntu, couldn't boot Windows. Got to sort out their issues soon. We've set up a technical support mailing list for people to post their problems. Considering that it was the first ever event by our FOSS Cell, it didn't go too badly.

## Software Freedom Day at Kozhikode

Software Freedom Day 2008 was celebrated today at Malabar Christian College, Kozhikode. The event was organized by Swatantra Malayalam Computing, in association with Malabar Christian College. The main attraction was a seminar on Language Computing, led by SMC. There was also a GNU/Linux install fest and demo in parallel. We installed GNU/Linux on around 6-7 systems, apart from the 5 in the computer lab of Malabar Christian College. There was also an installation demo for hardware technicians. I couldn't attend the seminar, and I'm looking forward to reading other reports about it. One of the highlights of the day was the revival of the Free Software Users' Group Calicut, which had been dormant for over two years. Jemshid of Ascent Engineers and the team from KSEB led by Mohammed Unais have taken the initiative to kick-start the community's activities. There were a few representatives from GEC West Hill and AWH Engg. College, and we've decided to organize a few workshops to get some people from those colleges involved in FOSS as well, and to try to create a network of college FOSS communities. Shyam put forward the necessity of a common platform for engineering colleges throughout Kerala, based on Free Software. We also have to explore the possibility of encouraging students to take up Free Software development as their projects. Jemshid and his team had managed to contact the Malabar IT dealers' association, and their representatives had turned up. They expressed genuine interest in migrating to GNU/Linux as the default installation on new systems they sell. They would thus be able to avoid distributing so-called "pirated" software.
We have proposed to arrange a basic GNU/Linux workshop for the hardware vendors. This move has the potential to start a revolution. If they can show their customers that they can do almost anything on GNU/Linux that they normally use a computer for, they’ll be encouraged to switch to it. And the customers will have someone to turn to for support. On the whole, the event was a great success. Tomorrow, we have a small event planned in our campus. More on that later. See other blogs and photos of the event: Hiran ## Software Freedom Day at NITC We are planning to celebrate the Software Freedom Day through an install fest and demos. There was a meeting today to get some volunteers for the event, and around twenty S3 students turned up. Only some of them have used GNU/Linux before, and they have been given the task of familiarising the others with it before the event. We are also hopeful of launching our FOSS Cell officially on that day. More about the event as it materializes… ## Liberation of Environment Knowledge Repository The Centre for Science and Environment, in association with the National Knowledge Commission, has set up a National Portal on Environment. Read more here I became a keen reader of CSE’s Down to Earth magazine, while I was at IUAC. It’s very informative and covers stories of development and environment from a rural perspective- many things which never appear in the mainstream media. It is great to know that the Environment Portal will make the whole Down to Earth archive freely available.
https://mycrystalgrove.com/products/bumblebee-jasper-lotus-cabochons
# Bumblebee Jasper Lotus Cabochons

Price: \$12.00. Shipping calculated at checkout. We have 5 in stock.

Carved from bumblebee jasper, these unique cabochons are flat-backed with uneven tops. They are approximately 25.5x20mm to 29.5x25.5mm in size and are sold per randomly selected piece. These cabochons are limited; I don't know if I'll be able to get them again in the future. Please note that these cabochons feature natural pitting and inclusions due to the nature of this stone.
https://openstax.org/books/introductory-statistics/pages/10-formula-review
Introductory Statistics

# Formula Review

### 10.1 Two Population Means with Unknown Standard Deviations

Standard error: $SE = \sqrt{\frac{(s_1)^2}{n_1} + \frac{(s_2)^2}{n_2}}$

Test statistic (t-score): $t = \frac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{\frac{(s_1)^2}{n_1} + \frac{(s_2)^2}{n_2}}}$

Degrees of freedom: $df = \frac{\left(\frac{(s_1)^2}{n_1} + \frac{(s_2)^2}{n_2}\right)^2}{\frac{1}{n_1 - 1}\left(\frac{(s_1)^2}{n_1}\right)^2 + \frac{1}{n_2 - 1}\left(\frac{(s_2)^2}{n_2}\right)^2}$

where: $s_1$ and $s_2$ are the sample standard deviations, $n_1$ and $n_2$ are the sample sizes, and $\bar{x}_1$ and $\bar{x}_2$ are the sample means.

Cohen's d is the measure of effect size: $d = \frac{\bar{x}_1 - \bar{x}_2}{s_{pooled}}$ where $s_{pooled} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}$

### 10.2 Two Population Means with Known Standard Deviations

Normal distribution: $\bar{X}_1 - \bar{X}_2 \sim N\left[\mu_1 - \mu_2,\ \sqrt{\frac{(\sigma_1)^2}{n_1} + \frac{(\sigma_2)^2}{n_2}}\right]$. Generally $\mu_1 - \mu_2 = 0$.

Test statistic (z-score): $z = \frac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{\frac{(\sigma_1)^2}{n_1} + \frac{(\sigma_2)^2}{n_2}}}$

where: $\sigma_1$ and $\sigma_2$ are the known population standard deviations, $n_1$ and $n_2$ are the sample sizes, $\bar{x}_1$ and $\bar{x}_2$ are the sample means, and $\mu_1$ and $\mu_2$ are the population means.

### 10.3 Comparing Two Independent Population Proportions

Pooled proportion: $p_c = \frac{x_A + x_B}{n_A + n_B}$

Distribution for the differences: $p'_A - p'_B \sim N\left[0,\ \sqrt{p_c(1 - p_c)\left(\frac{1}{n_A} + \frac{1}{n_B}\right)}\right]$ where the null hypothesis is $H_0: p_A = p_B$ or $H_0: p_A - p_B = 0$.

Test statistic (z-score): $z = \frac{(p'_A - p'_B)}{\sqrt{p_c(1 - p_c)\left(\frac{1}{n_A} + \frac{1}{n_B}\right)}}$ where the null hypothesis is $H_0: p_A = p_B$ or $H_0: p_A - p_B = 0$.

where $p'_A$ and $p'_B$ are the sample proportions, $p_A$ and $p_B$ are the population proportions, $p_c$ is the pooled proportion, and $n_A$ and $n_B$ are the sample sizes.
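The 10.1 formulas (standard error, t-score, degrees of freedom) can be exercised numerically; below is a sketch in Python with made-up sample data — the function name `welch_t` and the two samples are my own illustration, not from the text.

```python
import math

def welch_t(x1, x2, delta0=0.0):
    """Two-sample t-score and degrees of freedom for the
    unknown-standard-deviations case (Section 10.1 formulas)."""
    n1, n2 = len(x1), len(x2)
    m1 = sum(x1) / n1
    m2 = sum(x2) / n2
    # sample variances (divide by n - 1)
    v1 = sum((x - m1) ** 2 for x in x1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in x2) / (n2 - 1)
    se = math.sqrt(v1 / n1 + v2 / n2)      # standard error
    t = ((m1 - m2) - delta0) / se          # t-score
    # Welch-Satterthwaite degrees of freedom
    df = (v1 / n1 + v2 / n2) ** 2 / (
        (v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

t, df = welch_t([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])
print(round(t, 3), round(df, 3))  # -1.897 5.882
```

Note that the degrees of freedom land between $\min(n_1, n_2) - 1$ and $n_1 + n_2 - 2$, as expected for this approximation.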
### 10.4 Matched or Paired Samples

Test statistic (t-score): $t = \frac{\bar{x}_d - \mu_d}{\left(\frac{s_d}{\sqrt{n}}\right)}$

where: $\bar{x}_d$ is the mean of the sample differences, $\mu_d$ is the mean of the population differences, $s_d$ is the sample standard deviation of the differences, and $n$ is the sample size.

Access for free at https://openstax.org/books/introductory-statistics/pages/1-introduction. © Sep 19, 2013 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License 4.0 license.
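Section 10.4's matched-pairs statistic reduces to a one-sample t-test on the pairwise differences; here is a sketch in Python with invented before/after numbers (the function name and data are my own, not from the text).

```python
import math

def paired_t(before, after, mu_d=0.0):
    """t-score for matched or paired samples (Section 10.4):
    a one-sample t-test on the pairwise differences."""
    d = [b - a for b, a in zip(before, after)]
    n = len(d)
    d_bar = sum(d) / n                       # mean of the sample differences
    s_d = math.sqrt(sum((x - d_bar) ** 2 for x in d) / (n - 1))
    return (d_bar - mu_d) / (s_d / math.sqrt(n))

# hypothetical before/after measurements whose differences are 1, 2, 3, 4
t = paired_t([5, 7, 9, 11], [4, 5, 6, 7])
print(round(t, 3))  # 3.873
```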
http://tex.stackexchange.com/questions/36343/multiline-text-under-over-arrows
# Multiline text under/over arrows [duplicate]

I want to write some multi-line text under a long arrow. Normally I use \xrightarrow[text below]{text over}, but if I try to add a \\ in one of them (say, I want "text below" to have two lines), I get a compilation error. What can I do? I'm sure it's solvable. Thanks, John.

## Marked as duplicate by Gonzalo Medina and Stefan Kottwitz, Nov 27 '11 at 23:35

- Welcome to TeX Stack Exchange! It would be helpful if you provided a full MWE (minimum working example) showing what you're trying to achieve. In the present case, information on the material you're trying to set below the \xrightarrow would be very useful, as would be details on the "compilation error" you report getting. – Mico, Nov 27 '11 at 23:12
- @John: change l (left) to c (center) in the argument of subarray: \begin{subarray}{c} a \\ b \end{subarray}. – Gonzalo Medina, Nov 27 '11 at 23:41
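Gonzalo Medina's comment points at the `subarray` environment; a minimal compilable sketch is below (the surrounding document setup is assumed, and amsmath's `\substack` is shorthand for the same construction):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% two-line text under a long arrow, via subarray with centred (c) columns
\[
A \xrightarrow[\begin{subarray}{c} \text{first line} \\ \text{second line} \end{subarray}]{\text{text over}} B
\]
% \substack is the equivalent amsmath shorthand
\[
A \xrightarrow[\substack{\text{first line} \\ \text{second line}}]{\text{text over}} B
\]
\end{document}
```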
http://taggedwiki.zubiaga.org/new_content/18d9d89eed375b6352f0f41903a276d4
# Covariance

In probability theory and statistics, covariance is a measure of how much two variables change together (variance is a special case of the covariance when the two variables are identical). If two variables tend to vary together (that is, when one of them is above its expected value, the other variable tends to be above its expected value too), then the covariance between the two variables will be positive. On the other hand, if one of them tends to be above its expected value when the other variable is below its expected value, then the covariance between the two variables will be negative.

## Definition

The covariance between two real-valued random variables X and Y, with expected values $\scriptstyle E(X)\,=\,\mu$ and $\scriptstyle E(Y)\,=\,\nu$, is defined as

$\operatorname{Cov}(X, Y) = \operatorname{E}((X - \mu) (Y - \nu)), \,$

where E is the expected value operator. This can also be written:

$\operatorname{Cov}(X, Y) = \operatorname{E}(X \cdot Y - \mu Y - \nu X + \mu \nu) = \operatorname{E}(X \cdot Y) - \mu \operatorname{E}(Y) - \nu \operatorname{E}(X) + \mu \nu = \operatorname{E}(X \cdot Y) - \mu \nu. \,$

Random variables whose covariance is zero are called uncorrelated. If X and Y are independent, then their covariance is zero. This follows because under independence, $E(X \cdot Y)=E(X) \cdot E(Y)=\mu\nu.$ Recalling the final form of the covariance derivation given above, and substituting, we get $\operatorname{Cov}(X, Y) = \mu \nu - \mu \nu = 0.$ The converse, however, is generally not true: some pairs of random variables have covariance zero although they are not independent. Under some additional assumptions, zero covariance sometimes does entail independence, as for example in the case of multivariate normal distributions. The units of measurement of the covariance Cov(X, Y) are those of X times those of Y.
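The identity $\operatorname{Cov}(X, Y) = \operatorname{E}(X \cdot Y) - \mu \nu$ derived above can be checked numerically on sample estimates; a sketch in Python/NumPy (the simulated data is my own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = 2.0 * x + rng.normal(size=10_000)   # correlated with x by construction

# population-style estimates (divide by n), so the identity holds exactly
cov_direct = np.mean((x - x.mean()) * (y - y.mean()))
cov_identity = np.mean(x * y) - x.mean() * y.mean()

print(np.isclose(cov_direct, cov_identity))  # True
```

The two expressions are algebraically equal, so they agree up to floating-point rounding for any data set.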
By contrast, correlation, which depends on the covariance, is a dimensionless measure of linear dependence.

## Properties

If X, Y, W, and V are real-valued random variables and a, b, c, d are constants ("constant" in this context means non-random), then the following facts are a consequence of the definition of covariance:

$\operatorname{Cov}(X, a) = 0 \,$

$\operatorname{Cov}(X, X) = \operatorname{Var}(X)\,$

$\operatorname{Cov}(X, Y) = \operatorname{Cov}(Y, X)\,$

$\operatorname{Cov}(aX, bY) = ab\, \operatorname{Cov}(X, Y)\,$

$\operatorname{Cov}(X+a, Y+b) = \operatorname{Cov}(X, Y)\,$

$\operatorname{Cov}(aX+bY, cW+dV) = ac\,\operatorname{Cov}(X,W)+ad\,\operatorname{Cov}(X,V)+bc\,\operatorname{Cov}(Y,W)+bd\,\operatorname{Cov}(Y,V)\,$

For sequences X1, ..., Xn and Y1, ..., Ym of random variables, we have

$\operatorname{Cov}\left(\sum_{i=1}^n {X_i}, \sum_{j=1}^m{Y_j}\right) = \sum_{i=1}^n{\sum_{j=1}^m{\operatorname{Cov}\left(X_i, Y_j\right)}}.\,$

For a sequence X1, ..., Xn of random variables, and constants a1, ..., an, we have

$\operatorname{Var}\left(\sum_{i=1}^n a_iX_i \right) = \sum_{i=1}^n a_i^2\operatorname{Var}(X_i) + 2\sum_{i,j\,:\,i<j} a_i a_j \operatorname{Cov}(X_i, X_j).\,$

### Incremental computation

Covariance can be computed efficiently from incrementally available values using a generalization of the computational formula for the variance:

$\operatorname{Cov}(X_i, X_j) = \operatorname{E}\left((X_i-\operatorname{E}(X_i))(X_j-\operatorname{E}(X_j))\right) = \operatorname{E}(X_iX_j) -\operatorname{E}(X_i)\operatorname{E}(X_j)$

### Relationship to inner products

Many of the properties of covariance can be extracted elegantly by observing that it satisfies similar properties to those of an inner product:

(1) bilinear: for constants a and b and random variables X, Y, and U, Cov(aX + bY, U) = a Cov(X, U) + b Cov(Y, U)

(2) symmetric: Cov(X, Y) = Cov(Y, X)

(3) positive semi-definite: Var(X) = Cov(X, X) ≥ 0, and Cov(X, X) = 0 implies that X is a constant random variable (almost surely).
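The incremental computation mentioned above can be carried out in a single streaming pass; here is a Welford-style sketch in Python (the function and variable names are mine, not from the text):

```python
def streaming_cov(pairs):
    """One-pass covariance of (x, y) pairs, updating the running means
    and the co-moment C = sum((x - mean_x)(y - mean_y)) incrementally."""
    n = 0
    mean_x = mean_y = c = 0.0
    for x, y in pairs:
        n += 1
        dx = x - mean_x              # deviation from the OLD x-mean
        mean_x += dx / n
        mean_y += (y - mean_y) / n
        # uses the old dx together with the UPDATED y-mean
        c += dx * (y - mean_y)
    return c / n  # population covariance; divide by n - 1 for the sample version

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
print(streaming_cov(data))  # 2.5
```

Here y = 2x exactly, so the result equals twice the population variance of x (2 × 1.25 = 2.5); mixing the old deviation with the updated mean is what keeps the update numerically stable.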
It can be shown that the covariance is an inner product over some subspace of the vector space of random variables with finite second moment.

## Covariance matrix, operator, bilinear form, and function

For column-vector valued random variables X and Y with respective expected values μ and ν, and respective scalar components m and n, the covariance is defined to be the m×n matrix called the covariance matrix:

$\operatorname{Cov}(X, Y) = \operatorname{E}((X-\mu)(Y-\nu)^\top).\,$

For vector-valued random variables, Cov(X, Y) and Cov(Y, X) are each other's transposes. More generally, for a probability measure P on a Hilbert space H with inner product $\langle \cdot,\cdot\rangle$, the covariance of P is the bilinear form Cov: H × H → R given by

$\mathrm{Cov}(x, y) = \int_{H} \langle x, z \rangle \langle y, z \rangle \, \mathrm{d} \mathbf{P} (z)$

for all x and y in H. The covariance operator C is then defined by

$\mathrm{Cov}(x, y) = \langle Cx, y \rangle$

(by the Riesz representation theorem, such an operator exists if Cov is bounded). Since Cov is symmetric in its arguments, the covariance operator is self-adjoint (the infinite-dimensional analogue of the transposition symmetry in the finite-dimensional case). When P is a centred Gaussian measure, C is also a nuclear operator. In particular, it is a compact operator of trace class, that is, it has finite trace. Even more generally, for a probability measure P on a Banach space B, the covariance of P is the bilinear form on the algebraic dual $B^\#$, defined by

$\mathrm{Cov}(x, y) = \int_{B} \langle x, z \rangle \langle y, z \rangle \, \mathrm{d} \mathbf{P} (z)$

where $\langle x, z \rangle$ is now the value of the linear functional x on the element z.
Quite similarly, the covariance function of a function-valued random element (in special cases called a random process or random field) z is

$\mathrm{Cov}(x, y) = \int z(x) z(y) \, \mathrm{d} \mathbf{P} (z) = E(z(x) z(y)),$

where z(x) is now the value of the function z at the point x, i.e., the value of the linear functional $u \mapsto u(x)$ evaluated at z.

## Comments

The covariance is sometimes called a measure of "linear dependence" between the two random variables. That does not mean the same thing as in the context of linear algebra (see linear dependence). When the covariance is normalized, one obtains the correlation matrix. From it, one can obtain the Pearson coefficient, which gives the goodness of fit for the best possible linear function describing the relation between the variables. In this sense covariance is a linear gauge of dependence.
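Normalizing the covariance by the two standard deviations yields the Pearson coefficient discussed above; a sketch in Python/NumPy (simulated data, my own illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=5_000)
y = 0.5 * x + rng.normal(size=5_000)

# Pearson r = Cov(X, Y) / (sigma_X * sigma_Y)
cov_xy = np.cov(x, y)[0, 1]                          # sample covariance (ddof=1)
r_manual = cov_xy / (x.std(ddof=1) * y.std(ddof=1))  # matching ddof

# agrees with NumPy's correlation matrix
r_numpy = np.corrcoef(x, y)[0, 1]
print(np.isclose(r_manual, r_numpy))  # True
```

The `ddof=1` choices must match on both sides of the ratio, since `np.cov` uses the sample (n − 1) normalization by default.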
https://kluedo.ub.uni-kl.de/frontdoor/index/index/year/1999/docId/846
On Center Cycles in Grid Graphs

Finding "good" cycles in graphs is a problem of great interest in graph theory as well as in locational analysis. We show that the center and median problems are NP-hard in general graphs. This result holds both for the variable cardinality case (i.e. all cycles of the graph are considered) and the fixed cardinality case (i.e. only cycles with a given cardinality p are feasible). Hence it is of interest to investigate special cases where the problem is solvable in polynomial time. In grid graphs, the variable cardinality case is, for instance, trivially solvable if the shape of the cycle can be chosen freely. If the shape is fixed to be a rectangle, one can analyse rectangles in grid graphs with, in sequence, fixed dimension, fixed cardinality, and variable cardinality. In all cases a complete characterization of the optimal cycles and closed-form expressions of the optimal objective values are given, yielding polynomial-time algorithms for all cases of center rectangle problems. Finally, it is shown that center cycles can be chosen as rectangles for small cardinalities, such that the center cycle problem in grid graphs is in these cases completely solved.
https://math.stackexchange.com/questions/1532420/whats-the-relation-between-stirling-numbers-and-the-generating-functions
# What's the relation between Stirling numbers and the generating functions?

I just started studying higher combinatorics, but until now, in the combinatorial sense, I had only seen the binomial theorem and binomial coefficients. Therefore, I'm having a lot of difficulty grasping the material. I recently studied Stirling numbers of the first kind and second kind, though I mostly concentrated on the second kind, and I learned that $S_{n,k}$ counts the partitions of an $n$-element set into exactly $k$ nonempty blocks. For example, if $n=4$ and $k=2$, then we can partition the $4$-element set into two blocks, either making one block contain $1$ element and the other $3$ elements, or making both blocks contain $2$ elements, where I also saw that $S_{n,2} = 2^{n-1}-1$. Anyway, now I have entered the realm of generating functions, and I don't exactly understand what they are, why we need them, and particularly what their relation is with the Stirling numbers that I studied lately. I know that this seems like a broad question, but I would like to hear something about this connection. I have already been reading books and checking Wikipedia, but it is still not clear to me.
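One concrete way to experiment with the numbers in the question is to compute $S(n,k)$ from the standard recurrence $S(n,k) = k\,S(n-1,k) + S(n-1,k-1)$ and check the closed form $S_{n,2} = 2^{n-1}-1$ quoted above; a sketch in Python (not part of the question itself):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind: number of ways to
    partition an n-element set into exactly k nonempty blocks."""
    if n == k:
        return 1          # each element in its own block (includes S(0,0) = 1)
    if k == 0 or k > n:
        return 0
    # the n-th element either joins one of k existing blocks, or starts a new one
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

print(stirling2(4, 2))  # 7, matching the n=4, k=2 count in the question
print(all(stirling2(n, 2) == 2 ** (n - 1) - 1 for n in range(1, 12)))  # True
```

For $n=4$, $k=2$ the seven partitions split as 4 of shape 1+3 and 3 of shape 2+2, agreeing with $2^3 - 1 = 7$.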
https://logic-library.berkeley.edu/catalog/index/T?page=5
Takeuti, Gaisi, On Hierarchies of Predicates of Ordinal Numbers
Takeuti, Gaisi, On the Weak Definability of Set Theory
Takeuti, Gaisi, Quantum Set Theory
Takeuti, Gaisi, Transcendency of Cardinals
Takeuti, Gaisi ; Maehara, Shoji, The First-Order Predicate Logic with Infinitely Long Expressions with Equality (1962)
Takeuti, Gaisi ; Yasugi, Mariko, Fundamental Sequences of Ordinal Diagrams (1976)
Tanaka, Hisao, On the Axiom of Determinacy (1977)
Tanaka, Hisao, On Limits of Sequences of Hyperarithmetical Functionals and Predicates (1966)
Tanaka, Hisao, On Limits of Sequences of Recursive Functions (1966)
Tanaka, Hisao, On Theories Formalized in the First-Order Predicate Logic with Infinitely Long Expressions (1966)
https://indico.math.cnrs.fr/event/3052/contributions
# Transitions de phase et équations non locales

25-27 April 2018, Simion Stoilow Institute of Mathematics of the Romanian Academy (Europe/Bucharest timezone)

## List of Contributions

Displaying 17 contributions out of 17.

In this talk, I will present partial regularity results for fractional harmonic maps. The underlying equation is the analogue of the system for manifold-valued harmonic maps, where the Laplacian is replaced by the fractional Laplacian. I will also explain their link with free-boundary minimal surfaces and the …

Presented by Vincent MILLOT on 27 Apr 2018 at 16:30

In this lecture we present results concerning two distinct problems, obtained in collaboration with Marian Bocea. First, we study the family of partial differential equations $-\varepsilon\Delta u-2\Delta_\infty u = 0$ ($\varepsilon >0$) in a domain $\Omega$ with a Dirichlet boundary condition. In the case $\varepsilon = 1$, which is …

Presented by Mihai MIHĂILESCU on 27 Apr 2018 at 15:00

When $sp\ge N$ the space $W^{s,p}(S^N,S^N)$ can be decomposed into homotopy classes according to the degree of the maps. We consider two natural distances between different classes. We prove estimates, and in some cases even explicit formulas, for these distances. Most of the work is joint with Haim Brezis (Rutgers and Technion) and Petru Mironescu (Lyon 1).

Presented by Itai SHAFRIR on 27 Apr 2018 at 10:00

We will present recent results obtained in collaboration with S. Conti (U. Bonn), G. Francfort (U. Paris-Nord), V. Crismale (E. Polytechnique, Palaiseau) and F. Iurlano (U. Pierre et Marie Curie, Paris) on the brittle fracture model of Francfort and Marigo (1998), which is a variational version of Griffith's classical model to predict crack growth.
We will discuss existence of minimizers for the s…

Presented by Antonin CHAMBOLLE on 25 Apr 2018 at 09:00

We reconsider the proof of uniqueness of isometric immersions of two-dimensional spheres with positive Gauss curvature, with derivatives in a certain Hölder class. We observe that an understanding of the integrability properties of the Brouwer degree is crucial to extend the range of validity for the uniqueness statement. We take this as a motivation to state and prove a theorem about the integra…

Presented by Heiner OLBERMANN on 26 Apr 2018 at 16:30

The lecture will discuss the classical Oseen-Frank theory of nematic liquid crystals, and some results with Epifanio Virga on energy-minimizing properties of universal solutions, and with Lu Liu on exterior problems.

Presented by John BALL on 25 Apr 2018 at 16:30

Consider a two-dimensional domain shaped like a wire, not necessarily of uniform cross section. Let $V$ denote an electric potential driven by a voltage drop between the conducting surfaces of the wire. We consider the operator $A_h=-h^2\Delta+iV$ in the semi-classical limit $h\to0$. We obtain both the asymptotic behaviour of the left margin of the spectrum, as well as resolvent es…

Presented by Bernard HELFFER on 26 Apr 2018 at 14:00

We present results obtained in collaboration with Horia Cornean, Bernard Helffer and Viorel Iftimie concerning the use of the magnetic pseudodifferential calculus for the construction of Peierls-Onsager effective Hamiltonians in the study of electrons in a periodic potential and a weak, smooth magnetic field.

Presented by Radu PURICE on 26 Apr 2018 at 15:00

Nematic liquid crystals are matter in an intermediate phase between the solid and the liquid ones. The constituent molecules, while isotropically distributed in space, retain long-range orientational order.
The classical variational theories for nematic liquid crystals are quadratic in the gradient and, as a consequence, configurations with a singular line have infinite energy within these theories…

Presented by Giacomo CANEVARI on 27 Apr 2018 at 11:30

on 25 Apr 2018 at 08:45

The class of entropy solutions to the eikonal equation arises in connection with the asymptotics of the Aviles-Giga energy, a model related to smectic liquid crystals, thin film elasticity and micromagnetism. We prove, using a new simple form of the kinetic formulation, that this class coincides with the class of solutions which enjoy a certain Besov regularity.

Presented by Xavier LAMY on 26 Apr 2018 at 11:30

Motivated by a conjecture of De Giorgi on the Allen-Cahn equation and classification results for some of its solutions, we will describe recent results related to one-dimensional symmetry for solutions of nonlocal equations involving possibly nonlinear nonlocal operators. We will concentrate mainly on low dimensions and present several ways to attack this problem. We will then describe open problems…

Presented by Yannick SIRE on 27 Apr 2018 at 14:00

It is nowadays classical that phase transition models such as the Cahn-Hilliard energy can be used to regularize some more delicate functionals of geometric nature such as the perimeter functional or, more generally, the $(N-1)$-Hausdorff measure. This procedure is sometimes called a phase-field method in numerical analysis and has been used in order to approximate some classical shape optimization…

Presented by Antoine LEMENANT on 25 Apr 2018 at 11:30

We consider energy minimizing configurations of a nematic liquid crystal, as described by the Landau-de Gennes model. We focus on an important model problem concerning a nematic surrounding a spherical colloid particle, with normal anchoring at the surface.
For topological reasons, the nematic director must exhibit a defect (singularity), which may take the form of a point or line defect. We co ...
Presented by Lia BRONSARD on 27 Apr 2018 at 09:00

The Heisenberg groups are examples of sub-Riemannian manifolds homeomorphic, but not diffeomorphic, to Euclidean space. Their metric is derived from curves which are only allowed to move in so-called horizontal directions. When one considers approximation or extension problems for Sobolev maps into Riemannian manifolds, it is known that topological properties of the target manifold play a r ...
Presented by Armin SCHIKORRA on 26 Apr 2018 at 10:00

The aim of this talk is to present quantitative estimates for transport equations with rough, i.e. non-smooth, velocity fields. The final goal is to use those estimates to obtain new global existence results à la Leray for complex systems where the transport equation is coupled to other PDEs, for instance as in fluid mechanics. We will explain, for instance, how it helps to treat phase transiti ...
Presented by Didier BRESCH on 26 Apr 2018 at 09:00

We consider a problem in the calculus of variations for scalar-valued functions defined on a bounded open set, with a convex integrand which is neither smooth nor strictly convex. We describe the regularity and uniqueness properties of the solutions. This is joint work with Guy Bouchitté. We present a scalar problem in the multiple ...
Presented by Pierre BOUSQUET on 25 Apr 2018 at 10:00
https://homework.cpm.org/category/CCI_CT/textbook/apcalc/chapter/3/lesson/3.3.1/problem/3-94
### Problem 3-94

Show that if $f^\prime$ is an even function and $f(0) = 0$, then $f$ is odd. Demonstrate this fact with a graph.

Sketch different examples of possible $f^\prime(x)$ functions that are both even and pass through the origin. Then sketch their antiderivatives $f(x)$:

$f'(x)=x^{2} \quad\longrightarrow\quad f(x)=\tfrac{1}{3}x^{3}+C$

$f'(x)=x^{4} \quad\longrightarrow\quad f(x)=\tfrac{1}{5}x^{5}+C$

$f'(x)=\sin x \quad\longrightarrow\quad f(x)=-\cos x+C$

Make a conjecture: does this really require the even derivative to pass through the origin, or only that $f(0) = 0$? For example, consider the even function $f^\prime(x) = x^2 + 1$.
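The claim in 3-94 follows directly from the Fundamental Theorem of Calculus; a short derivation (added here as a worked step, not part of the original hint):

```latex
f(-x) = f(0) + \int_0^{-x} f'(t)\,dt
      = \int_0^{-x} f'(t)\,dt          % since f(0) = 0
      = -\int_0^{x} f'(-u)\,du         % substitute t = -u
      = -\int_0^{x} f'(u)\,du          % f' is even: f'(-u) = f'(u)
      = -\bigl(f(x) - f(0)\bigr)
      = -f(x).
```

Hence $f(-x) = -f(x)$, i.e., $f$ is odd. Note that only $f(0)=0$ and the evenness of $f'$ are used.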
https://hub.syn.tools/rook-ceph/runbooks/CephPGNotDeepScrubbed.html
Please consider opening a PR to improve this runbook if you gain new information about causes of the alert, or how to debug or resolve the alert. Click "Edit this Page" in the top right corner to create a PR directly on GitHub.

## Overview

One or more PGs haven't been deep scrubbed recently. Deep scrub is a data integrity feature, protecting against bit-rot: it compares the contents of objects and their replicas for inconsistency. When PGs miss their deep scrub window, it may indicate that the window is too small or that PGs weren't in a 'clean' state during the deep-scrub window.

## Steps for debugging

### Initiate a deep scrub

```
$ ceph_cluster_ns=syn-rook-ceph-cluster
$ kubectl -n ${ceph_cluster_ns} exec -it deploy/rook-ceph-tools -- ceph health detail
$ kubectl -n ${ceph_cluster_ns} exec -it deploy/rook-ceph-tools -- ceph pg deep-scrub <PG_ID_FROM_ALERT>
```
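As a hedged illustration (not part of the runbook), the PG IDs named in the `ceph health detail` output can be extracted with `awk` before being passed to `ceph pg deep-scrub`. The line format in the sample below is an assumption; verify it against your Ceph release before relying on it:

```shell
# Sample lines as they may appear in `ceph health detail` (assumed format):
sample='pg 2.1a not deep-scrubbed since 2023-03-01T10:00:00
pg 3.0f not deep-scrubbed since 2023-03-02T11:30:00'

# The PG ID is the second whitespace-separated field of each matching line.
pgs=$(printf '%s\n' "$sample" | awk '/not deep-scrubbed since/ {print $2}')
echo "$pgs"

# Each extracted ID would then be fed to the deep-scrub command, e.g.:
#   kubectl -n syn-rook-ceph-cluster exec -it deploy/rook-ceph-tools -- \
#     ceph pg deep-scrub "$pg"
```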
http://www.theinfolist.com/html/ALL/s/commutative_ring.html
TheInfoList

In ring theory, a branch of abstract algebra, a commutative ring is a ring in which the multiplication operation is commutative. The study of commutative rings is called commutative algebra. Complementarily, noncommutative algebra is the study of noncommutative rings, where multiplication is not required to be commutative.

Definition and first examples

Definition

A ''ring'' is a set $R$ equipped with two binary operations, i.e. operations combining any two elements of the ring to a third. They are called ''addition'' and ''multiplication'' and commonly denoted by "$+$" and "$\cdot$"; e.g. $a+b$ and $a \cdot b$. To form a ring these two operations have to satisfy a number of properties: the ring has to be an abelian group under addition as well as a monoid under multiplication, where multiplication distributes over addition, i.e., $a \cdot (b + c) = (a \cdot b) + (a \cdot c)$. The identity elements for addition and multiplication are denoted $0$ and $1$, respectively. If the multiplication is commutative, i.e. $a \cdot b = b \cdot a$, then the ring $R$ is called ''commutative''. In the remainder of this article, all rings will be commutative, unless explicitly stated otherwise.

First examples

An important example, and in some sense crucial, is the ring of integers $\mathbb{Z}$ with the two operations of addition and multiplication. As the multiplication of integers is a commutative operation, this is a commutative ring. It is usually denoted $\mathbb{Z}$ as an abbreviation of the German word ''Zahlen'' (numbers). A field is a commutative ring where $0 \neq 1$ and every non-zero element $a$ is invertible, i.e. has a multiplicative inverse $b$ such that $a \cdot b = 1$. Therefore, by definition, any field is a commutative ring.
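For a finite ring the axioms can be checked exhaustively; a small Python sketch (an illustration added here, not from the article) verifying commutativity, distributivity and the identity elements in $\mathbb{Z}/6\mathbb{Z}$:

```python
# Exhaustive check of selected commutative-ring axioms in Z/6Z,
# where addition and multiplication are taken modulo 6.
n = 6
elements = range(n)

add = lambda a, b: (a + b) % n
mul = lambda a, b: (a * b) % n

# Commutativity of multiplication: a*b == b*a
assert all(mul(a, b) == mul(b, a) for a in elements for b in elements)
# Distributivity: a*(b+c) == a*b + a*c
assert all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
           for a in elements for b in elements for c in elements)
# Identity elements: 0 for addition, 1 for multiplication
assert all(add(a, 0) == a and mul(a, 1) == a for a in elements)
print("Z/6Z satisfies the checked commutative-ring axioms")
```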
The rational, real and complex numbers form fields. If $R$ is a given commutative ring, then the set of all polynomials in the variable $X$ whose coefficients are in $R$ forms the polynomial ring, denoted $R[X]$. The same holds true for several variables. If $V$ is some topological space, for example a subset of some $\mathbb{R}^n$, real- or complex-valued continuous functions on $V$ form a commutative ring. The same is true for differentiable or holomorphic functions, when the two concepts are defined, such as for $V$ a complex manifold.

Divisibility

In contrast to fields, where every nonzero element is multiplicatively invertible, the concept of divisibility for rings is richer. An element $a$ of a ring $R$ is called a unit if it possesses a multiplicative inverse. Another particular type of element is the zero divisors, i.e. an element $a$ such that there exists a non-zero element $b$ of the ring such that $ab = 0$. If $R$ possesses no non-zero zero divisors, it is called an integral domain (or domain). An element $a$ satisfying $a^n = 0$ for some positive integer $n$ is called nilpotent.

Localizations

The ''localization'' of a ring is a process in which some elements are rendered invertible, i.e. multiplicative inverses are added to the ring. Concretely, if $S$ is a multiplicatively closed subset of $R$ (i.e. whenever $s,t \in S$ then so is $st$) then the ''localization'' of $R$ at $S$, or ''ring of fractions'' with denominators in $S$, usually denoted $S^{-1}R$, consists of symbols subject to certain rules that mimic the cancellation familiar from rational numbers. Indeed, in this language $\mathbb{Q}$ is the localization of $\mathbb{Z}$ at all nonzero integers. This construction works for any integral domain $R$ instead of $\mathbb{Z}$.
The localization $(R \setminus \{0\})^{-1}R$ is a field, called the quotient field of $R$.

Ideals and modules

Many of the following notions also exist for not necessarily commutative rings, but the definitions and properties are usually more complicated. For example, all ideals in a commutative ring are automatically two-sided, which simplifies the situation considerably.

Modules and ideals

For a ring $R$, an $R$-''module'' $M$ is like what a vector space is to a field. That is, elements in a module can be added; they can be multiplied by elements of $R$ subject to the same axioms as for a vector space. The study of modules is significantly more involved than the one of vector spaces in linear algebra, since several features of vector spaces fail for modules in general: modules need not be free, i.e., of the form $M = \bigoplus_{i \in I} R$. Even for free modules, the rank of a free module (i.e. the analog of the dimension of vector spaces) may not be well-defined. Finally, submodules of finitely generated modules need not be finitely generated (unless $R$ is Noetherian, see below).

Ideals

''Ideals'' of a ring $R$ are the submodules of $R$, i.e., the modules contained in $R$. In more detail, an ideal $I$ is a non-empty subset of $R$ such that for all $r$ in $R$, $i$ and $j$ in $I$, both $ri$ and $i+j$ are in $I$. For various applications, understanding the ideals of a ring is of particular importance, but often one proceeds by studying modules in general. Any ring has two ideals, namely the zero ideal $\{0\}$ and $R$, the whole ring. These two ideals are the only ones precisely if $R$ is a field. Given any subset $F = \{f_j\}_{j \in J}$ of $R$ (where $J$ is some index set), the ideal ''generated by $F$'' is the smallest ideal that contains $F$.
Equivalently, it is given by finite linear combinations $r_1 f_1 + r_2 f_2 + \dots + r_n f_n$.

Principal ideal domains

If $F$ consists of a single element $r$, the ideal generated by $F$ consists of the multiples of $r$, i.e., the elements of the form $rs$ for arbitrary elements $s$. Such an ideal is called a principal ideal. If every ideal is a principal ideal, $R$ is called a principal ideal ring; two important cases are $\mathbb{Z}$ and $k[X]$, the polynomial ring over a field $k$. These two are in addition domains, so they are called principal ideal domains. Unlike for general rings, for a principal ideal domain the properties of individual elements are strongly tied to the properties of the ring as a whole. For example, any principal ideal domain $R$ is a unique factorization domain (UFD), which means that any element is a product of irreducible elements, in a (up to reordering of factors) unique way. Here, an element $a$ in a domain is called irreducible if the only way of expressing it as a product $a = bc$ is by either $b$ or $c$ being a unit. An example, important in field theory, are irreducible polynomials, i.e., irreducible elements in $k[X]$, for a field $k$. The fact that $\mathbb{Z}$ is a UFD can be stated more elementarily by saying that any natural number can be uniquely decomposed as a product of powers of prime numbers. It is also known as the fundamental theorem of arithmetic. An element $a$ is a prime element if whenever $a$ divides a product $bc$, $a$ divides $b$ or $c$. In a domain, being prime implies being irreducible. The converse is true in a unique factorization domain, but false in general.
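Unique factorization in $\mathbb{Z}$ can be made concrete with a short trial-division routine (an illustrative sketch added here, not from the article):

```python
def factor(n):
    """Decompose a natural number n > 1 into its sorted list of prime
    factors by trial division; by the fundamental theorem of arithmetic
    this decomposition is unique up to the order of the factors."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:           # remaining cofactor is prime
        factors.append(n)
    return factors

assert factor(60) == [2, 2, 3, 5]   # 60 = 2^2 * 3 * 5
assert factor(97) == [97]           # 97 is prime
```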
The factor ring

The definition of ideals is such that "dividing" $I$ "out" gives another ring, the ''factor ring'' $R/I$: it is the set of cosets of $I$ together with the operations $(a+I)+(b+I)=(a+b)+I$ and $(a+I)(b+I)=ab+I$. For example, the ring $\mathbb{Z}/n\mathbb{Z}$ (also denoted $\mathbb{Z}_n$), where $n$ is an integer, is the ring of integers modulo $n$. It is the basis of modular arithmetic. An ideal is ''proper'' if it is strictly smaller than the whole ring. An ideal that is not strictly contained in any proper ideal is called maximal. An ideal $m$ is maximal if and only if $R/m$ is a field. Except for the zero ring, any ring (with identity) possesses at least one maximal ideal; this follows from Zorn's lemma.

Noetherian rings

A ring is called ''Noetherian'' (in honor of Emmy Noether, who developed this concept) if every ascending chain of ideals $0 \subseteq I_0 \subseteq I_1 \subseteq \dots \subseteq I_n \subseteq I_{n+1} \subseteq \dots$ becomes stationary, i.e. becomes constant beyond some index $n$. Equivalently, any ideal is generated by finitely many elements, or, yet equivalently, submodules of finitely generated modules are finitely generated. Being Noetherian is a highly important finiteness condition, and the condition is preserved under many operations that occur frequently in geometry. For example, if $R$ is Noetherian, then so is the polynomial ring $R[X_1,X_2,\dots,X_n]$ (by Hilbert's basis theorem), any localization $S^{-1}R$, and also any factor ring $R/I$. Any non-Noetherian ring $R$ is the union of its Noetherian subrings. This fact, known as Noetherian approximation, allows the extension of certain theorems to non-Noetherian rings.
Artinian rings

A ring is called Artinian (after Emil Artin) if every descending chain of ideals $R \supseteq I_0 \supseteq I_1 \supseteq \dots \supseteq I_n \supseteq I_{n+1} \supseteq \dots$ becomes stationary eventually. Despite the two conditions appearing symmetric, Noetherian rings are much more general than Artinian rings. For example, $\mathbb{Z}$ is Noetherian, since every ideal can be generated by one element, but is not Artinian, as the chain $\mathbb{Z} \supsetneq 2\mathbb{Z} \supsetneq 4\mathbb{Z} \supsetneq 8\mathbb{Z} \supsetneq \dots$ shows. In fact, by the Hopkins–Levitzki theorem, every Artinian ring is Noetherian. More precisely, Artinian rings can be characterized as the Noetherian rings whose Krull dimension is zero.

The spectrum of a commutative ring

Prime ideals

As was mentioned above, $\mathbb{Z}$ is a unique factorization domain. This is not true for more general rings, as algebraists realized in the 19th century. For example, in $\mathbb{Z}[\sqrt{-5}]$ there are two genuinely distinct ways of writing 6 as a product: $6 = 2 \cdot 3 = (1 + \sqrt{-5})(1 - \sqrt{-5})$. Prime ideals, as opposed to prime elements, provide a way to circumvent this problem. A prime ideal is a proper (i.e., strictly contained in $R$) ideal $p$ such that, whenever the product $ab$ of any two ring elements $a$ and $b$ is in $p$, at least one of the two elements is already in $p$. (The opposite conclusion holds for any ideal, by definition.) Thus, if a prime ideal is principal, it is equivalently generated by a prime element. However, in rings such as $\mathbb{Z}[\sqrt{-5}]$, prime ideals need not be principal. This limits the usage of prime elements in ring theory.
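The failure of unique factorization in $\mathbb{Z}[\sqrt{-5}]$ can be checked by direct computation, representing $a + b\sqrt{-5}$ as the pair $(a, b)$ (an illustrative sketch added here, not from the article):

```python
# Elements of Z[sqrt(-5)] as pairs (a, b) standing for a + b*sqrt(-5).
# Multiplication uses (sqrt(-5))^2 = -5.
def mul(x, y):
    a, b = x
    c, d = y
    return (a * c - 5 * b * d, a * d + b * c)

def norm(x):
    # N(a + b*sqrt(-5)) = a^2 + 5*b^2; the norm is multiplicative.
    a, b = x
    return a * a + 5 * b * b

# Two genuinely distinct factorizations of 6:
assert mul((2, 0), (3, 0)) == (6, 0)    # 6 = 2 * 3
assert mul((1, 1), (1, -1)) == (6, 0)   # 6 = (1 + sqrt(-5))(1 - sqrt(-5))

# Norms 4, 9, 6, 6: since a^2 + 5*b^2 = 2 or 3 has no integer solution,
# no element has norm 2 or 3, so all four factors are irreducible.
assert [norm(x) for x in [(2, 0), (3, 0), (1, 1), (1, -1)]] == [4, 9, 6, 6]
```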
A cornerstone of algebraic number theory is, however, the fact that in any Dedekind ring (which includes $\mathbb{Z}[\sqrt{-5}]$ and more generally the ring of integers in a number field) any ideal (such as the one generated by 6) decomposes uniquely as a product of prime ideals. Any maximal ideal is a prime ideal or, more briefly, is prime. Moreover, an ideal $I$ is prime if and only if the factor ring $R/I$ is an integral domain. Proving that an ideal is prime, or equivalently that a ring has no zero-divisors, can be very difficult. Yet another way of expressing the same is to say that the complement $R \setminus p$ is multiplicatively closed. The localisation $(R \setminus p)^{-1}R$ is important enough to have its own notation: $R_p$. This ring has only one maximal ideal, namely $pR_p$. Such rings are called local.

The spectrum

The ''spectrum of a ring $R$'' (this notion can be related to the spectrum of a linear operator; see the spectrum of a C*-algebra and the Gelfand representation), denoted by $\operatorname{Spec} R$, is the set of all prime ideals of $R$. It is equipped with a topology, the Zariski topology, which reflects the algebraic properties of $R$: a basis of open subsets is given by $D(f) = \{p \in \operatorname{Spec} R \mid f \notin p\}$, where $f$ is any ring element. Interpreting $f$ as a function that takes the value $f$ mod $p$ (i.e., the image of $f$ in the residue field $R/p$), this subset is the locus where $f$ is non-zero. The spectrum also makes precise the intuition that localisation and factor rings are complementary: the natural maps $R \to R_f$ and $R \to R/fR$ correspond, after endowing the spectra of the rings in question with their Zariski topology, to complementary open and closed immersions respectively.
Even for basic rings, such as $R = \mathbb{Z}$, the Zariski topology is quite different from the one on the set of real numbers. The spectrum contains the set of maximal ideals, which is occasionally denoted mSpec($R$). For an algebraically closed field $k$, mSpec($k[T_1, \dots, T_n]/(f_1, \dots, f_m)$) is in bijection with the set $\{(x_1,\dots,x_n) \in k^n \mid f_1(x_1,\dots,x_n) = \dots = f_m(x_1,\dots,x_n) = 0\}$. Thus, maximal ideals reflect the geometric properties of solution sets of polynomials, which is an initial motivation for the study of commutative rings. However, the consideration of non-maximal ideals as part of the geometric properties of a ring is useful for several reasons. For example, the minimal prime ideals (i.e., the ones not strictly containing smaller ones) correspond to the irreducible components of Spec $R$. For a Noetherian ring $R$, Spec $R$ has only finitely many irreducible components. This is a geometric restatement of primary decomposition, according to which any ideal can be decomposed as a product of finitely many primary ideals. This fact is the ultimate generalization of the decomposition into prime ideals in Dedekind rings.

Affine schemes

The notion of a spectrum is the common basis of commutative algebra and algebraic geometry. Algebraic geometry proceeds by endowing Spec $R$ with a sheaf $\mathcal O$ (an entity that collects functions defined locally, i.e. on varying open subsets). The datum of the space and the sheaf is called an affine scheme. Given an affine scheme, the underlying ring $R$ can be recovered as the global sections of $\mathcal O$. Moreover, this one-to-one correspondence between rings and affine schemes is also compatible with ring homomorphisms: any $f : R \to S$ gives rise to a continuous map in the opposite direction, $\operatorname{Spec} S \to \operatorname{Spec} R$. The resulting equivalence of the two said categories aptly reflects algebraic properties of rings in a geometrical manner.
Similar to the fact that manifolds are locally given by open subsets of $\mathbb{R}^n$, affine schemes are local models for schemes, which are the object of study in algebraic geometry. Therefore, several notions concerning commutative rings stem from geometric intuition.

Dimension

The ''Krull dimension'' (or dimension) dim $R$ of a ring $R$ measures the "size" of a ring by, roughly speaking, counting independent elements in $R$. The dimension of algebras over a field $k$ can be axiomatized by four properties:
* The dimension is a local property: $\dim R = \sup_{p \in \operatorname{Spec} R} \dim R_p$.
* The dimension is independent of nilpotent elements: if $I \subseteq R$ is nilpotent then $\dim R = \dim R/I$.
* The dimension remains constant under a finite extension: if $S$ is an $R$-algebra which is finitely generated as an $R$-module, then $\dim S = \dim R$.
* The dimension is calibrated by $\dim k[X_1, \dots, X_n] = n$. This axiom is motivated by regarding the polynomial ring in $n$ variables as an algebraic analogue of $n$-dimensional affine space.

The dimension is defined, for any ring $R$, as the supremum of lengths $n$ of chains of prime ideals $p_0 \subsetneq p_1 \subsetneq \dots \subsetneq p_n$. For example, a field is zero-dimensional, since the only prime ideal is the zero ideal. The integers are one-dimensional, since chains are of the form $(0) \subsetneq (p)$, where $p$ is a prime number. For non-Noetherian rings, and also non-local rings, the dimension may be infinite, but Noetherian local rings have finite dimension. Among the four axioms above, the first two are elementary consequences of the definition, whereas the remaining two hinge on important facts in commutative algebra, the going-up theorem and Krull's principal ideal theorem.

Ring homomorphisms

A ''ring homomorphism'' or, more colloquially, simply a ''map'', is a map $f : R \to S$ such that $f(a+b) = f(a)+f(b)$, $f(ab) = f(a)f(b)$ and $f(1) = 1$. These conditions ensure $f(0) = 0$.
Similarly as for other algebraic structures, a ring homomorphism is thus a map that is compatible with the structure of the algebraic objects in question. In such a situation $S$ is also called an $R$-algebra, by understanding that $s$ in $S$ may be multiplied by some $r$ of $R$, by setting $r \cdot s := f(r) \cdot s$. The ''kernel'' and ''image'' of $f$ are defined by $\ker(f) = \{r \in R \mid f(r) = 0\}$ and $\operatorname{im}(f) = f(R) = \{f(r) \mid r \in R\}$. The kernel is an ideal of $R$, and the image is a subring of $S$. A ring homomorphism is called an isomorphism if it is bijective. An example of a ring isomorphism, known as the Chinese remainder theorem, is $\mathbf{Z}/n = \bigoplus_{i=1}^k \mathbf{Z}/p_i$ where $n = p_1 p_2 \dots p_k$ is a product of pairwise distinct prime numbers. Commutative rings, together with ring homomorphisms, form a category. The ring $\mathbf{Z}$ is the initial object in this category, which means that for any commutative ring $R$, there is a unique ring homomorphism $\mathbf{Z} \to R$. By means of this map, an integer $n$ can be regarded as an element of $R$. For example, the binomial formula $(a+b)^n = \sum_{k=0}^n \binom{n}{k} a^k b^{n-k}$, which is valid for any two elements $a$ and $b$ in any commutative ring $R$, is understood in this sense by interpreting the binomial coefficients as elements of $R$ using this map. Given two $R$-algebras $S$ and $T$, their tensor product $S \otimes_R T$ is again a commutative $R$-algebra. In some cases, the tensor product can serve to find a $T$-algebra which relates to $Z$ as $S$ relates to $R$.

Finite generation

An $R$-algebra $S$ is called finitely generated (as an algebra) if there are finitely many elements $s_1, \dots, s_n$ such that any element of $S$ is expressible as a polynomial in the $s_i$.
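The Chinese remainder theorem isomorphism can be checked exhaustively for small moduli (an illustrative sketch added here, not from the article): the map $r \mapsto (r \bmod 2, r \bmod 3, r \bmod 5)$ is a bijective ring homomorphism $\mathbf{Z}/30 \to \mathbf{Z}/2 \times \mathbf{Z}/3 \times \mathbf{Z}/5$.

```python
# Verify that r -> (r mod 2, r mod 3, r mod 5) is a bijective ring
# homomorphism from Z/30 to Z/2 x Z/3 x Z/5, i.e. the Chinese remainder
# theorem for n = 30 = 2 * 3 * 5, a product of pairwise distinct primes.
primes = (2, 3, 5)
n = 30

phi = lambda r: tuple(r % p for p in primes)

# Bijectivity: 30 distinct images among the 2*3*5 = 30 possible tuples.
assert len({phi(r) for r in range(n)}) == n

# Homomorphism: phi respects + and * (taken componentwise on the right).
for a in range(n):
    for b in range(n):
        assert phi((a + b) % n) == tuple((x + y) % p
                                         for x, y, p in zip(phi(a), phi(b), primes))
        assert phi((a * b) % n) == tuple((x * y) % p
                                         for x, y, p in zip(phi(a), phi(b), primes))
print("Z/30 is isomorphic to Z/2 x Z/3 x Z/5")
```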
Equivalently, $S$ is isomorphic to a quotient of a polynomial ring $R[T_1, \dots, T_n]$. A much stronger condition is that $S$ is finitely generated as an $R$-module, which means that any $s$ can be expressed as an $R$-linear combination of some finite set $s_1, \dots, s_n$.

Local rings

A ring is called local if it has only a single maximal ideal, denoted by $m$. For any (not necessarily local) ring $R$, the localization $R_p$ at a prime ideal $p$ is local. This localization reflects the geometric properties of Spec $R$ "around $p$". Several notions and problems in commutative algebra can be reduced to the case when $R$ is local, making local rings a particularly deeply studied class of rings. The residue field of $R$ is defined as $k = R/m$. Any $R$-module $M$ yields a $k$-vector space given by $M/mM$. Nakayama's lemma shows this passage preserves important information: a finitely generated module $M$ is zero if and only if $M/mM$ is zero.

Regular local rings

The $k$-vector space $m/m^2$ is an algebraic incarnation of the cotangent space. Informally, the elements of $m$ can be thought of as functions which vanish at the point $p$, whereas $m^2$ contains the ones which vanish with order at least 2. For any Noetherian local ring $R$, the inequality $\dim_k m/m^2 \ge \dim R$ holds true, reflecting the idea that the cotangent (or equivalently the tangent) space has at least the dimension of the space Spec $R$. If equality holds true in this estimate, $R$ is called a regular local ring. A Noetherian local ring is regular if and only if the ring (which is the ring of functions on the tangent cone) $\bigoplus_{n \ge 0} m^n/m^{n+1}$ is isomorphic to a polynomial ring over $k$. Broadly speaking, regular local rings are somewhat similar to polynomial rings. Regular local rings are UFDs.

Discrete valuation rings are equipped with a function which assigns an integer to any element $r$.
This number, called the valuation of $r$, can be informally thought of as a zero or pole order of $r$. Discrete valuation rings are precisely the one-dimensional regular local rings. For example, the ring of germs of holomorphic functions on a Riemann surface is a discrete valuation ring.

Complete intersections

By Krull's principal ideal theorem, a foundational result in the dimension theory of rings, the dimension of $R = k[T_1, \dots, T_r]/(f_1, \dots, f_n)$ is at least $r - n$. A ring $R$ is called a complete intersection ring if it can be presented in a way that attains this minimal bound. This notion is also mostly studied for local rings. Any regular local ring is a complete intersection ring, but not conversely. A ring $R$ is a ''set-theoretic'' complete intersection if the reduced ring associated to $R$, i.e., the one obtained by dividing out all nilpotent elements, is a complete intersection. As of 2017, it is in general unknown whether curves in three-dimensional space are set-theoretic complete intersections.

Cohen–Macaulay rings

The depth of a local ring $R$ is the number of elements in some (or, as can be shown, any) maximal regular sequence, i.e., a sequence $a_1, \dots, a_n \in m$ such that all $a_i$ are non-zero divisors in $R/(a_1, \dots, a_{i-1})$. For any local Noetherian ring, the inequality $\operatorname{depth}(R) \le \dim(R)$ holds. A local ring in which equality takes place is called a Cohen–Macaulay ring. Local complete intersection rings, and a fortiori regular local rings, are Cohen–Macaulay, but not conversely. Cohen–Macaulay rings combine desirable properties of regular rings (such as the property of being universally catenary rings, which means that the (co)dimension of primes is well-behaved), but are also more robust under taking quotients than regular local rings.

Constructing commutative rings

There are several ways to construct new rings out of given ones.
The aim of such constructions is often to improve certain properties of the ring so as to make it more readily understandable. For example, an integral domain that is integrally closed in its field of fractions is called normal. This is a desirable property; for example, any normal one-dimensional ring is necessarily regular. Rendering a ring normal is known as ''normalization''.

Completions

If $I$ is an ideal in a commutative ring $R$, the powers of $I$ form topological neighborhoods of $0$ which allow $R$ to be viewed as a topological ring. This topology is called the $I$-adic topology. $R$ can then be completed with respect to this topology. Formally, the $I$-adic completion is the inverse limit of the rings $R/I^n$. For example, if $k$ is a field, $k[[X]]$, the formal power series ring in one variable over $k$, is the $I$-adic completion of $k[X]$ where $I$ is the principal ideal generated by $X$. This ring serves as an algebraic analogue of the disk. Analogously, the ring of $p$-adic integers is the completion of $\mathbb{Z}$ with respect to the principal ideal $(p)$. Any ring that is isomorphic to its own completion is called complete. Complete local rings satisfy Hensel's lemma, which roughly speaking allows extending solutions (of various problems) over the residue field $k$ to $R$.

Homological notions

Several deeper aspects of commutative rings have been studied using methods from homological algebra; some open questions in this area of active research are collected in the literature.

Projective modules and Ext functors

Projective modules can be defined to be the direct summands of free modules. If $R$ is local, any finitely generated projective module is actually free, which gives content to an analogy between projective modules and vector bundles.
The Quillen–Suslin theorem asserts that any finitely generated projective module over $k[T_1, \dots, T_n]$ ($k$ a field) is free, but in general these two concepts differ. A local Noetherian ring is regular if and only if its global dimension is finite, say $n$, which means that any finitely generated $R$-module has a resolution by projective modules of length at most $n$. The proof of this and other related statements relies on the usage of homological methods, such as the Ext functor. This functor is the derived functor of the functor $\operatorname{Hom}_R(M, -)$. The latter functor is exact if $M$ is projective, but not otherwise: for a surjective map $E \to F$ of $R$-modules, a map $M \to F$ need not extend to a map $M \to E$. The higher Ext functors measure the non-exactness of the Hom-functor. The importance of this standard construction in homological algebra can be seen from the fact that a local Noetherian ring $R$ with residue field $k$ is regular if and only if $\operatorname{Ext}^n_R(k, k)$ vanishes for all large enough $n$. Moreover, the dimensions of these Ext-groups, known as Betti numbers, grow polynomially in $n$ if and only if $R$ is a local complete intersection ring. A key argument in such considerations is the Koszul complex, which provides an explicit free resolution of the residue field $k$ of a local ring $R$ in terms of a regular sequence.

Flatness

The tensor product is another non-exact functor relevant in the context of commutative rings: for a general $R$-module $M$, the functor $M \otimes_R -$ is only right exact. If it is exact, $M$ is called flat. If $R$ is local, any finitely presented flat module is free of finite rank, thus projective. Despite being defined in terms of homological algebra, flatness has profound geometric implications.
For example, if an R-algebra S is flat, the dimensions of the fibers S ⊗_R k(p) (for prime ideals p in R, with residue field k(p)) have the "expected" dimension, namely dim S − dim R + dim(R/p).

Properties

By Wedderburn's little theorem, every finite division ring is commutative, and therefore a finite field. Another condition ensuring commutativity of a ring, due to Jacobson, is the following: for every element r of R there exists an integer n > 1 such that r^n = r. If r^2 = r for every r, the ring is called a Boolean ring. More general conditions which guarantee commutativity of a ring are also known.

Generalizations

Simplicial commutative rings

A simplicial commutative ring is a simplicial object in the category of commutative rings. They are building blocks for (connective) derived algebraic geometry. A closely related but more general notion is that of an E∞-ring.

* Almost ring, a certain generalization of a commutative ring.
* Divisibility (ring theory): nilpotent elements, for example in the dual numbers.
* Ideals and modules: radical of an ideal, Morita equivalence.
* Ring homomorphisms and integral elements: Cayley–Hamilton theorem, integrally closed domains, Krull rings, Krull–Akizuki theorem.
* Primes: prime avoidance lemma, Jacobson radical, nilradical of a ring; the spectrum: compact spaces, connected rings, differential calculus over commutative algebras, Banach–Stone theorem.
* Local rings: Gorenstein rings (duality, Matlis duality, dualizing modules), Popescu's theorem, Artin approximation theorem.
* Applications (commutative rings arising in mathematics): holomorphic functions, algebraic K-theory, topological K-theory, divided power structures, Witt vectors, Hecke algebras, Fontaine's period rings, cluster algebras, convolution algebras (of commutative groups); see also Fréchet algebras.
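As a compact summary of the completion and regularity statements in this section (these are standard formulations, written out here because the extracted text lost the displayed formulas):

```latex
% I-adic completion as an inverse limit, with the power series ring
% as the basic example (I = (X) in k[X]):
\[
  \widehat{R} \;=\; \varprojlim_{n}\; R/I^{n},
  \qquad
  k[[X]] \;\cong\; \varprojlim_{n}\; k[X]/(X^{n}).
\]
% Serre's homological characterization of regularity for a local
% Noetherian ring R with residue field k:
\[
  R \text{ is regular}
  \iff
  \operatorname{Ext}^{n}_{R}(k,k) = 0 \ \text{ for all sufficiently large } n.
\]
```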
http://mathoverflow.net/questions/135417/correspondence-between-operads-and-infty-operads-with-one-object
# Correspondence between operads and $\infty$-operads with one object

Given a simplicial operad one can form its category of operators. This is a simplicial category with a functor to the category of finite pointed sets which is a bijection on objects and whose hom-spaces have a particular product decomposition. Assuming the spaces in the operad are fibrant, we can apply the coherent nerve construction to obtain an $\infty$-operad with 'one object' (by this I mean the $\infty$-category associated to the operad has a single equivalence class). This is Proposition 2.1.1.27 of Lurie's Higher Algebra.

Q1: Does every $\infty$-operad with 'one object' come from this construction (up to equivalence)?

EDIT: To clarify, I mean an operad in the classical sense of May and a weak equivalence of such operads (a singly colored simplicial operad; for the multicolored analogue of this question Urs' response says the answer would be yes). So on one side I'm allowing arbitrary weak equivalences of $\infty$-operads, and on the other I'm only considering weak equivalences between simplicial singly-colored operads.

Let $\mathcal{C}_\mathcal{O}$ be the category of operators associated to an operad $\mathcal{O}$. We know the unit of the adjunction $\mathfrak{C}N\mathcal{C}_\mathcal{O}\rightarrow \mathcal{C}_\mathcal{O}$ is a weak equivalence of simplicial categories.

Q2: Is $\mathfrak{C}N\mathcal{C}_\mathcal{O}$ the category of operators associated to another simplicial operad $\mathcal{O}^\prime$?

- The answer to Q1 is indeed yes. The construction you describe (category of operators followed by coherent nerve) gives a functor from the category of fibrant colored simplicial operads to the category of $\infty$-operads (in the sense of Lurie). This functor preserves weak equivalences and induces an equivalence on the level of homotopy categories.
The category of simplicial operads (with one color, or object) is a full subcategory of the category of colored such guys, so (as long as we only care about things up to weak equivalence) we only have to identify the essential image of this subcategory. "Having one object" is a property that you can see on the level of underlying categories. The construction we're talking about is compatible with "taking underlying categories"; i.e. taking the underlying simplicial category of a simplicial operad and then taking the coherent nerve produces the same thing as first running the construction you describe and then taking the fiber over $\langle 1 \rangle$. The statement therefore reduces to "every $\infty$-category with one object is (up to equivalence) the coherent nerve of a fibrant simplicial category with one object", which is true. The answer to Q2 is no, at least if you're really asking it "up to isomorphism". For example, consider the space of morphisms lying over the inert morphism $\langle 3 \rangle \rightarrow \langle 1 \rangle$ which sends 2 and 3 to the basepoint and 1 to 1. In the category of operators of a simplicial operad, this space is isomorphic to the space of morphisms lying over the identity $\langle 1 \rangle \rightarrow \langle 1 \rangle$. This need not be the case in a category of the form $\mathfrak{C}N\mathcal{C}_{\mathcal{O}}$. Already the trivial operad gives a counterexample. In this case, the first space I mentioned has non-degenerate 1-simplices, corresponding to factorizations $\langle 3 \rangle \rightarrow \langle 2 \rangle \rightarrow \langle 1 \rangle$ of the given morphism into two inerts, whereas the space of morphisms lying over $\langle 1 \rangle \rightarrow \langle 1 \rangle$ is just a 0-simplex. - Welcome to MathOverflow! –  David White Jul 1 '13 at 17:24 Yes welcome Gijs! Thanks for the answer to Q2, I do mean up to isomorphism. 
Regarding Q1, I just want to be clear: If I take an $\infty$-operad with a single equivalence class of objects, is there some construction which will take that $\infty$-operad and produce a simplicial (singly-colored) operad (not something weakly equivalent to a (singly-colored) operad)? Of course, this operad should have the property that if I apply the above construction I obtain an $\infty$-operad weakly equivalent to the one I started with. –  Justin Noel Jul 1 '13 at 18:39 If I understand this right, you can take the multicolored operad coming out of your equivalence and take a sub singly colored operad which is weakly equivalent to it. This would give the desired simplicial operad. –  Justin Noel Jul 2 '13 at 6:57 re 1: Yes. Recently the equivalence between Jacob Lurie's model for infinity-operads via "$\infty$-categories of operators" and Ieke Moerdijk's model (together with D.-C. Cisinski, based on work of Weiss) in terms of dendroidal sets was established, in • Gijs Heuts, Vladimir Hinich, Ieke Moerdijk, The equivalence between Lurie's model and the dendroidal model for infinity-operads (arXiv:1305.3658). Via the previously established equivalences of the model structure on dendroidal sets with various other models for homotopy operads, notably its equivalence to the model structure on simplicial operads, this now also shows that Jacob Lurie's definition is equivalent to all these. - Hi Urs, thank you for the answer, but since the category of simplicial operads you are talking about is necessarily bigger than the classical category of simplicial operads (the one you mention models simplicial multicategories I believe, while the one I'm talking about is still the one object version), I think this just pushes the same question into a different framework. Namely these adjunctions will give me a simplicial multicategory corresponding to an $\infty$-operad. Does that multicategory have one object, i.e., is it an actual simplicial operad?
–  Justin Noel Jul 1 '13 at 11:42 Up to equivalence, yes. As Gijs now said above (mathoverflow.net/a/135441/381, and he must know :) it is straightforward to see that these various Quillen equivalences respect the maps to the underlying infinity-categories and infinity-groupoids of the infinity-operads (which are the collections of colors and equivalences between them). –  Urs Schreiber Jul 1 '13 at 18:39 Right Urs. The question involves a subtlety about what kind of weak equivalences I am allowing and I think I have been vague (mostly due to conflicting nomenclature for operads). I'm allowing arbitrary equivalences of $\infty$-operads, but only weak equivalences between (singly-colored) simplicial operads. The point being if $\infty$-operads is indeed a framework where one can study classical (singly-colored) operads then I should be able to take my operad, turn it into an $\infty$-operad, and obtain a weakly equivalent (singly-colored) operad. –  Justin Noel Jul 1 '13 at 18:47 You and Gijs have definitely convinced me that things work out if I allow the essential image of singly-colored operads in the multi-colored operads. I would like to know if this fattening is necessary (since many existing questions concern the smaller category only). –  Justin Noel Jul 1 '13 at 18:48
https://www.hpmuseum.org/forum/thread-8287-post-72850.html
About calculator benchmark (8 queens) and fast devices. MS challenge #2 05-02-2017, 05:18 PM (This post was last modified: 05-02-2017 08:04 PM by pier4r.) Post: #1 pier4r Senior Member Posts: 2,016 Joined: Nov 2014

While reading the general forum, I ended up rereading the first page of the HRAST BASIC thread, pointing to the calculator benchmark. (HRAST BASIC produces quite neat results, in line with the sysRPL versions.) I thought that the benchmark had not been maintained for a long time; instead it has many new results! I wonder how the maintainer tracks the new results (Xerxes seems registered here, just not so active). Kudos to him.

That said, I believe that executions that get under 1 second are increasingly limited by the overhead of just 'starting' the computation. Therefore I would suggest expanding the benchmark for faster devices, for example n-queens with n=9, to see how much the real computation takes on the "fast" device. For example, in a recent topic that I found here, a user reports Free42 results that I believe are limited by the need to start the program (same for the HP Prime application on similar Android devices).

I remember I had this idea already, and I checked the benchmarks that I added on the wiki4hp in 2013 (digression1). In particular the middle square benchmark (which is not a benchmark but rather a challenge; more on it later) that I proposed after reading the article on the middle-square method used by von Neumann. Here is the idea:

Code:
k := a positive integer
n := 100^k
For i := (n/10) to (n-1) do {
  old := i
  sequenceIsConvergent := false
  limitCyclicSequences := 0 // because we can end in cyclic sequences and we don't want to use a lot of memory!
  while ( (NOT sequenceIsConvergent) AND (limitCyclicSequences < n) ) {
    new := extract_middle_number from old^2 (see below)
    if new == old {
      sequenceIsConvergent := true
    }
    else {
      old := new
      limitCyclicSequences := limitCyclicSequences + 1
    }
  } // end while
} // end for
-> Result: the time spent to finish the computation given the starting k value.

How does extract_middle_digits work? Given n, a power of 10 of the form 10^d, its square equals 10^(2d) and has (2d+1) digits. We consider only the last 2d digits. That is: if n = 10000, then 10000*10000 = 100000000, and we consider only 00000000, without the leading 1. Then we consider all numbers lower than 10^(2d) as having 2d digits. For example, if d = 4 and the number under process is 1542, we consider 1542^2 as 02377764 instead of 2377764. Why d = 4? Because with n = 10000 we use as seeds the numbers from 1000 to 9999, so numbers having 4 digits. After that we pick the d = 4 middle digits: from 02377764 we pick 02[3777]64. So the middle number extracted is 3777.

This type of benchmark is pretty scalable by changing k, and should also expose the relative timings between a fast device/implementation and a slower one as k changes. Anyway, a user pointed out in my previous topic on the old forum that a benchmark is useful as a comparison only if similar sets of instructions are used between calculators. The middle-square procedure, instead, can take advantage of instructions that some calculators do not have (like IP and FP). Furthermore, one is free to derive one's own method to extract the middle digits, rather than following a fixed procedure. Therefore, more than a benchmark, it is a challenge, where every calculator (and programming language) is welcome. Since it is a challenge, feel free to find the most efficient method to compute the middle squares within the given constraints (k, n, i in the pseudocode above; the rest is optional).
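For reference, here is a sketch of the challenge in Python, transcribed from the pseudocode above (the function names `extract_middle` and `middle_square_benchmark` are mine, not part of the original post; I count iterations instead of measuring wall-clock time):

```python
def extract_middle(old: int, d: int) -> int:
    """Square `old`, view the square as a (2*d)-digit number (leading zeros
    kept, any overflow digit beyond position 2*d dropped), and return the
    middle d digits. d is even here (d = 2*k), so the split is symmetric."""
    sq = (old * old) % (10 ** (2 * d))       # keep only the last 2*d digits
    return (sq // (10 ** (d // 2))) % (10 ** d)

def middle_square_benchmark(k: int) -> int:
    """Run the middle-square iteration for every seed, as in the pseudocode.
    Returns the total number of iterations performed, which is a
    device-independent measure of the amount of work."""
    n = 100 ** k
    d = 2 * k                                # number of digits of each seed
    total = 0
    for i in range(n // 10, n):              # seeds n/10 .. n-1
        old = i
        limit = 0
        while limit < n:                     # bail out of cyclic sequences
            new = extract_middle(old, d)
            if new == old:                   # converged to a fixed point
                break
            old = new
            limit += 1
        total += limit
    return total

# The worked example from the post: for d = 4, 1542^2 is read as 02377764
# and the middle four digits are 3777.
print(extract_middle(1542, 4))  # → 3777
```

This keeps the same bail-out bound (limitCyclicSequences < n) as the pseudocode, so cyclic sequences cannot loop forever.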
According to the results previously collected, with 100^1:
- 50g, ARM ASM / Saturn ASM: under 1 second (same problem as with the 8 queens benchmark)
- 50g, sysRPL: around 1 second
- 50g, userRPL: around 10 seconds

With 100^2:
- 50g, ARM ASM: 169 sec
- 50g, Saturn ASM: 1445 sec (with k=2 one can appreciate the difference between Saturn ASM and ARM ASM a bit more)
- 50g, userRPL (not optimized): estimated over 250000 seconds

I don't think that calculators that handle the 8 queens benchmark in more than 10 minutes can finish the challenge in little time ('little' according to your patience), but maybe others like the 48 series or the HP palmtops can do it. What about newRPL? (I can attempt this on the PC version, but that does not count.) Anyone willing to do it with HRAST BASIC? (Any other stable language for the 50g? HP Lua AFAIK is not so stable.) 42S? 48 variants? HP Prime? Casio/TI? (I will ask in the respective communities if I collect some more results here.) Is anyone willing to give it a shot? I will try to implement it on the TI-89, Nspire and, why not, Free42, just as a reference (also to learn a bit of RPN programming, although it looks like assembly).

digression1: there are many more to add; a simple search like site:hpmuseum.org benchmark returns hundreds of results to check, with at least tens of different benchmarks.

PS: The challenge itself could even be translated into a list processing problem, now that I think about it, since one can work on the single digits. Although one would have to implement the operations on digits.

Wikis are great, Contribute :) 05-02-2017, 07:15 PM Post: #2 toml_12953 Senior Member Posts: 1,221 Joined: Dec 2013 RE: About calculator benchmark (8 queens) and fast devices. MS challenge #2 (05-02-2017 05:18 PM)pier4r Wrote:  How does extract_middle_digits work? Given n, that is a power of 10 of the type 10^d, its square is equal to 10^(d*2) and it has (d*2+1) digits. We consider only the last (d*2) digits.
That is: if n=10000, 10000*10000 = 100000000 we consider only 00000000 without the first 1. Then we 'consider' all the numbers lower than 10^(d*2) with (d*2) digits. For example if d=4 and the number under process is 1542, we consider 1542^2 as 02377764 instead of 2377764. After that we pick the d=4 middle digits, from 02377764 we pick 02[3777]64. So the middle number extracted is 3777. Sorry to be dense but I don't get it. Is d = log(n)? If so then log(1542) is 3.188... How do you get d=4? Tom L Tom L I think therefore I am-Descartes I think therefore you are-Gorgias You're not here to think-Army Sergeant 05-02-2017, 07:36 PM Post: #3 Claudio L. Senior Member Posts: 1,649 Joined: Dec 2013 RE: About calculator benchmark (8 queens) and fast devices. MS challenge #2 (05-02-2017 05:18 PM)pier4r Wrote:  What about the newRPL? (I can attempt this on the PC version, but that does not count) You have an RPL solution already, just copy/paste to an SD card with proper Unicode characters and run it on newRPL. It's about time you give it a shot on the real hardware, USB support will not be ready any time soon. 05-02-2017, 08:00 PM (This post was last modified: 05-02-2017 08:03 PM by pier4r.) Post: #4 pier4r Senior Member Posts: 2,016 Joined: Nov 2014 RE: About calculator benchmark (8 queens) and fast devices. MS challenge #2 (05-02-2017 07:15 PM)toml_12953 Wrote: (05-02-2017 05:18 PM)pier4r Wrote:  How does extract_middle_digits work? Given n, that is a power of 10 of the type 10^d, its square is equal to 10^(d*2) and it has (d*2+1) digits. We consider only the last (d*2) digits. That is: if n=10000, 10000*10000 = 100000000 we consider only 00000000 without the first 1. Then we 'consider' all the numbers lower than 10^(d*2) with (d*2) digits. For example if d=4 and the number under process is 1542, we consider 1542^2 as 02377764 instead of 2377764. After that we pick the d=4 middle digits, from 02377764 we pick 02[3777]64. So the middle number extracted is 3777. 
Sorry to be dense but I don't get it. Is d = log(n)? If so then log(1542) is 3.188... How do you get d=4? Tom L

Nothing about being dense; it is just that my explanation was not clear. In the example I'm using 1542, therefore the numbers used as seeds are from 1000 to 9999. So we need seeds with 4 digits (or considered as having 4 digits, in the case of leading zeroes). You can see it also from n: n is 100^2, or 10000, in the example; without considering the leading 1, those are 4 digits.

@Claudio: I gave it a shot on the real hw, but moving back and forth from the 2.15 hw (which is more comfortable to use with frequent changes) is not really feasible for me, because the alternative is to frequently use the SD card. For that I need a second 50g. I will check ebay until I get one for a low price. Anyway thanks for the info, at least now I know.

Wikis are great, Contribute :) 05-02-2017, 08:02 PM Post: #5 Claudio L. Senior Member Posts: 1,649 Joined: Dec 2013 RE: About calculator benchmark (8 queens) and fast devices. MS challenge #2

(05-02-2017 08:00 PM)pier4r Wrote:  @Claudio: I gave it a shot on the real hw, but moving back and forth from the 2.15 hw (which is more comfortable to use with frequent changes) is not really feasible for me, because the alternative is to frequently use the SD card. For that I need a second 50g. I will check ebay until I get one for a low price. Anyway thanks for the info, at least now I know.

No worries, I actually looked at a possible USB library just for you, but there isn't much for this chipset. I did find one for an AVR but porting effort will be quite significant.

05-02-2017, 08:07 PM (This post was last modified: 05-02-2017 08:07 PM by pier4r.) Post: #6 pier4r Senior Member Posts: 2,016 Joined: Nov 2014 RE: About calculator benchmark (8 queens) and fast devices. MS challenge #2 (05-02-2017 08:02 PM)Claudio L. Wrote:  No worries, I actually looked at a possible USB library just for you, but there isn't much for this chipset.
I did find one for an AVR but porting effort will be quite significant.

I'm honored, but I would say "first things first": even if newRPL were highly useful "only" on the PC, that would already be great (I assume that the PC version, like the 50g version, is compiled for the right target architecture, so it is quite efficient). So do not bother with my requests. I can wait, and having an excuse for a 2nd 50g is never bad.

Wikis are great, Contribute :) 05-03-2017, 02:20 PM (This post was last modified: 05-03-2017 04:15 PM by xerxes.) Post: #7 xerxes Member Posts: 99 Joined: Jun 2014 RE: About calculator benchmark (8 queens) and fast devices. MS challenge #2

(05-02-2017 05:18 PM)pier4r Wrote:  Said that, I believe that executions that are getting under 1 seconds are increasingly limited by the overhead of just 'starting' the computation. Therefore I would suggest to expand the benchmark for faster devices, for example nqueens with n=9, to see how much the real computation takes on the "fast" device.

I noticed the issue of accurate timing for very fast results already at the beginning of making the benchmark, especially when starting with the assembly languages. The solution was to use an outer loop, which allows much more accurate timing and makes the overhead insignificant. I've used 100000 iterations for the SH-3 assembly version, for example, which should be accurate enough, I guess. I've also tested larger chess boards to verify the same speed factor. Thanks for pointing this out.

Calculator Benchmark 05-03-2017, 04:31 PM Post: #8 pier4r Senior Member Posts: 2,016 Joined: Nov 2014 RE: About calculator benchmark (8 queens) and fast devices. MS challenge #2

Yes, an idea is to solve the problem multiple times in a scalable way, or to enlarge the problem. Actually, just repeating the same problem multiple times is the most direct way of comparison. Like: "this device solved the problem once in 20 seconds; this other device solved the problem 1350 times in 20 seconds".
I did not think of it, quite straightforward and effective. Wikis are great, Contribute :) 05-03-2017, 09:18 PM (This post was last modified: 05-04-2017 03:27 PM by Helix.) Post: #9 Helix Member Posts: 192 Joined: Dec 2013 RE: About calculator benchmark (8 queens) and fast devices. MS challenge #2 (05-02-2017 05:18 PM)pier4r Wrote:  I thought that the benchmark, here, was not maintained since long time. Instead it has many new results! I wonder how the maintainer tracks the new results (Xerses seems registered here, just not so active). Kudos to him. I agree that Xerxes has made a nice work in maintaining this page, and furthermore his presentation is very clean and simple to interpret at first glance. (05-02-2017 05:18 PM)pier4r Wrote:  Said that, I believe that executions that are getting under 1 seconds are increasingly limited by the overhead of just 'starting' the computation. Therefore I would suggest to expand the benchmark for faster devices, for example nqueens with n=9, to see how much the real computation takes on the "fast" device. The main limitation of this benchmark in my opinion is the use of integers exclusively, which greatly favors some languages. As soon as there are some calculations with reals, this benchmark is grossly misleading. 
For example, if I take the HP 50G with User RPL as a reference, then this benchmark gives the following speed increases:
HP Prime: 65x
HP 200LX with Turbo Pascal: 213x
HP 50G with newRPL: 219x

Now, if I consider this very simple Calculator Performance Index, which uses calculations on reals, the results are dramatically different:
HP 200LX with Turbo Pascal: 5.5x
HP 50G with newRPL (12 digits): 17.5x
HP Prime: 48x

And finally, the Savage benchmark, which relies heavily on transcendental functions, gives the following results:
HP 200LX with Turbo Pascal: 1.5x
HP 50G with newRPL (12 digits): 4.3x
HP Prime: 114x

I like the "Calculator Performance Index" a lot; it is probably rather representative of usual scientific programs. Its table of benchmarks is not as complete as the Xerxes table, but it is instructive. Jean-Charles

05-03-2017, 09:46 PM Post: #10 pier4r Senior Member Posts: 2,016 Joined: Nov 2014 RE: About calculator benchmark (8 queens) and fast devices. MS challenge #2

Nice input. I saw such benchmarks with floats also in the old forum; we can integrate them on the wiki4hp and expand them. Wikis are great, Contribute :)

05-04-2017, 01:46 PM Post: #11 xerxes Member Posts: 99 Joined: Jun 2014 RE: About calculator benchmark (8 queens) and fast devices. MS challenge #2

(05-03-2017 09:18 PM)Helix Wrote:  The main limitation of this benchmark in my opinion is the use of integers exclusively, which greatly favors some languages. As soon as there are some calculations with reals, this benchmark is grossly misleading.

I see nothing misleading here, because it's a question of correct interpretation of the test and the results. The intention of choosing n-queens was to have an integer benchmark with array access, to have a fairly realistic comparison for this type of programming problem, and not to test the floating point functions, which others did before.
IMHO transcendental functions are not well suited for testing the efficiency of a programming language. It's true, that n-queens strongly favours languages with integer support, but that can be said for all types of integer only problems. Your examples show clearly that there cannot be one overall benchmark. Calculator Benchmark 05-04-2017, 02:07 PM Post: #12 toml_12953 Senior Member Posts: 1,221 Joined: Dec 2013 RE: About calculator benchmark (8 queens) and fast devices. MS challenge #2 (05-04-2017 01:46 PM)xerxes Wrote: (05-03-2017 09:18 PM)Helix Wrote:  The main limitation of this benchmark in my opinion is the use of integers exclusively, which greatly favors some languages. As soon as there are some calculations with reals, this benchmark is grossly misleading. Your examples show clearly that there cannot be one overall benchmark. Exactly. In order for a benchmark to be meaningful, it has to reflect the type of work you typically do. For surveyors, pilots and some others transcendental functions would be of great importance. For gamers, integer functions might be more relevant. It's important to find benchmarks that test your own real-world scenarios. Tom L Tom L I think therefore I am-Descartes I think therefore you are-Gorgias You're not here to think-Army Sergeant 05-04-2017, 02:37 PM Post: #13 Helix Member Posts: 192 Joined: Dec 2013 RE: About calculator benchmark (8 queens) and fast devices. MS challenge #2 (05-04-2017 01:46 PM)xerxes Wrote:  I see nothing misleading here, because it's a question of correct interpretation of the test and the results. My wording was perhaps inadequate, but I agree with your explanations. For those interested in floating point operations, the 8 queens benchmark is not well suited. But I'm not aware of many complete calculator benchmarks which use floating point functions. 
Perhaps this one : http://www.wiki4hp.com/doku.php?id=benchmarks:savage Jean-Charles 05-05-2017, 12:30 PM (This post was last modified: 05-09-2017 03:41 AM by HrastProgrammer.) Post: #14 HrastProgrammer Member Posts: 144 Joined: Dec 2013 RE: About calculator benchmark (8 queens) and fast devices. MS challenge #2 (05-02-2017 05:18 PM)pier4r Wrote:  Anyone willing to do it with HRAST BASIC? This is the appropriate HRAST BASIC program based on the direct conversion of the above pseudocode for K=1: Code: WATCH INTEGER K=1,N=100^K,Q=SQR N FOR I=N/10 TO N-1 O=I FOR L=N R=MOD(^O/Q,N) IF O<>R@ O=R NEXT ELSE LEAVE NEXT ? TICKS I don't have a real calculator with me and I could only test it on Emu48 with "Authentic Calculator Speed" enabled. The execution time is around 4.6s. I will check it on the real HP-50G when I'll be back home. http://www.hrastprogrammer.com/hrastwerk/ http://hrastprogrammer.bandcamp.com/ 05-07-2017, 07:45 AM (This post was last modified: 05-09-2017 03:41 AM by HrastProgrammer.) Post: #15 HrastProgrammer Member Posts: 144 Joined: Dec 2013 RE: About calculator benchmark (8 queens) and fast devices. MS challenge #2 Real calculator execution time for the above HRAST BASIC program: HP-50G ... 1.94s HP-49G ... 4.81s (the time on HP-48GX should be the same) http://www.hrastprogrammer.com/hrastwerk/ http://hrastprogrammer.bandcamp.com/ 05-07-2017, 08:15 AM (This post was last modified: 05-07-2017 08:23 AM by pier4r.) Post: #16 pier4r Senior Member Posts: 2,016 Joined: Nov 2014 RE: About calculator benchmark (8 queens) and fast devices. MS challenge #2 Wikis are great, Contribute :) 05-07-2017, 11:12 PM Post: #17 Vtile Senior Member Posts: 384 Joined: Oct 2015 RE: About calculator benchmark (8 queens) and fast devices. MS challenge #2 There is Pascal compiler floating around for 50g, but it were PC side IIRC. HP-Pascak or similar. 
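For anyone who wants to try the Savage benchmark mentioned earlier in the thread on a PC first, a common formulation is the following Python sketch (this is my own rendering of the usual loop, not code taken from the linked wiki page):

```python
import math

def savage(iterations: int = 2499) -> float:
    """One common form of the Savage floating-point benchmark: each pass
    round-trips through sqrt/log/exp/atan/tan, which should leave `a`
    unchanged except for the +1. With exact arithmetic the result would be
    iterations + 1; the deviation from that value measures the accuracy of
    the transcendental functions, and the runtime measures their speed."""
    a = 1.0
    for _ in range(iterations):
        a = math.tan(math.atan(math.exp(math.log(math.sqrt(a * a))))) + 1.0
    return a
```

On IEEE double precision `savage(2499)` lands very close to 2500; calculators with decimal floating point report slightly different residual errors, which is exactly what the benchmark is meant to expose.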
05-08-2017, 06:30 AM Post: #18 pier4r Senior Member Posts: 2,016 Joined: Nov 2014 RE: About calculator benchmark (8 queens) and fast devices. MS challenge #2

You mean this: http://hppascal.free.fr/pages/home.htm ? It seems a (pretty nice) project that transforms the instructions into assembly for the Saturn CPU, not for ARM. I believe HRAST BASIC does the same. Still impressive. Wikis are great, Contribute :)

05-08-2017, 11:44 AM Post: #19 Massimo Gnerucci Senior Member Posts: 1,801 Joined: Dec 2013 RE: About calculator benchmark (8 queens) and fast devices. MS challenge #2

(05-08-2017 06:30 AM)pier4r Wrote:  You mean this: http://hppascal.free.fr/pages/home.htm ? It seems a (pretty nice) project that transforms the instructions into assembly for the Saturn CPU, not for ARM. I believe HRAST BASIC does the same. Still impressive.

Yes, that's it. Unfortunately development has been frozen for so many years... Greetings, Massimo -+×÷ ↔ left is right and right is wrong

05-09-2017, 04:03 PM Post: #20 pier4r Senior Member Posts: 2,016 Joined: Nov 2014 RE: About calculator benchmark (8 queens) and fast devices. MS challenge #2

(05-07-2017 07:45 AM)HrastProgrammer Wrote:  Real calculator execution time for the above HRAST BASIC program: HP-50G ... 1.94s HP-49G ... 4.81s (the time on HP-48GX should be the same)

Since I cannot send you a pm, remember that the wiki is open. Everyone can contribute. Added the new result and code to the page of the challenge. Wikis are great, Contribute :)
http://inmabb.criba.edu.ar/revuma/revuma.php?p=onlinefirst
Revista de la Unión Matemática Argentina

## Online first articles

Articles are posted here individually soon after proof is returned from authors, before the corresponding journal issue is completed. All articles are in their final form, including issue number and pagination. For recently accepted articles, see Articles in press.

#### Vol. 61, no. 1 (2020)

Local solvability of elliptic equations of even order with Hölder coefficients. María Amelia Muschietti and Federico Tournier.
We consider elliptic equations of order $2m$ with Hölder coefficients. We show local solvability of the Dirichlet problem with $m$ conditions on the boundary of the upper half space. First we consider local solvability in free space, and then we treat the boundary case. Our method is based on applying the operator to an approximate solution and iterating in the Hölder spaces. A priori estimates for the approximate solution are the essential part of the paper. Pages 1–47.

On convergence of subspaces generated by dilations of polynomials. An application to best local approximation. Fabián E. Levis and Claudia V. Ridolfi.
We study the convergence of a net of subspaces generated by dilations of polynomials in a finite dimensional subspace. As a consequence, we extend the results given by Zó and Cuenya [Advanced Courses of Mathematical Analysis II (Granada, 2004), 193–213, World Scientific, 2007] on a general approach to the problems of best vector-valued approximation on small regions from a finite dimensional subspace of polynomials. Pages 49–62.

Perturbation of Ruelle resonances and Faure–Sjöstrand anisotropic space. Yannick Guedes Bonthonneau.
Given an Anosov vector field $X_0$, all sufficiently close vector fields are also of Anosov type. In this note, we check that the anisotropic spaces described by Faure and Sjöstrand and by Dyatlov and Zworski can be chosen adapted to any smooth vector field sufficiently close to $X_0$ in $C^1$ norm. Pages 63–72.

Generalized metallic structures. Adara M. Blaga and Antonella Nannicini.
We study the properties of a generalized metallic, a generalized product, and a generalized complex structure induced on the generalized tangent bundle of a smooth manifold $M$ by a metallic Riemannian structure $(J,g)$ on $M$, providing conditions for their integrability with respect to a suitable connection. Moreover, using methods of generalized geometry, we lift $(J,g)$ to metallic Riemannian structures on the tangent and cotangent bundles of $M$, underlining the relations between them. Pages 73–86.

A heat conduction problem with sources depending on the average of the heat flux on the boundary. Mahdi Boukrouche and Domingo A. Tarzia.
Motivated by the modeling of temperature regulation in some media, we consider the non-classical heat conduction equation in the domain $D=\mathbb{R}^{n-1}\times\mathbb{R}^{+}$, for which the internal energy supply depends on an average in the time variable of the heat flux $(y, s)\mapsto V(y,s)= u_{x}(0, y, s)$ on the boundary $S=\partial D$. The solution to the problem is found through an integral representation depending on the heat flux on $S$, which is an additional unknown of the considered problem. We obtain that the heat flux $V$ must satisfy a Volterra integral equation of the second kind in the time variable $t$ with a parameter in $\mathbb{R}^{n-1}$. Under some conditions on the data, we show that a unique local solution exists, which can be extended globally in time. Finally, in the one-dimensional case, we obtain the explicit solution by using the Laplace transform and the Adomian decomposition method. Pages 87–101.
https://www.aimsciences.org/article/doi/10.3934/dcdsb.2020047
# American Institute of Mathematical Sciences

August 2020, 25(8): 2949-2967. doi: 10.3934/dcdsb.2020047

## Global existence and convergence rates of solutions for the compressible Euler equations with damping

1 School of Mathematics and Statistics, Shaoguan University, 512005, Shaoguan, China
2 Department of Mathematics, Sun Yat-sen University, 510275, Guangzhou, China

* Corresponding author: Yin Li

Received January 2019; Revised September 2019; Published February 2020

Fund Project: We would like to express our sincere thanks to Academician Boling Guo of the Institute of Applied Physics and Computational Mathematics in Beijing for his fruitful help and discussions. This work is partially supported by the National Natural Science Foundation of China (Nos. 11926354, 11701380 and 11971496), the Natural Science Foundation of Guangdong Province (Nos. 2019A1515011320, 2017A030307022, 2016A030310019 and 2016A030307042), and the Education Research Platform Project of Guangdong Province (No. 2018179).

The Cauchy problem for the 3D compressible Euler equations with damping is considered. Existence of global-in-time smooth solutions is established under the condition that the initial data is a small perturbation of a given constant state in the framework of the Sobolev space $H^3(\mathbb{R}^{3})$ only; no bound on the $L^1$ norm is needed. Moreover, the optimal $L^{2}$-$L^{2}$ convergence rates are also obtained for the solution. Our proof exploits a decomposition into low and high frequencies: only a spectral analysis of the low-frequency part of the Green function of the linearized system is needed, which allows us to avoid some complicated analysis.

Citation: Ruiying Wei, Yin Li, Zheng-an Yao. Global existence and convergence rates of solutions for the compressible Euler equations with damping. Discrete & Continuous Dynamical Systems - B, 2020, 25 (8): 2949-2967. doi: 10.3934/dcdsb.2020047
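The abstract's proof strategy rests on a low/high frequency decomposition: a function is split into its Fourier modes below and above a cutoff, and the two pieces are estimated separately. A toy discrete version of that splitting (a pure-Python DFT on a short sample, purely illustrative of the idea and unrelated to the paper's Green-function analysis) looks like:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (O(n^2), fine for a demo)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT matching dft() above."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def low_high_split(x, cutoff):
    """Split x into a low-frequency part (modes with frequency index
    min(k, n-k) <= cutoff) and the complementary high-frequency part.
    By linearity of the DFT, low + high reconstructs x exactly."""
    n = len(x)
    X = dft(x)
    low_modes = [X[k] if min(k, n - k) <= cutoff else 0.0 for k in range(n)]
    high_modes = [X[k] - low_modes[k] for k in range(n)]
    return ([z.real for z in idft(low_modes)],
            [z.real for z in idft(high_modes)])

signal = [0.0, 1.0, 0.0, -1.0, 0.5, 1.5, -0.5, -1.0]
low, high = low_high_split(signal, cutoff=1)
```

The point of such decompositions in PDE analysis is exactly this linearity: estimates proved separately for the smooth low-frequency piece and the oscillatory high-frequency piece add back up to an estimate for the full solution.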
https://mattermodeling.stackexchange.com/questions/688/extended-hybrid-methods
# Extended Hybrid Methods

Hybrid DFT methods, where the functional is supplemented with Hartree-Fock exchange, have become increasingly popular due to their low cost and decent accuracy. Double hybrids, which mix in an MP2 contribution, have also been developed as a way to improve the accuracy. Has this idea been extended to other post-SCF methods? For example, one could consider doing coupled cluster or configuration interaction calculations where a DFT wavefunction is used as the primary configuration. If not, is there a major hurdle, technical or theoretical, that prevents this from being done?

• @NikeDattani I had initially thought of asking about a "reverse hybrid", but after reading a bit more on double hybrids, apparently there isn't much of a distinction. Double-hybrid DFT methods actually perform an MP2 calculation with the DFT wavefunction, so it's not necessarily clear whether you are improving DFT with MP2/post-SCF or the other way around. – Tyberius May 16 '20 at 17:19
• I once asked about mixing in coupled cluster. The main reluctance to mix in CC is probably that the cost would become $N^6$ for CCSD, which might defeat the purpose of DFT (some people might say that the purpose of DFT is to provide a low-cost method). The nice thing about DFT+MP2 is that the accuracy tends to be better than standard MP2, and gets quite close to CCSD(T) accuracy at roughly MP2 cost. But let me ask Stefan Grimme for a better answer! – Nike Dattani May 30 '20 at 4:45
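At the energy level, the double-hybrid recipe the question alludes to is just a linear mixing of exchange and correlation components. A schematic sketch (the default coefficients are the B2PLYP mixing parameters from Grimme's 2006 functional; the component energies fed in below are made-up placeholder numbers, not results of any real calculation):

```python
def double_hybrid_xc(e_x_dft, e_x_hf, e_c_dft, e_c_mp2, a_x=0.53, c=0.27):
    """Schematic double-hybrid exchange-correlation energy:
    E_xc = (1 - a_x) E_x^DFT + a_x E_x^HF + (1 - c) E_c^DFT + c E_c^MP2.
    Defaults a_x = 0.53, c = 0.27 are the B2PLYP mixing parameters."""
    return (1 - a_x) * e_x_dft + a_x * e_x_hf + (1 - c) * e_c_dft + c * e_c_mp2

# Placeholder component energies (hartree), for illustration only:
e_mixed = double_hybrid_xc(e_x_dft=-1.00, e_x_hf=-0.90,
                           e_c_dft=-0.30, e_c_mp2=-0.25)
```

Setting `a_x = c = 0` recovers the pure DFT functional, while `a_x = c = 1` recovers HF exchange plus MP2 correlation, which makes the "who is correcting whom" ambiguity raised in the first comment concrete: the double hybrid sits on a continuous line between the two limits.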
http://holypet.ru/the-homotopy-category-of-simply-connected-4-manifolds-london-mathematical-society-lecture-note-series
# The Homotopy Category of Simply Connected 4-Manifolds (London Mathematical Society Lecture Note Series)

...particular group contains all the complexity of smooth 4-manifolds with the given fundamental group, including not just their homotopy types but also their diffeomorphism types. In particular, there is a subset of the trisections of the trivial group corresponding to the countably many exotic smooth structures on a given simply connected topological 4-manifold.

Sep 10, 2014 · We determine loop space decompositions of simply-connected four-manifolds, $(n-1)$-connected $2n$-dimensional manifolds provided $n \neq 4, 8$, and connected sums of products of two spheres. These are obtained as special cases of a more general loop space decomposition of certain torsion-free CW-complexes with well-behaved skeleta and some Poincaré duality features.

Two simply-connected closed topological 4-manifolds are homeomorphic if and only if they have isomorphic intersection forms and the same Kirby–Siebenmann invariant. Given any even unimodular symmetric bilinear form over $\mathbb{Z}$ there is, up to homeomorphism, a unique simply connected topological 4-manifold with that intersection form.

1. Algebraic Topology. The critical algebraic topological information for a closed, simply connected, smooth 4-manifold $X$ is encoded in its Euler characteristic $e_X$, its signature $\sigma_X$, and its type $t_X$ ($t_X = 0$ if the intersection form of $X$ is even and $t_X = 1$ if it is odd).

Lambert [13]. Note that this assertion for $Q_0$ and $EQ$ is not derived from the arguments of the Robertello–Arf invariants of links. Cf. [13], [17]. Example 2.12. For each $s > 1$ there are compact 4-manifolds $W$ homotopy equivalent to a bouquet of $s$ 2-spheres such that a basis of $H_2(W; \mathbb{Z})$ is represented by...

In the first half of the course we will introduce smooth 4-manifolds.
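Two of the invariants just listed, the signature $\sigma_X$ and the type $t_X$, are computed purely from an integer matrix representing the intersection form. A small exact-arithmetic sketch (an illustrative helper, not code from the book): the type is even exactly when every diagonal entry of the symmetric matrix is even, and the signature follows from diagonalizing by a symmetric congruence, per Sylvester's law of inertia.

```python
from fractions import Fraction

def signature_and_type(M):
    """Signature and parity type of a symmetric integer matrix M,
    viewed as the matrix of an intersection form."""
    n = len(M)
    # Q(v) = v^T M v is always even iff every diagonal entry is even,
    # since off-diagonal entries contribute in pairs.
    parity = "even" if all(M[i][i] % 2 == 0 for i in range(n)) else "odd"

    # Diagonalize by symmetric row/column operations (congruence),
    # using exact rationals so sign counts are reliable.
    A = [[Fraction(x) for x in row] for row in M]
    for i in range(n):
        if A[i][i] == 0:
            for j in range(i + 1, n):
                if A[i][j] != 0:
                    # Add row/column j to row/column i to create a pivot.
                    for k in range(n):
                        A[i][k] += A[j][k]
                    for k in range(n):
                        A[k][i] += A[k][j]
                    break
        if A[i][i] == 0:
            continue  # degenerate direction, contributes 0 to the signature
        for j in range(i + 1, n):
            f = A[j][i] / A[i][i]
            for k in range(n):
                A[j][k] -= f * A[i][k]
            for k in range(n):
                A[k][j] -= f * A[k][i]
    sig = sum((A[i][i] > 0) - (A[i][i] < 0) for i in range(n))
    return sig, parity

# Hyperbolic form H (even, signature 0) vs. diag(1, -1) (odd, signature 0):
print(signature_and_type([[0, 1], [1, 0]]))   # (0, 'even')
print(signature_and_type([[1, 0], [0, -1]]))  # (0, 'odd')
```

These two rank-2 forms illustrate why the type matters: they have equal rank and signature yet are not isomorphic over the integers, and by Freedman's classification quoted above they belong to non-homeomorphic simply connected 4-manifolds.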
We will briefly survey the topological classification of simply-connected 4-manifolds, and then start to explain how the smooth theory diverges from the topological one, via old tools (e.g. Rochlin's theorem) and more recent ones (e.g. Seiberg–Witten theory), leading to a range of...

...$k$, then $f$ is a homotopy equivalence. Example 1.1 (the Cantor set $C \subset [0,1]$). Let $C$ be the Cantor set with the discrete topology. Then $C \to C$ induces isomorphisms on all homotopy groups, but it is not a homotopy equivalence, so the CW hypothesis is required. Theorem 1.2 (Hurewicz Theorem). Let $X$ be a space with $\pi_k(X, x) = 0$ for $k$...

Thus, for a simply-connected Poincaré polyhedron $X$ a PL-manifold of dimension $\geq 5$ homotopy equivalent to it exists if and only if a lifting (4) exists. The problem of the existence of topological or smooth manifolds that are homotopy equivalent to an even simply-connected Poincaré polyhedron is still more complicated.

In 4 dimensions, simply connected topological manifolds are classified by the intersection form and the Kirby–Siebenmann invariant. The intersection form is obviously defined for any 4-dimensional Poincaré complex. I think Kirby–Siebenmann does as well, but I'm not so sure about that. We give criteria for a closed 4-manifold to be homotopy equivalent to the total space of an $S^1$-bundle over a closed 3-manifold.

Topology, Vol. 29, No. 4, pp. 419–440, 1990. Printed in Great Britain. © 1990 Pergamon Press plc. ON THE HOMOTOPY THEORY OF SIMPLY CONNECTED FOUR MANIFOLDS. Tim D. Cochran and Nathan Habegger. Received in revised form 14 December 1988. INTRODUCTION AND STATEMENT OF RESULTS. THIS PAPER is concerned with the homotopy theory of 1-connected 4-manifolds. THEOREM 3. If $M_1$, $M_2$ are h-cobordant simply-connected 4-manifolds, then for some $k$, $M_1 \# k(S^2 \times S^2) \cong M_2 \# k(S^2 \times S^2)$. Here $k(\cdot)$ denotes $k$ copies and $\#$ connected sum.
We obtain a number of corollaries; for example, the Grothendieck group of oriented simply-connected 4-manifolds is the free abelian group on $P$ and $Q$, the complex... The topology of simply-connected four-manifolds is a subject of widespread and enduring interest. They have been classified up to homotopy type by Milnor [Mi] and up to homeomorphism type by Freedman.

May 15, 2002 · Abstract: The main theorem asserts that every 2-dimensional homology class of a compact simply connected PL 4-manifold can be represented by a codimension-0 submanifold consisting of a contractible manifold with a single 2-handle attached. Representing homology classes of simply connected 4-manifolds. Article in Topology and its Applications 120 (1–2): 57–65, May 2002.

Simple homotopy; Reidemeister torsion; surgery theory. Modern algebraic topology beyond cobordism theory, such as extraordinary cohomology, is little used in the classification of manifolds, because these invariants are homotopy-invariant, and hence don't help with the finer classifications above homotopy.

Proposition 8. A path connected space is simply connected if and only if there is only one homotopy class of paths between any two points. Proof. Suppose $X$ is simply connected. Given two paths $f, g$ between $x_0$ and $x_1$, we have $f \cdot \bar{g} \simeq e$ and $\bar{g} \cdot g \simeq e$, and so $f \simeq f \cdot \bar{g} \cdot g \simeq g$. Conversely, suppose there is only one homotopy class of paths between any...

• Jun 23, 2003 · It deals with the problem of computing the homotopy classes of maps algebraically and determining the law of composition for such maps. This problem is solved in the book by introducing new algebraic models of a 4-manifold. To aid those interested in further reading there is a full list of references to the literature.
• The Homotopy Category of Simply Connected 4-Manifolds (London Mathematical Society Lecture Note Series). Surgery on Simply-Connected Manifolds.
• London Mathematical Society lecture note series, 297. Other titles: Homotopy category of simply connected four manifolds. Responsibility: Hans Joachim Baues.
• 297 The homotopy category of simply connected 4-manifolds, H.-J. BAUES; 298 Higher operads, higher categories, T. LEINSTER (ed); 299 Kleinian groups and hyperbolic 3-manifolds, Y. KOMORI, V. MARKOVIC & C. SERIES (eds); 300 Introduction to Möbius differential geometry, U. HERTRICH-JEROMIN; 301 Stable modules and the D2-problem, F.E.A. JOHNSON.

## ON SIMPLY-CONNECTED 4-MANIFOLDS

Smooth 4-manifolds vs. symplectic 4-manifolds vs. complex surfaces. The symplectic geometry part of the course follows the book by Ana Cannas da Silva, Lectures on Symplectic Geometry (Lecture Notes in Mathematics 1764, Springer-Verlag); the discussion of Kähler geometry mostly follows the book by R. O. Wells, Differential Analysis on Complex...

AMERICAN MATHEMATICAL SOCIETY, Volume 219, 1976. CLASSIFICATION OF SIMPLY CONNECTED FOUR-DIMENSIONAL RR-MANIFOLDS, by Gr. Tsagas and A. Ledger. ABSTRACT. Let $(M, g)$ be a Riemannian manifold. We assume that there is a mapping $s: M \to I(M)$, where $I(M)$ is the group of isometries of $(M, g)$, such that $s_x = s(x)$, $\forall x \in M$, has $x$ as an isolated fixed point.

On 2-dimensional homology classes of 4-manifolds. Volume 82, Issue 1. Selman Akbulut. Journal of the London Mathematical Society, Vol. 91, Issue 2, p. 439.
Part of the Lecture Notes in Mathematics book series (LNM, volume 1346). Keywords: Cohomology Class, Homotopy Type, Homotopy Classification, Topological. Classification of simply-connected topological 6-manifolds. In: Viro O.Y., Vershik A.M. (eds) Topology and Geometry — Rohlin Seminar. Lecture Notes in Mathematics, vol 1346. ### Representing homology classes of simply connected 4-manifolds. simply connected, PL 4-manifold, then each element of $H_2(W)$ can be represented by a compact PL submanifold M ⊂ W such that M consists of a Mazur-like contractible 4-manifold with a single 2-handle attached. Theorem 2. If W is a compact, simply connected, PL submanifold of $S^4$, then each element of $H_2(W)$ can be represented by a locally flat. The present paper is devoted to a further study of the homotopy invariants of non-simply connected manifolds which correspond to the obstruction to modifying one manifold. A. S. MISCENKO, American Mathematical Society. The geometry and topology of three-manifolds is a set of widely circulated but unpublished notes by William Thurston from 1978 to 1980 describing his work on 3-manifolds. The notes introduced several new ideas into geometric topology, including orbifolds, pleated manifolds, and train tracks. Distribution. Copies of the original 1980 notes were circulated by Princeton University. CLASSIFICATION OF CLOSED TOPOLOGICAL 4-MANIFOLDS. Then a closed 4-manifold M is topologically s-cobordant to the total space of an F-bundle over B if and only if $\pi_1 M$ is an extension of $\pi_1 B$ by $\pi_1 F$ and the Euler characteristic of M is the product of the Euler characteristics of F and B. References [1] M. Freedman. The Topology of 4-dimensional Manifolds. simply-connected manifolds not diffeomorphic to $S^4$, manifolds with simple infinite fundamental group. First examples of 3-manifolds not admitting a flat conformal structure were constructed by W. Goldman in [1].
The above theorem shows that if M admits a flat conformal structure, it does not imply that all components of its connected. Thus, the set of homeomorphism classes of surfaces is a commutative monoid with respect to connected sum, and is generated by the torus $T$ and the projective plane $P$, with the sole relation $P \# P \# P = P \# T$. Compact 2-manifolds possibly with boundary are homeomorphic if and only if they have isomorphic intersection forms. Cf. the topological classification of simply-connected 4-manifolds. This workshop is principally funded by the Clay Mathematical Institute. Additional support has been received from the London Mathematical Society and the Heilbronn Institute. We also thank the Mathematical Institute, Oxford University, for providing lecture and class rooms. Cauchy's theorem is not true for non-simply connected regions in C. The fundamental group measures how far a space is from being simply connected. The fundamental group briefly consists of equivalence classes of homotopic closed paths, with the law of composition being following one path by another. However, we want to make this precise in a series. Jan 25, 1991 · Geometry of Low-Dimensional Manifolds, Vol. 2: Symplectic Manifolds and Jones-Witten Theory (London Mathematical Society Lecture Note Series), ISBN 9780521400015: Donaldson, S. K.: Books. We want H to be a homotopy from f to r: H inherits the essential properties listed in Definition 1.6, including continuity, from $h_1$ and $h_2$. Hence, $H$ is a homotopy from $f$ to $r$, and $f \simeq r$. Definition 1.11. For $f \in PX$, the homotopy class $[f]$ of $f$ is the equivalence class of $f$ under the equivalence relation $\simeq$. a map $f: X \to Y$, each homotopy of $f|_A: A \to Y$ can be extended to a homotopy of $f: X \to Y$. If A has the HEP in X with respect to all spaces Y then A is said to have the absolute homotopy extension property (AHEP) in X. The following three theorems are well-known in their second formulations.
simply-connected minimal symplectic 4-manifold that is homeo. There has been a considerable amount of progress in the discovery of exotic smooth structures on simply-connected 4-manifolds with small Euler characteristic. In early 2004, Jongil Park [P2] constructed the first example of exotic smooth. the homotopy exact sequence for a. The additional structure of coordinates and tangents can be used to revisit homology, gaining additional insight and results. In particular, as we saw in the previous section, the exterior derivative $\mathrm{d}$ exhibits structure reminiscent of the boundary homomorphism $\partial$ in homology. This can be exploited to build a version of homology based on forms instead of on simplices.
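The parallel between $\mathrm{d}$ and $\partial$ can be made explicit. The following display uses standard de Rham-theory notation (it is not taken from the excerpt itself): both operators square to zero, and Stokes' theorem pairs them under integration.

```latex
% Exterior derivative on forms and boundary operator on chains:
d \colon \Omega^{k}(M) \to \Omega^{k+1}(M), \qquad d \circ d = 0,
\qquad
\partial \colon C_{k}(M) \to C_{k-1}(M), \qquad \partial \circ \partial = 0.
% Stokes' theorem makes the two operators adjoint under integration:
\int_{c} d\omega = \int_{\partial c} \omega
\quad \text{for a } k\text{-form } \omega \text{ and a } (k{+}1)\text{-chain } c.
```

It is exactly this pair of identities that lets one build a (co)homology theory from forms, mirroring the simplicial construction.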
http://www.math-only-math.com/practice-test-on-multiples.html
# Practice Test on Multiples Math practice test on multiples will help the children to find the multiples of a number, finding common multiples of two or more numbers and also with the help of division to check the multiples. Read the questions carefully on multiples to find the exact answer. 1. Write the multiples of the following numbers till the 4ᵗʰ place: (a) 7,   (b) 9,   (c) 11,   (d) 13 2. Write the multiples of 8, 12, 14, 16 till the 6ᵗʰ place. 3. Write the multiples of 6 till the 8ᵗʰ place – in ascending order and then in descending order. 4. Write the multiples of 10, 100, 1000 till the 5ᵗʰ place. Can you discover any pattern? 5. Make a chart of numbers from 1 to 50. Color the multiples of 3 in red.     Find the common multiples of 3 and 9. Color the multiples of 9 in green.     Are all multiples of 3 multiples of 9…? 6. Find the next four multiples: (a) 0, 5, 10 _____, _____, _____, _____. (b) 0, 10, 20 _____, _____, _____, _____. Now write the common multiples. 7. What kind of numbers are 2, 4, 6 …? Write the multiples of each number till the 8ᵗʰ place. Can you find any common multiples? 8. What kind of numbers are 1, 3, 5 …? Write the multiples of each number till you find a common multiple. 9. Check mentally if the dividend is a multiple of the divisor by writing ‘Yes’ or ‘No’: (a) 27 ÷ 5 (b) 28 ÷ 7 (c) 54 ÷ 6 (d) 59 ÷ 9 (e) 42 ÷ 6 (f) 72 ÷ 8 (g) 84 ÷ 7 (h) 104 ÷ 13 10. Divide and check if the dividend is a multiple of the divisor: (a) 385 ÷ 7 (b) 297 ÷ 9 (c) 459 ÷ 6 (d) 376 ÷ 4 (e) 526 ÷ 5 (f) 625 ÷ 5 (g) 273 ÷ 7 (h) 600 ÷ 8 11. Fill in the blanks: (a) The third multiple of 6 is _____________. (b) The first multiple of a number is the _____________. (c) Every number is a multiple of _____________. (d) The product of two numbers is also their _____________. (e) The dividend becomes the multiple when there is no _____________. (f) The seventh multiple of 7 is _____________. (g) The number which has no multiple other than itself is _____________.
(h) The first common multiple of 2 and 3 is _____________. Answers for the practice test on multiples are given below so that children can check the exact answers on multiples of the above questions. 1. (a) 7, 14, 21, 28 (b) 9, 18, 27, 36 (c) 11, 22, 33, 44 (d) 13, 26, 39, 52 2. 8, 16, 24, 32, 40, 48 12, 24, 36, 48, 60, 72 14, 28, 42, 56, 70, 84 16, 32, 48, 64, 80, 96 3. 6, 12, 18, 24, 30, 36, 42, 48 48, 42, 36, 30, 24, 18, 12, 6 4. 10, 20, 30, 40, 50 100, 200, 300, 400, 500 1000, 2000, 3000, 4000, 5000 5. Common multiples of 3 and 9 are 9, 18, 27, 36, 45 Not all multiples of 3 are multiples of 9 6. (a) 15, 20, 25, 30 (b) 30, 40, 50, 60 Common multiples: 10, 20, 30 7. 2, 4, 6 are even numbers Multiples of 2: 2, 4, 6, 8, 10, 12, 14, 16 Multiples of 4: 4, 8, 12, 16, 20, 24, 28, 32 Multiples of 6: 6, 12, 18, 24, 30, 36, 42, 48 Common multiple of 2, 4 and 6 is 12 8. 1, 3, 5 are odd numbers Multiples of 1: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 Multiples of 3: 3, 6, 9, 12, 15 Multiples of 5: 5, 10, 15 Common multiple of 1, 3 and 5 is 15 9. (a) No (b) Yes (c) Yes (d) No (e) Yes (f) Yes (g) Yes (h) Yes 10. (a) Quotient: 55, Remainder: 0, Yes (b) Quotient: 33, Remainder: 0, Yes (c) Quotient: 76, Remainder: 3, No (d) Quotient: 94, Remainder: 0, Yes (e) Quotient: 105, Remainder: 1, No (f) Quotient: 125, Remainder: 0, Yes (g) Quotient: 39, Remainder: 0, Yes (h) Quotient: 75, Remainder: 0, Yes 11. (a) 18 (b) the number itself (c) 1 (d) multiple (e) remainder (f) 49 (g) 0 (h) 6
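The divisibility checks in questions 9 and 10 all follow one rule: the dividend is a multiple of the divisor exactly when the division leaves remainder 0. A short Python sketch (not part of the worksheet; the function name `is_multiple` is ours) that applies this rule to the question-10 pairs:

```python
# A dividend is a multiple of the divisor exactly when
# dividing leaves remainder 0.
def is_multiple(dividend, divisor):
    quotient, remainder = divmod(dividend, divisor)
    return remainder == 0

# Dividend/divisor pairs from question 10 of the worksheet.
pairs = [(385, 7), (297, 9), (459, 6), (376, 4),
         (526, 5), (625, 5), (273, 7), (600, 8)]

for dividend, divisor in pairs:
    q, r = divmod(dividend, divisor)
    answer = "Yes" if is_multiple(dividend, divisor) else "No"
    print(f"{dividend} ÷ {divisor}: quotient {q}, remainder {r} -> {answer}")
```

Running this reproduces the quotient/remainder table given in the answers above.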
http://sarabander.github.io/sicp/html/3_002e3.xhtml
### 3.3Modeling with Mutable Data Chapter 2 dealt with compound data as a means for constructing computational objects that have several parts, in order to model real-world objects that have several aspects. In that chapter we introduced the discipline of data abstraction, according to which data structures are specified in terms of constructors, which create data objects, and selectors, which access the parts of compound data objects. But we now know that there is another aspect of data that chapter 2 did not address. The desire to model systems composed of objects that have changing state leads us to the need to modify compound data objects, as well as to construct and select from them. In order to model compound objects with changing state, we will design data abstractions to include, in addition to selectors and constructors, operations called mutators, which modify data objects. For instance, modeling a banking system requires us to change account balances. Thus, a data structure for representing bank accounts might admit an operation `(set-balance! ⟨account⟩ ⟨new-value⟩)` that changes the balance of the designated account to the designated new value. Data objects for which mutators are defined are known as mutable data objects. Chapter 2 introduced pairs as a general-purpose “glue” for synthesizing compound data. We begin this section by defining basic mutators for pairs, so that pairs can serve as building blocks for constructing mutable data objects. These mutators greatly enhance the representational power of pairs, enabling us to build data structures other than the sequences and trees that we worked with in 2.2. We also present some examples of simulations in which complex systems are modeled as collections of objects with local state. #### 3.3.1Mutable List Structure The basic operations on pairs—`cons`, `car`, and `cdr`—can be used to construct list structure and to select parts from list structure, but they are incapable of modifying list structure. 
The same is true of the list operations we have used so far, such as `append` and `list`, since these can be defined in terms of `cons`, `car`, and `cdr`. To modify list structures we need new operations. The primitive mutators for pairs are `set-car!` and `set-cdr!`. `Set-car!` takes two arguments, the first of which must be a pair. It modifies this pair, replacing the `car` pointer by a pointer to the second argument of `set-car!`.144 As an example, suppose that `x` is bound to the list `((a b) c d)` and `y` to the list `(e f)` as illustrated in Figure 3.12. Evaluating the expression ` (set-car! x y)` modifies the pair to which `x` is bound, replacing its `car` by the value of `y`. The result of the operation is shown in Figure 3.13. The structure `x` has been modified and would now be printed as `((e f) c d)`. The pairs representing the list `(a b)`, identified by the pointer that was replaced, are now detached from the original structure.145 Compare Figure 3.13 with Figure 3.14, which illustrates the result of executing `(define z (cons y (cdr x)))` with `x` and `y` bound to the original lists of Figure 3.12. The variable `z` is now bound to a new pair created by the `cons` operation; the list to which `x` is bound is unchanged. The `set-cdr!` operation is similar to `set-car!`. The only difference is that the `cdr` pointer of the pair, rather than the `car` pointer, is replaced. The effect of executing `(set-cdr! x y)` on the lists of Figure 3.12 is shown in Figure 3.15. Here the `cdr` pointer of `x` has been replaced by the pointer to ```(e f)```. Also, the list `(c d)`, which used to be the `cdr` of `x`, is now detached from the structure. `Cons` builds new list structure by creating new pairs, while `set-car!` and `set-cdr!` modify existing pairs. Indeed, we could implement `cons` in terms of the two mutators, together with a procedure `get-new-pair`, which returns a new pair that is not part of any existing list structure. 
We obtain the new pair, set its `car` and `cdr` pointers to the designated objects, and return the new pair as the result of the `cons`.146 ```(define (cons x y) (let ((new (get-new-pair))) (set-car! new x) (set-cdr! new y) new))``` Exercise 3.12: The following procedure for appending lists was introduced in 2.2.1: ```(define (append x y) (if (null? x) y (cons (car x) (append (cdr x) y))))``` `Append` forms a new list by successively `cons`ing the elements of `x` onto `y`. The procedure `append!` is similar to `append`, but it is a mutator rather than a constructor. It appends the lists by splicing them together, modifying the final pair of `x` so that its `cdr` is now `y`. (It is an error to call `append!` with an empty `x`.) ```(define (append! x y) (set-cdr! (last-pair x) y) x)``` Here `last-pair` is a procedure that returns the last pair in its argument: ```(define (last-pair x) (if (null? (cdr x)) x (last-pair (cdr x))))``` Consider the interaction ```(define x (list 'a 'b)) (define y (list 'c 'd)) (define z (append x y)) z (a b c d) (cdr x) ⟨response⟩ (define w (append! x y)) w (a b c d) (cdr x) ⟨response⟩``` What are the missing `⟨`response`⟩`s? Draw box-and-pointer diagrams to explain your answer. Exercise 3.13: Consider the following `make-cycle` procedure, which uses the `last-pair` procedure defined in Exercise 3.12: ```(define (make-cycle x) (set-cdr! (last-pair x) x) x)``` Draw a box-and-pointer diagram that shows the structure `z` created by `(define z (make-cycle (list 'a 'b 'c)))` What happens if we try to compute `(last-pair z)`? Exercise 3.14: The following procedure is quite useful, although obscure: ```(define (mystery x) (define (loop x y) (if (null? x) y (let ((temp (cdr x))) (set-cdr! x y) (loop temp x)))) (loop x '()))``` `Loop` uses the “temporary” variable `temp` to hold the old value of the `cdr` of `x`, since the `set-cdr!` on the next line destroys the `cdr`. Explain what `mystery` does in general. 
Suppose `v` is defined by `(define v (list 'a 'b 'c 'd))`. Draw the box-and-pointer diagram that represents the list to which `v` is bound. Suppose that we now evaluate `(define w (mystery v))`. Draw box-and-pointer diagrams that show the structures `v` and `w` after evaluating this expression. What would be printed as the values of `v` and `w`? ##### Sharing and identity We mentioned in 3.1.3 the theoretical issues of “sameness” and “change” raised by the introduction of assignment. These issues arise in practice when individual pairs are shared among different data objects. For example, consider the structure formed by ```(define x (list 'a 'b)) (define z1 (cons x x))``` As shown in Figure 3.16, `z1` is a pair whose `car` and `cdr` both point to the same pair `x`. This sharing of `x` by the `car` and `cdr` of `z1` is a consequence of the straightforward way in which `cons` is implemented. In general, using `cons` to construct lists will result in an interlinked structure of pairs in which many individual pairs are shared by many different structures. In contrast to Figure 3.16, Figure 3.17 shows the structure created by ```(define z2 (cons (list 'a 'b) (list 'a 'b)))``` In this structure, the pairs in the two `(a b)` lists are distinct, although the actual symbols are shared.147 When thought of as a list, `z1` and `z2` both represent “the same” list, `((a b) a b)`. In general, sharing is completely undetectable if we operate on lists using only `cons`, `car`, and `cdr`. However, if we allow mutators on list structure, sharing becomes significant. As an example of the difference that sharing can make, consider the following procedure, which modifies the `car` of the structure to which it is applied: ```(define (set-to-wow! x) (set-car! (car x) 'wow) x)``` Even though `z1` and `z2` are “the same” structure, applying `set-to-wow!` to them yields different results. 
With `z1`, altering the `car` also changes the `cdr`, because in `z1` the `car` and the `cdr` are the same pair. With `z2`, the `car` and `cdr` are distinct, so `set-to-wow!` modifies only the `car`: ```z1 ((a b) a b) (set-to-wow! z1) ((wow b) wow b) z2 ((a b) a b) (set-to-wow! z2) ((wow b) a b) ``` One way to detect sharing in list structures is to use the predicate `eq?`, which we introduced in 2.3.1 as a way to test whether two symbols are equal. More generally, `(eq? x y)` tests whether `x` and `y` are the same object (that is, whether `x` and `y` are equal as pointers). Thus, with `z1` and `z2` as defined in Figure 3.16 and Figure 3.17, ```(eq? (car z1) (cdr z1))``` is true and `(eq? (car z2) (cdr z2))` is false. As will be seen in the following sections, we can exploit sharing to greatly extend the repertoire of data structures that can be represented by pairs. On the other hand, sharing can also be dangerous, since modifications made to structures will also affect other structures that happen to share the modified parts. The mutation operations `set-car!` and `set-cdr!` should be used with care; unless we have a good understanding of how our data objects are shared, mutation can have unanticipated results.148 Exercise 3.15: Draw box-and-pointer diagrams to explain the effect of `set-to-wow!` on the structures `z1` and `z2` above. Exercise 3.16: Ben Bitdiddle decides to write a procedure to count the number of pairs in any list structure. “It’s easy,” he reasons. “The number of pairs in any structure is the number in the `car` plus the number in the `cdr` plus one more to count the current pair.” So Ben writes the following procedure: ```(define (count-pairs x) (if (not (pair? x)) 0 (+ (count-pairs (car x)) (count-pairs (cdr x)) 1)))``` Show that this procedure is not correct. 
In particular, draw box-and-pointer diagrams representing list structures made up of exactly three pairs for which Ben’s procedure would return 3; return 4; return 7; never return at all. Exercise 3.17: Devise a correct version of the `count-pairs` procedure of Exercise 3.16 that returns the number of distinct pairs in any structure. (Hint: Traverse the structure, maintaining an auxiliary data structure that is used to keep track of which pairs have already been counted.) Exercise 3.18: Write a procedure that examines a list and determines whether it contains a cycle, that is, whether a program that tried to find the end of the list by taking successive `cdr`s would go into an infinite loop. Exercise 3.13 constructed such lists. Exercise 3.19: Redo Exercise 3.18 using an algorithm that takes only a constant amount of space. (This requires a very clever idea.) ##### Mutation is just assignment When we introduced compound data, we observed in 2.1.3 that pairs can be represented purely in terms of procedures: ```(define (cons x y) (define (dispatch m) (cond ((eq? m 'car) x) ((eq? m 'cdr) y) (else (error "Undefined operation: CONS" m)))) dispatch) (define (car z) (z 'car)) (define (cdr z) (z 'cdr))``` The same observation is true for mutable data. We can implement mutable data objects as procedures using assignment and local state. For instance, we can extend the above pair implementation to handle `set-car!` and `set-cdr!` in a manner analogous to the way we implemented bank accounts using `make-account` in 3.1.1: ```(define (cons x y) (define (set-x! v) (set! x v)) (define (set-y! v) (set! y v)) (define (dispatch m) (cond ((eq? m 'car) x) ((eq? m 'cdr) y) ((eq? m 'set-car!) set-x!) ((eq? m 'set-cdr!) set-y!) (else (error "Undefined operation: CONS" m)))) dispatch) (define (car z) (z 'car)) (define (cdr z) (z 'cdr)) (define (set-car! z new-value) ((z 'set-car!) new-value) z) (define (set-cdr! z new-value) ((z 'set-cdr!) 
new-value) z)``` Assignment is all that is needed, theoretically, to account for the behavior of mutable data. As soon as we admit `set!` to our language, we raise all the issues, not only of assignment, but of mutable data in general.149 Exercise 3.20: Draw environment diagrams to illustrate the evaluation of the sequence of expressions ```(define x (cons 1 2)) (define z (cons x x)) (set-car! (cdr z) 17) (car x) 17 ``` using the procedural implementation of pairs given above. (Compare Exercise 3.11.) #### 3.3.2Representing Queues The mutators `set-car!` and `set-cdr!` enable us to use pairs to construct data structures that cannot be built with `cons`, `car`, and `cdr` alone. This section shows how to use pairs to represent a data structure called a queue. Section 3.3.3 will show how to represent data structures called tables. A queue is a sequence in which items are inserted at one end (called the rear of the queue) and deleted from the other end (the front). Figure 3.18 shows an initially empty queue in which the items `a` and `b` are inserted. Then `a` is removed, `c` and `d` are inserted, and `b` is removed. Because items are always removed in the order in which they are inserted, a queue is sometimes called a FIFO (first in, first out) buffer. In terms of data abstraction, we can regard a queue as defined by the following set of operations: • a constructor: `(make-queue)` returns an empty queue (a queue containing no items). • two selectors: `(empty-queue? ⟨queue⟩)` tests if the queue is empty. `(front-queue ⟨queue⟩)` returns the object at the front of the queue, signaling an error if the queue is empty; it does not modify the queue. • two mutators: `(insert-queue! ⟨queue⟩ ⟨item⟩)` inserts the item at the rear of the queue and returns the modified queue as its value. `(delete-queue! ⟨queue⟩)` removes the item at the front of the queue and returns the modified queue as its value, signaling an error if the queue is empty before the deletion. 
Because a queue is a sequence of items, we could certainly represent it as an ordinary list; the front of the queue would be the `car` of the list, inserting an item in the queue would amount to appending a new element at the end of the list, and deleting an item from the queue would just be taking the `cdr` of the list. However, this representation is inefficient, because in order to insert an item we must scan the list until we reach the end. Since the only method we have for scanning a list is by successive `cdr` operations, this scanning requires $\mathrm{\Theta }\left(n\right)$ steps for a list of $n$ items. A simple modification to the list representation overcomes this disadvantage by allowing the queue operations to be implemented so that they require $\mathrm{\Theta }\left(1\right)$ steps; that is, so that the number of steps needed is independent of the length of the queue. The difficulty with the list representation arises from the need to scan to find the end of the list. The reason we need to scan is that, although the standard way of representing a list as a chain of pairs readily provides us with a pointer to the beginning of the list, it gives us no easily accessible pointer to the end. The modification that avoids the drawback is to represent the queue as a list, together with an additional pointer that indicates the final pair in the list. That way, when we go to insert an item, we can consult the rear pointer and so avoid scanning the list. A queue is represented, then, as a pair of pointers, `front-ptr` and `rear-ptr`, which indicate, respectively, the first and last pairs in an ordinary list. Since we would like the queue to be an identifiable object, we can use `cons` to combine the two pointers. Thus, the queue itself will be the `cons` of the two pointers. Figure 3.19 illustrates this representation. 
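The front-pointer/rear-pointer idea is language-independent. As a rough sketch of the same $\mathrm{\Theta }\left(1\right)$ design in Python (illustrative only — the class names `_Pair` and `Queue` are invented here, and unlike the book's `delete-queue!`, this `delete` returns the removed item rather than the queue):

```python
# A queue as a pair of pointers into a chain of cells:
# front_ptr names the first cell, rear_ptr the last, so both
# insertion and deletion touch a constant number of cells.

class _Pair:
    def __init__(self, item, next=None):
        self.item = item   # analogous to the car
        self.next = next   # analogous to the cdr

class Queue:
    def __init__(self):
        self.front_ptr = None   # analogous to front-ptr
        self.rear_ptr = None    # analogous to rear-ptr

    def is_empty(self):
        # Emptiness is judged by the front pointer alone, as in the text.
        return self.front_ptr is None

    def insert(self, item):
        new_pair = _Pair(item)
        if self.is_empty():
            self.front_ptr = self.rear_ptr = new_pair
        else:
            self.rear_ptr.next = new_pair   # splice onto the final pair
            self.rear_ptr = new_pair        # advance the rear pointer
        return self

    def delete(self):
        if self.is_empty():
            raise IndexError("DELETE! called with an empty queue")
        item = self.front_ptr.item
        # Advance the front pointer; the rear pointer may be left
        # dangling when the queue empties, just as in the book's version.
        self.front_ptr = self.front_ptr.next
        return item
```

With `q = Queue()`, the calls `q.insert('a')`, `q.insert('b')`, then `q.delete()` yield `'a'`, mirroring the FIFO trace of Figure 3.18.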
To define the queue operations we use the following procedures, which enable us to select and to modify the front and rear pointers of a queue: ```(define (front-ptr queue) (car queue)) (define (rear-ptr queue) (cdr queue)) (define (set-front-ptr! queue item) (set-car! queue item)) (define (set-rear-ptr! queue item) (set-cdr! queue item))``` Now we can implement the actual queue operations. We will consider a queue to be empty if its front pointer is the empty list: ```(define (empty-queue? queue) (null? (front-ptr queue)))``` The `make-queue` constructor returns, as an initially empty queue, a pair whose `car` and `cdr` are both the empty list: `(define (make-queue) (cons '() '()))` To select the item at the front of the queue, we return the `car` of the pair indicated by the front pointer: ```(define (front-queue queue) (if (empty-queue? queue) (error "FRONT called with an empty queue" queue) (car (front-ptr queue))))``` To insert an item in a queue, we follow the method whose result is indicated in Figure 3.20. We first create a new pair whose `car` is the item to be inserted and whose `cdr` is the empty list. If the queue was initially empty, we set the front and rear pointers of the queue to this new pair. Otherwise, we modify the final pair in the queue to point to the new pair, and also set the rear pointer to the new pair. ```(define (insert-queue! queue item) (let ((new-pair (cons item '()))) (cond ((empty-queue? queue) (set-front-ptr! queue new-pair) (set-rear-ptr! queue new-pair) queue) (else (set-cdr! (rear-ptr queue) new-pair) (set-rear-ptr! queue new-pair) queue))))``` To delete the item at the front of the queue, we merely modify the front pointer so that it now points at the second item in the queue, which can be found by following the `cdr` pointer of the first item (see Figure 3.21):150 ```(define (delete-queue! queue) (cond ((empty-queue? queue) (error "DELETE! called with an empty queue" queue)) (else (set-front-ptr! 
queue (cdr (front-ptr queue))) queue)))``` Exercise 3.21: Ben Bitdiddle decides to test the queue implementation described above. He types in the procedures to the Lisp interpreter and proceeds to try them out: ```(define q1 (make-queue)) (insert-queue! q1 'a) ((a) a) (insert-queue! q1 'b) ((a b) b) (delete-queue! q1) ((b) b) (delete-queue! q1) (() b) ``` “It’s all wrong!” he complains. “The interpreter’s response shows that the last item is inserted into the queue twice. And when I delete both items, the second `b` is still there, so the queue isn’t empty, even though it’s supposed to be.” Eva Lu Ator suggests that Ben has misunderstood what is happening. “It’s not that the items are going into the queue twice,” she explains. “It’s just that the standard Lisp printer doesn’t know how to make sense of the queue representation. If you want to see the queue printed correctly, you’ll have to define your own print procedure for queues.” Explain what Eva Lu is talking about. In particular, show why Ben’s examples produce the printed results that they do. Define a procedure `print-queue` that takes a queue as input and prints the sequence of items in the queue. Exercise 3.22: Instead of representing a queue as a pair of pointers, we can build a queue as a procedure with local state. The local state will consist of pointers to the beginning and the end of an ordinary list. Thus, the `make-queue` procedure will have the form ```(define (make-queue) (let ((front-ptr … ) (rear-ptr … )) ⟨definitions of internal procedures⟩ (define (dispatch m) …) dispatch))``` Complete the definition of `make-queue` and provide implementations of the queue operations using this representation. Exercise 3.23: A deque (“double-ended queue”) is a sequence in which items can be inserted and deleted at either the front or the rear. 
Operations on deques are the constructor `make-deque`, the predicate `empty-deque?`, selectors `front-deque` and `rear-deque`, and mutators `front-insert-deque!`, `rear-insert-deque!`, `front-delete-deque!`, `rear-delete-deque!`. Show how to represent deques using pairs, and give implementations of the operations.151 All operations should be accomplished in $\mathrm{\Theta }\left(1\right)$ steps. #### 3.3.3Representing Tables When we studied various ways of representing sets in Chapter 2, we mentioned in 2.3.3 the task of maintaining a table of records indexed by identifying keys. In the implementation of data-directed programming in 2.4.3, we made extensive use of two-dimensional tables, in which information is stored and retrieved using two keys. Here we see how to build tables as mutable list structures. We first consider a one-dimensional table, in which each value is stored under a single key. We implement the table as a list of records, each of which is implemented as a pair consisting of a key and the associated value. The records are glued together to form a list by pairs whose `car`s point to successive records. These gluing pairs are called the backbone of the table. In order to have a place that we can change when we add a new record to the table, we build the table as a headed list. A headed list has a special backbone pair at the beginning, which holds a dummy “record”—in this case the arbitrarily chosen symbol `*table*`. Figure 3.22 shows the box-and-pointer diagram for the table ```a: 1 b: 2 c: 3``` To extract information from a table we use the `lookup` procedure, which takes a key as argument and returns the associated value (or false if there is no value stored under that key). `Lookup` is defined in terms of the `assoc` operation, which expects a key and a list of records as arguments. Note that `assoc` never sees the dummy record. 
`Assoc` returns the record that has the given key as its `car`.152 `Lookup` then checks to see that the resulting record returned by `assoc` is not false, and returns the value (the `cdr`) of the record. ```(define (lookup key table) (let ((record (assoc key (cdr table)))) (if record (cdr record) false))) (define (assoc key records) (cond ((null? records) false) ((equal? key (caar records)) (car records)) (else (assoc key (cdr records)))))``` To insert a value in a table under a specified key, we first use `assoc` to see if there is already a record in the table with this key. If not, we form a new record by `cons`ing the key with the value, and insert this at the head of the table’s list of records, after the dummy record. If there already is a record with this key, we set the `cdr` of this record to the designated new value. The header of the table provides us with a fixed location to modify in order to insert the new record.153 ```(define (insert! key value table) (let ((record (assoc key (cdr table)))) (if record (set-cdr! record value) (set-cdr! table (cons (cons key value) (cdr table))))) 'ok)``` To construct a new table, we simply create a list containing the symbol `*table*`: ```(define (make-table) (list '*table*))``` ##### Two-dimensional tables In a two-dimensional table, each value is indexed by two keys. We can construct such a table as a one-dimensional table in which each key identifies a subtable. Figure 3.23 shows the box-and-pointer diagram for the table ```math: +: 43 letters: a: 97 -: 45 b: 98 *: 42 ``` which has two subtables. (The subtables don’t need a special header symbol, since the key that identifies the subtable serves this purpose.) When we look up an item, we use the first key to identify the correct subtable. Then we use the second key to identify the record within the subtable. 
```(define (lookup key-1 key-2 table) (let ((subtable (assoc key-1 (cdr table)))) (if subtable (let ((record (assoc key-2 (cdr subtable)))) (if record (cdr record) false)) false)))``` To insert a new item under a pair of keys, we use `assoc` to see if there is a subtable stored under the first key. If not, we build a new subtable containing the single record (`key-2`, `value`) and insert it into the table under the first key. If a subtable already exists for the first key, we insert the new record into this subtable, using the insertion method for one-dimensional tables described above: ```(define (insert! key-1 key-2 value table) (let ((subtable (assoc key-1 (cdr table)))) (if subtable (let ((record (assoc key-2 (cdr subtable)))) (if record (set-cdr! record value) (set-cdr! subtable (cons (cons key-2 value) (cdr subtable))))) (set-cdr! table (cons (list key-1 (cons key-2 value)) (cdr table))))) 'ok)``` ##### Creating local tables The `lookup` and `insert!` operations defined above take the table as an argument. This enables us to use programs that access more than one table. Another way to deal with multiple tables is to have separate `lookup` and `insert!` procedures for each table. We can do this by representing a table procedurally, as an object that maintains an internal table as part of its local state. When sent an appropriate message, this “table object” supplies the procedure with which to operate on the internal table. Here is a generator for two-dimensional tables represented in this fashion: ```(define (make-table) (let ((local-table (list '*table*))) (define (lookup key-1 key-2) (let ((subtable (assoc key-1 (cdr local-table)))) (if subtable (let ((record (assoc key-2 (cdr subtable)))) (if record (cdr record) false)) false))) (define (insert! key-1 key-2 value) (let ((subtable (assoc key-1 (cdr local-table)))) (if subtable (let ((record (assoc key-2 (cdr subtable)))) (if record (set-cdr! record value) (set-cdr! 
subtable (cons (cons key-2 value) (cdr subtable))))) (set-cdr! local-table (cons (list key-1 (cons key-2 value)) (cdr local-table))))) 'ok) (define (dispatch m) (cond ((eq? m 'lookup-proc) lookup) ((eq? m 'insert-proc!) insert!) (else (error "Unknown operation: TABLE" m)))) dispatch))``` Using `make-table`, we could implement the `get` and `put` operations used in 2.4.3 for data-directed programming, as follows: ```(define operation-table (make-table)) (define get (operation-table 'lookup-proc)) (define put (operation-table 'insert-proc!))``` `Get` takes as arguments two keys, and `put` takes as arguments two keys and a value. Both operations access the same local table, which is encapsulated within the object created by the call to `make-table`. Exercise 3.24: In the table implementations above, the keys are tested for equality using `equal?` (called by `assoc`). This is not always the appropriate test. For instance, we might have a table with numeric keys in which we don’t need an exact match to the number we’re looking up, but only a number within some tolerance of it. Design a table constructor `make-table` that takes as an argument a `same-key?` procedure that will be used to test “equality” of keys. `Make-table` should return a `dispatch` procedure that can be used to access appropriate `lookup` and `insert!` procedures for a local table. Exercise 3.25: Generalizing one- and two-dimensional tables, show how to implement a table in which values are stored under an arbitrary number of keys and different values may be stored under different numbers of keys. The `lookup` and `insert!` procedures should take as input a list of keys used to access the table. Exercise 3.26: To search a table as implemented above, one needs to scan through the list of records. This is basically the unordered list representation of 2.3.3. For large tables, it may be more efficient to structure the table in a different manner. 
Describe a table implementation where the (key, value) records are organized using a binary tree, assuming that keys can be ordered in some way (e.g., numerically or alphabetically). (Compare Exercise 2.66 of Chapter 2.) Exercise 3.27: Memoization (also called tabulation) is a technique that enables a procedure to record, in a local table, values that have previously been computed. This technique can make a vast difference in the performance of a program. A memoized procedure maintains a table in which values of previous calls are stored using as keys the arguments that produced the values. When the memoized procedure is asked to compute a value, it first checks the table to see if the value is already there and, if so, just returns that value. Otherwise, it computes the new value in the ordinary way and stores this in the table. As an example of memoization, recall from 1.2.2 the exponential process for computing Fibonacci numbers:

```
(define (fib n)
  (cond ((= n 0) 0)
        ((= n 1) 1)
        (else (+ (fib (- n 1))
                 (fib (- n 2))))))
```

The memoized version of the same procedure is

```
(define memo-fib
  (memoize (lambda (n)
             (cond ((= n 0) 0)
                   ((= n 1) 1)
                   (else (+ (memo-fib (- n 1))
                            (memo-fib (- n 2))))))))
```

where the memoizer is defined as

```
(define (memoize f)
  (let ((table (make-table)))
    (lambda (x)
      (let ((previously-computed-result
             (lookup x table)))
        (or previously-computed-result
            (let ((result (f x)))
              (insert! x result table)
              result))))))
```

Draw an environment diagram to analyze the computation of `(memo-fib 3)`. Explain why `memo-fib` computes the $n^{\text{th}}$ Fibonacci number in a number of steps proportional to $n$. Would the scheme still work if we had simply defined `memo-fib` to be `(memoize fib)`? #### 3.3.4 A Simulator for Digital Circuits Designing complex digital systems, such as computers, is an important engineering activity. Digital systems are constructed by interconnecting simple elements.
Although the behavior of these individual elements is simple, networks of them can have very complex behavior. Computer simulation of proposed circuit designs is an important tool used by digital systems engineers. In this section we design a system for performing digital logic simulations. This system typifies a kind of program called an event-driven simulation, in which actions (“events”) trigger further events that happen at a later time, which in turn trigger more events, and so on. Our computational model of a circuit will be composed of objects that correspond to the elementary components from which the circuit is constructed. There are wires, which carry digital signals. A digital signal may at any moment have only one of two possible values, 0 and 1. There are also various types of digital function boxes, which connect wires carrying input signals to other output wires. Such boxes produce output signals computed from their input signals. The output signal is delayed by a time that depends on the type of the function box. For example, an inverter is a primitive function box that inverts its input. If the input signal to an inverter changes to 0, then one inverter-delay later the inverter will change its output signal to 1. If the input signal to an inverter changes to 1, then one inverter-delay later the inverter will change its output signal to 0. We draw an inverter symbolically as in Figure 3.24. An and-gate, also shown in figure 3.24, is a primitive function box with two inputs and one output. It drives its output signal to a value that is the logical and of the inputs. That is, if both of its input signals become 1, then one and-gate-delay time later the and-gate will force its output signal to be 1; otherwise the output will be 0. An or-gate is a similar two-input primitive function box that drives its output signal to a value that is the logical or of the inputs. 
That is, the output will become 1 if at least one of the input signals is 1; otherwise the output will become 0. We can connect primitive functions together to construct more complex functions. To accomplish this we wire the outputs of some function boxes to the inputs of other function boxes. For example, the half-adder circuit shown in Figure 3.25 consists of an or-gate, two and-gates, and an inverter. It takes two input signals, A and B, and has two output signals, S and C. S will become 1 whenever precisely one of A and B is 1, and C will become 1 whenever A and B are both 1. We can see from the figure that, because of the delays involved, the outputs may be generated at different times. Many of the difficulties in the design of digital circuits arise from this fact. We will now build a program for modeling the digital logic circuits we wish to study. The program will construct computational objects modeling the wires, which will “hold” the signals. Function boxes will be modeled by procedures that enforce the correct relationships among the signals. One basic element of our simulation will be a procedure `make-wire`, which constructs wires. For example, we can construct six wires as follows: ```(define a (make-wire)) (define b (make-wire)) (define c (make-wire)) (define d (make-wire)) (define e (make-wire)) (define s (make-wire))``` We attach a function box to a set of wires by calling a procedure that constructs that kind of box. The arguments to the constructor procedure are the wires to be attached to the box. 
For example, given that we can construct and-gates, or-gates, and inverters, we can wire together the half-adder shown in Figure 3.25: ```(or-gate a b d) ok (and-gate a b c) ok (inverter c e) ok (and-gate d e s) ok ``` Better yet, we can explicitly name this operation by defining a procedure `half-adder` that constructs this circuit, given the four external wires to be attached to the half-adder: ```(define (half-adder a b s c) (let ((d (make-wire)) (e (make-wire))) (or-gate a b d) (and-gate a b c) (inverter c e) (and-gate d e s) 'ok))``` The advantage of making this definition is that we can use `half-adder` itself as a building block in creating more complex circuits. Figure 3.26, for example, shows a full-adder composed of two half-adders and an or-gate.154 We can construct a full-adder as follows:

```
(define (full-adder a b c-in sum c-out)
  (let ((s (make-wire)) (c1 (make-wire)) (c2 (make-wire)))
    (half-adder b c-in s c1)
    (half-adder a s sum c2)
    (or-gate c1 c2 c-out)
    'ok))
```

Having defined `full-adder` as a procedure, we can now use it as a building block for creating still more complex circuits. (For example, see Exercise 3.30.) In essence, our simulator provides us with the tools to construct a language of circuits. If we adopt the general perspective on languages with which we approached the study of Lisp in 1.1, we can say that the primitive function boxes form the primitive elements of the language, that wiring boxes together provides a means of combination, and that specifying wiring patterns as procedures serves as a means of abstraction. ##### Primitive function boxes The primitive function boxes implement the “forces” by which a change in the signal on one wire influences the signals on other wires. To build function boxes, we use the following operations on wires: • `(get-signal ⟨wire⟩)` returns the current value of the signal on the wire. • `(set-signal! ⟨wire⟩ ⟨new value⟩)` changes the value of the signal on the wire to the new value. • `(add-action!
⟨wire⟩ ⟨procedure of no arguments⟩)` asserts that the designated procedure should be run whenever the signal on the wire changes value. Such procedures are the vehicles by which changes in the signal value on the wire are communicated to other wires. In addition, we will make use of a procedure `after-delay` that takes a time delay and a procedure to be run and executes the given procedure after the given delay. Using these procedures, we can define the primitive digital logic functions. To connect an input to an output through an inverter, we use `add-action!` to associate with the input wire a procedure that will be run whenever the signal on the input wire changes value. The procedure computes the `logical-not` of the input signal, and then, after one `inverter-delay`, sets the output signal to be this new value:

```
(define (inverter input output)
  (define (invert-input)
    (let ((new-value (logical-not (get-signal input))))
      (after-delay inverter-delay
                   (lambda () (set-signal! output new-value)))))
  (add-action! input invert-input) 'ok)

(define (logical-not s)
  (cond ((= s 0) 1)
        ((= s 1) 0)
        (else (error "Invalid signal" s))))
```

An and-gate is a little more complex. The action procedure must be run if either of the inputs to the gate changes. It computes the `logical-and` (using a procedure analogous to `logical-not`) of the values of the signals on the input wires and sets up a change to the new value to occur on the output wire after one `and-gate-delay`.

```
(define (and-gate a1 a2 output)
  (define (and-action-procedure)
    (let ((new-value
           (logical-and (get-signal a1) (get-signal a2))))
      (after-delay and-gate-delay
                   (lambda () (set-signal! output new-value)))))
  (add-action! a1 and-action-procedure)
  (add-action! a2 and-action-procedure)
  'ok)
```

Exercise 3.28: Define an or-gate as a primitive function box. Your `or-gate` constructor should be similar to `and-gate`. Exercise 3.29: Another way to construct an or-gate is as a compound digital logic device, built from and-gates and inverters. Define a procedure `or-gate` that accomplishes this.
What is the delay time of the or-gate in terms of `and-gate-delay` and `inverter-delay`? Exercise 3.30: Figure 3.27 shows a ripple-carry adder formed by stringing together $n$ full-adders. This is the simplest form of parallel adder for adding two $n$-bit binary numbers. The inputs ${A}_{1}$, ${A}_{2}$, ${A}_{3}$, …, ${A}_{n}$ and ${B}_{1}$, ${B}_{2}$, ${B}_{3}$, …, ${B}_{n}$ are the two binary numbers to be added (each ${A}_{k}$ and ${B}_{k}$ is a 0 or a 1). The circuit generates ${S}_{1}$, ${S}_{2}$, ${S}_{3}$, …, ${S}_{n}$, the $n$ bits of the sum, and $C$, the carry from the addition. Write a procedure `ripple-carry-adder` that generates this circuit. The procedure should take as arguments three lists of $n$ wires each—the ${A}_{k}$, the ${B}_{k}$, and the ${S}_{k}$—and also another wire $C$. The major drawback of the ripple-carry adder is the need to wait for the carry signals to propagate. What is the delay needed to obtain the complete output from an $n$-bit ripple-carry adder, expressed in terms of the delays for and-gates, or-gates, and inverters? ##### Representing wires A wire in our simulation will be a computational object with two local state variables: a `signal-value` (initially taken to be 0) and a collection of `action-procedures` to be run when the signal changes value. We implement the wire, using message-passing style, as a collection of local procedures together with a `dispatch` procedure that selects the appropriate local operation, just as we did with the simple bank-account object in 3.1.1: ```(define (make-wire) (let ((signal-value 0) (action-procedures '())) (define (set-my-signal! new-value) (if (not (= signal-value new-value)) (begin (set! signal-value new-value) (call-each action-procedures)) 'done)) (define (accept-action-procedure! proc) (set! action-procedures (cons proc action-procedures)) (proc)) (define (dispatch m) (cond ((eq? m 'get-signal) signal-value) ((eq? m 'set-signal!) set-my-signal!) ((eq? m 'add-action!) accept-action-procedure!)
(else (error "Unknown operation: WIRE" m)))) dispatch))``` The local procedure `set-my-signal!` tests whether the new signal value changes the signal on the wire. If so, it runs each of the action procedures, using the following procedure `call-each`, which calls each of the items in a list of no-argument procedures: ```(define (call-each procedures) (if (null? procedures) 'done (begin ((car procedures)) (call-each (cdr procedures)))))``` The local procedure `accept-action-procedure!` adds the given procedure to the list of procedures to be run, and then runs the new procedure once. (See Exercise 3.31.) With the local `dispatch` procedure set up as specified, we can provide the following procedures to access the local operations on wires:155

```
(define (get-signal wire) (wire 'get-signal))
(define (set-signal! wire new-value)
  ((wire 'set-signal!) new-value))
(define (add-action! wire action-procedure)
  ((wire 'add-action!) action-procedure))
```

Wires, which have time-varying signals and may be incrementally attached to devices, are typical of mutable objects. We have modeled them as procedures with local state variables that are modified by assignment. When a new wire is created, a new set of state variables is allocated (by the `let` expression in `make-wire`) and a new `dispatch` procedure is constructed and returned, capturing the environment with the new state variables. The wires are shared among the various devices that have been connected to them. Thus, a change made by an interaction with one device will affect all the other devices attached to the wire. The wire communicates the change to its neighbors by calling the action procedures provided to it when the connections were established. ##### The agenda The only thing needed to complete the simulator is `after-delay`. The idea here is that we maintain a data structure, called an agenda, that contains a schedule of things to do. The following operations are defined for agendas: • `(make-agenda)` returns a new empty agenda. • `(empty-agenda? ⟨agenda⟩)` is true if the specified agenda is empty.
• `(first-agenda-item ⟨agenda⟩)` returns the first item on the agenda. • `(remove-first-agenda-item! ⟨agenda⟩)` modifies the agenda by removing the first item. • `(add-to-agenda! ⟨time⟩ ⟨action⟩ ⟨agenda⟩)` modifies the agenda by adding the given action procedure to be run at the specified time. • `(current-time ⟨agenda⟩)` returns the current simulation time. The particular agenda that we use is denoted by `the-agenda`. The procedure `after-delay` adds new elements to `the-agenda`:

```
(define (after-delay delay action)
  (add-to-agenda! (+ delay (current-time the-agenda))
                  action
                  the-agenda))
```

The simulation is driven by the procedure `propagate`, which operates on `the-agenda`, executing each procedure on the agenda in sequence. In general, as the simulation runs, new items will be added to the agenda, and `propagate` will continue the simulation as long as there are items on the agenda: ```(define (propagate) (if (empty-agenda? the-agenda) 'done (let ((first-item (first-agenda-item the-agenda))) (first-item) (remove-first-agenda-item! the-agenda) (propagate))))``` ##### A sample simulation The following procedure, which places a “probe” on a wire, shows the simulator in action.
The probe tells the wire that, whenever its signal changes value, it should print the new signal value, together with the current time and a name that identifies the wire:

```
(define (probe name wire)
  (add-action! wire
               (lambda ()
                 (newline)
                 (display name) (display " ")
                 (display (current-time the-agenda))
                 (display " New-value = ")
                 (display (get-signal wire)))))
```

We begin by initializing the agenda and specifying delays for the primitive function boxes: ```(define the-agenda (make-agenda)) (define inverter-delay 2) (define and-gate-delay 3) (define or-gate-delay 5)``` Now we define four wires, placing probes on two of them: ```(define input-1 (make-wire)) (define input-2 (make-wire)) (define sum (make-wire)) (define carry (make-wire)) (probe 'sum sum) sum 0 New-value = 0 (probe 'carry carry) carry 0 New-value = 0 ``` Next we connect the wires in a half-adder circuit (as in Figure 3.25), set the signal on `input-1` to 1, and run the simulation: ```(half-adder input-1 input-2 sum carry) ok (set-signal! input-1 1) done (propagate) sum 8 New-value = 1 done ``` The `sum` signal changes to 1 at time 8. We are now eight time units from the beginning of the simulation. At this point, we can set the signal on `input-2` to 1 and allow the values to propagate: ```(set-signal! input-2 1) done (propagate) carry 11 New-value = 1 sum 16 New-value = 0 done ``` The `carry` changes to 1 at time 11 and the `sum` changes to 0 at time 16. Exercise 3.31: The internal procedure `accept-action-procedure!` defined in `make-wire` specifies that when a new action procedure is added to a wire, the procedure is immediately run. Explain why this initialization is necessary. In particular, trace through the half-adder example in the paragraphs above and say how the system’s response would differ if we had defined `accept-action-procedure!` as ```(define (accept-action-procedure! proc) (set!
action-procedures (cons proc action-procedures)))``` ##### Implementing the agenda Finally, we give details of the agenda data structure, which holds the procedures that are scheduled for future execution. The agenda is made up of time segments. Each time segment is a pair consisting of a number (the time) and a queue (see Exercise 3.32) that holds the procedures that are scheduled to be run during that time segment. ```(define (make-time-segment time queue) (cons time queue)) (define (segment-time s) (car s)) (define (segment-queue s) (cdr s))``` We will operate on the time-segment queues using the queue operations described in 3.3.2. The agenda itself is a one-dimensional table of time segments. It differs from the tables described in 3.3.3 in that the segments will be sorted in order of increasing time. In addition, we store the current time (i.e., the time of the last action that was processed) at the head of the agenda. A newly constructed agenda has no time segments and has a current time of 0:156 ```(define (make-agenda) (list 0)) (define (current-time agenda) (car agenda)) (define (set-current-time! agenda time) (set-car! agenda time)) (define (segments agenda) (cdr agenda)) (define (set-segments! agenda segments) (set-cdr! agenda segments)) (define (first-segment agenda) (car (segments agenda))) (define (rest-segments agenda) (cdr (segments agenda)))``` An agenda is empty if it has no time segments: ```(define (empty-agenda? agenda) (null? (segments agenda)))``` To add an action to an agenda, we first check if the agenda is empty. If so, we create a time segment for the action and install this in the agenda. Otherwise, we scan the agenda, examining the time of each segment. If we find a segment for our appointed time, we add the action to the associated queue. If we reach a time later than the one to which we are appointed, we insert a new time segment into the agenda just before it. 
If we reach the end of the agenda, we must create a new time segment at the end.

```
(define (add-to-agenda! time action agenda)
  (define (belongs-before? segments)
    (or (null? segments)
        (< time (segment-time (car segments)))))
  (define (make-new-time-segment time action)
    (let ((q (make-queue)))
      (insert-queue! q action)
      (make-time-segment time q)))
  (define (add-to-segments! segments)
    (if (= (segment-time (car segments)) time)
        (insert-queue! (segment-queue (car segments))
                       action)
        (let ((rest (cdr segments)))
          (if (belongs-before? rest)
              (set-cdr!
               segments
               (cons (make-new-time-segment time action)
                     (cdr segments)))
              (add-to-segments! rest)))))
  (let ((segments (segments agenda)))
    (if (belongs-before? segments)
        (set-segments!
         agenda
         (cons (make-new-time-segment time action)
               segments))
        (add-to-segments! segments))))
```

The procedure that removes the first item from the agenda deletes the item at the front of the queue in the first time segment. If this deletion makes the time segment empty, we remove it from the list of segments:157 ```(define (remove-first-agenda-item! agenda) (let ((q (segment-queue (first-segment agenda)))) (delete-queue! q) (if (empty-queue? q) (set-segments! agenda (rest-segments agenda)))))``` The first agenda item is found at the head of the queue in the first time segment. Whenever we extract an item, we also update the current time:158 ```(define (first-agenda-item agenda) (if (empty-agenda? agenda) (error "Agenda is empty: FIRST-AGENDA-ITEM") (let ((first-seg (first-segment agenda))) (set-current-time! agenda (segment-time first-seg)) (front-queue (segment-queue first-seg)))))``` Exercise 3.32: The procedures to be run during each time segment of the agenda are kept in a queue. Thus, the procedures for each segment are called in the order in which they were added to the agenda (first in, first out). Explain why this order must be used.
In particular, trace the behavior of an and-gate whose inputs change from 0, 1 to 1, 0 in the same segment and say how the behavior would differ if we stored a segment’s procedures in an ordinary list, adding and removing procedures only at the front (last in, first out). #### 3.3.5 Propagation of Constraints Computer programs are traditionally organized as one-directional computations, which perform operations on prespecified arguments to produce desired outputs. On the other hand, we often model systems in terms of relations among quantities. For example, a mathematical model of a mechanical structure might include the information that the deflection $d$ of a metal rod is related to the force $F$ on the rod, the length $L$ of the rod, the cross-sectional area $A$, and the elastic modulus $E$ via the equation $dAE = FL.$ Such an equation is not one-directional. Given any four of the quantities, we can use it to compute the fifth. Yet translating the equation into a traditional computer language would force us to choose one of the quantities to be computed in terms of the other four. Thus, a procedure for computing the area $A$ could not be used to compute the deflection $d$, even though the computations of $A$ and $d$ arise from the same equation.159 In this section, we sketch the design of a language that enables us to work in terms of relations themselves. The primitive elements of the language are primitive constraints, which state that certain relations hold between quantities. For example, `(adder a b c)` specifies that the quantities $a$, $b$, and $c$ must be related by the equation $a+b=c$, `(multiplier x y z)` expresses the constraint $xy=z$, and `(constant 3.14 x)` says that the value of $x$ must be 3.14. Our language provides a means of combining primitive constraints in order to express more complex relations.
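The one-directionality being contrasted here can be made concrete. In a conventional language, each way of using the relation $dAE = FL$ requires its own procedure; the following Python sketch (an illustration only, with function names of our choosing) shows two of the five possible directions:

```python
# Two one-directional procedures derived from the single relation
# d * A * E = F * L.  Each unknown needs its own procedure; a constraint
# network instead states the relation once and runs it in any direction.

def deflection(F, L, A, E):
    # Solve d*A*E = F*L for the deflection d.
    return (F * L) / (A * E)

def area(F, L, d, E):
    # Solve the same relation for the cross-sectional area A.
    return (F * L) / (d * E)
```

A third, fourth, and fifth procedure would be needed for $F$, $L$, and $E$; the constraint language sketched below avoids this duplication.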
We combine constraints by constructing constraint networks, in which constraints are joined by connectors. A connector is an object that “holds” a value that may participate in one or more constraints. For example, we know that the relationship between Fahrenheit and Celsius temperatures is $9C = 5(F - 32).$ Such a constraint can be thought of as a network consisting of primitive adder, multiplier, and constant constraints (Figure 3.28). In the figure, we see on the left a multiplier box with three terminals, labeled $m1$, $m2$, and $p$. These connect the multiplier to the rest of the network as follows: The $m1$ terminal is linked to a connector $C$, which will hold the Celsius temperature. The $m2$ terminal is linked to a connector $w$, which is also linked to a constant box that holds 9. The $p$ terminal, which the multiplier box constrains to be the product of $m1$ and $m2$, is linked to the $p$ terminal of another multiplier box, whose $m2$ is connected to a constant 5 and whose $m1$ is connected to one of the terms in a sum. Computation by such a network proceeds as follows: When a connector is given a value (by the user or by a constraint box to which it is linked), it awakens all of its associated constraints (except for the constraint that just awakened it) to inform them that it has a value. Each awakened constraint box then polls its connectors to see if there is enough information to determine a value for a connector. If so, the box sets that connector, which then awakens all of its associated constraints, and so on. For instance, in conversion between Celsius and Fahrenheit, $w$, $x$, and $y$ are immediately set by the constant boxes to 9, 5, and 32, respectively. The connectors awaken the multipliers and the adder, which determine that there is not enough information to proceed.
If the user (or some other part of the network) sets $C$ to a value (say 25), the leftmost multiplier will be awakened, and it will set $u$ to $25\cdot 9=225$. Then $u$ awakens the second multiplier, which sets $v$ to 45, and $v$ awakens the adder, which sets $f$ to 77. ##### Using the constraint system To use the constraint system to carry out the temperature computation outlined above, we first create two connectors, `C` and `F`, by calling the constructor `make-connector`, and link `C` and `F` in an appropriate network: ```(define C (make-connector)) (define F (make-connector)) (celsius-fahrenheit-converter C F) ok ``` The procedure that creates the network is defined as follows:

```
(define (celsius-fahrenheit-converter c f)
  (let ((u (make-connector)) (v (make-connector))
        (w (make-connector)) (x (make-connector))
        (y (make-connector)))
    (multiplier c w u)
    (multiplier v x u)
    (adder v y f)
    (constant 9 w)
    (constant 5 x)
    (constant 32 y)
    'ok))
```

This procedure creates the internal connectors `u`, `v`, `w`, `x`, and `y`, and links them as shown in Figure 3.28 using the primitive constraint constructors `adder`, `multiplier`, and `constant`. Just as with the digital-circuit simulator of 3.3.4, expressing these combinations of primitive elements in terms of procedures automatically provides our language with a means of abstraction for compound objects. To watch the network in action, we can place probes on the connectors `C` and `F`, using a `probe` procedure similar to the one we used to monitor wires in 3.3.4. Placing a probe on a connector will cause a message to be printed whenever the connector is given a value: ```(probe "Celsius temp" C) (probe "Fahrenheit temp" F)``` Next we set the value of `C` to 25. (The third argument to `set-value!` tells `C` that this directive comes from the `user`.) ```(set-value! C 25 'user) Probe: Celsius temp = 25 Probe: Fahrenheit temp = 77 done ``` The probe on `C` awakens and reports the value.
`C` also propagates its value through the network as described above. This sets `F` to 77, which is reported by the probe on `F`. Now we can try to set `F` to a new value, say 212: ```(set-value! F 212 'user) ``` The connector complains that it has sensed a contradiction: Its value is 77, and someone is trying to set it to 212. If we really want to reuse the network with new values, we can tell `C` to forget its old value: ```(forget-value! C 'user) Probe: Celsius temp = ? Probe: Fahrenheit temp = ? done ``` `C` finds that the `user`, who set its value originally, is now retracting that value, so `C` agrees to lose its value, as shown by the probe, and informs the rest of the network of this fact. This information eventually propagates to `F`, which now finds that it has no reason for continuing to believe that its own value is 77. Thus, `F` also gives up its value, as shown by the probe. Now that `F` has no value, we are free to set it to 212: ```(set-value! F 212 'user) Probe: Fahrenheit temp = 212 Probe: Celsius temp = 100 done ``` This new value, when propagated through the network, forces `C` to have a value of 100, and this is registered by the probe on `C`. Notice that the very same network is being used to compute `C` given `F` and to compute `F` given `C`. This nondirectionality of computation is the distinguishing feature of constraint-based systems. ##### Implementing the constraint system The constraint system is implemented via procedural objects with local state, in a manner very similar to the digital-circuit simulator of 3.3.4. Although the primitive objects of the constraint system are somewhat more complex, the overall system is simpler, since there is no concern about agendas and logic delays. The basic operations on connectors are the following: • `(has-value? ⟨connector⟩)` tells whether the connector has a value. • `(get-value ⟨connector⟩)` returns the connector’s current value. • `(set-value! 
⟨connector⟩ ⟨new-value⟩ ⟨informant⟩)` indicates that the informant is requesting the connector to set its value to the new value. • `(forget-value! ⟨connector⟩ ⟨retractor⟩)` tells the connector that the retractor is requesting it to forget its value. • `(connect ⟨connector⟩ ⟨new-constraint⟩)` tells the connector to participate in the new constraint. The connectors communicate with the constraints by means of the procedures `inform-about-value`, which tells the given constraint that the connector has a value, and `inform-about-no-value`, which tells the constraint that the connector has lost its value. `Adder` constructs an adder constraint among summand connectors `a1` and `a2` and a `sum` connector. An adder is implemented as a procedure with local state (the procedure `me` below):

```
(define (adder a1 a2 sum)
  (define (process-new-value)
    (cond ((and (has-value? a1) (has-value? a2))
           (set-value! sum
                       (+ (get-value a1) (get-value a2))
                       me))
          ((and (has-value? a1) (has-value? sum))
           (set-value! a2
                       (- (get-value sum) (get-value a1))
                       me))
          ((and (has-value? a2) (has-value? sum))
           (set-value! a1
                       (- (get-value sum) (get-value a2))
                       me))))
  (define (process-forget-value)
    (forget-value! sum me)
    (forget-value! a1 me)
    (forget-value! a2 me)
    (process-new-value))
  (define (me request)
    (cond ((eq? request 'I-have-a-value) (process-new-value))
          ((eq? request 'I-lost-my-value) (process-forget-value))
          (else (error "Unknown request: ADDER" request))))
  (connect a1 me)
  (connect a2 me)
  (connect sum me)
  me)
```

`Adder` connects the new adder to the designated connectors and returns it as its value. The procedure `me`, which represents the adder, acts as a dispatch to the local procedures.
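The case analysis in `process-new-value` amounts to: given values on any two of the adder's three terminals, deduce the third. That inference, stripped of the connector machinery, can be condensed into a few lines (a Python sketch of ours, with `None` standing for "no value"):

```python
# Mirrors the adder's process-new-value: from any two known terminals
# of a + b = total, deduce the third; None means "no value yet".

def adder_step(a, b, total):
    if a is not None and b is not None:
        return a, b, a + b            # enough information to set the sum
    if a is not None and total is not None:
        return a, total - a, total    # set the second addend
    if b is not None and total is not None:
        return total - b, b, total    # set the first addend
    return a, b, total                # not enough information yet
```

For example, `adder_step(3, None, 7)` fills in the missing addend, giving `(3, 4, 7)`; with only one terminal known, the arguments come back unchanged, just as an awakened adder that polls its connectors and finds too little information does nothing.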
The following “syntax interfaces” (see Footnote 155 in 3.3.4) are used in conjunction with the dispatch:

```
(define (inform-about-value constraint)
  (constraint 'I-have-a-value))
(define (inform-about-no-value constraint)
  (constraint 'I-lost-my-value))
```

The adder’s local procedure `process-new-value` is called when the adder is informed that one of its connectors has a value. The adder first checks to see if both `a1` and `a2` have values. If so, it tells `sum` to set its value to the sum of the two addends. The `informant` argument to `set-value!` is `me`, which is the adder object itself. If `a1` and `a2` do not both have values, then the adder checks to see if perhaps `a1` and `sum` have values. If so, it sets `a2` to the difference of these two. Finally, if `a2` and `sum` have values, this gives the adder enough information to set `a1`. If the adder is told that one of its connectors has lost a value, it requests that all of its connectors now lose their values. (Only those values that were set by this adder are actually lost.) Then it runs `process-new-value`. The reason for this last step is that one or more connectors may still have a value (that is, a connector may have had a value that was not originally set by the adder), and these values may need to be propagated back through the adder.

A multiplier is very similar to an adder. It will set its `product` to 0 if either of the factors is 0, even if the other factor is not known.

```
(define (multiplier m1 m2 product)
  (define (process-new-value)
    (cond ((or (and (has-value? m1) (= (get-value m1) 0))
               (and (has-value? m2) (= (get-value m2) 0)))
           (set-value! product 0 me))
          ((and (has-value? m1) (has-value? m2))
           (set-value! product
                       (* (get-value m1) (get-value m2))
                       me))
          ((and (has-value? product) (has-value? m1))
           (set-value! m2
                       (/ (get-value product) (get-value m1))
                       me))
          ((and (has-value? product) (has-value? m2))
           (set-value! m1
                       (/ (get-value product) (get-value m2))
                       me))))
  (define (process-forget-value)
    (forget-value! product me)
    (forget-value! m1 me)
    (forget-value! m2 me)
    (process-new-value))
  (define (me request)
    (cond ((eq? request 'I-have-a-value) (process-new-value))
          ((eq? request 'I-lost-my-value) (process-forget-value))
          (else (error "Unknown request: MULTIPLIER" request))))
  (connect m1 me)
  (connect m2 me)
  (connect product me)
  me)
```

A `constant` constructor simply sets the value of the designated connector. Any `I-have-a-value` or `I-lost-my-value` message sent to the constant box will produce an error.

```
(define (constant value connector)
  (define (me request)
    (error "Unknown request: CONSTANT" request))
  (connect connector me)
  (set-value! connector value me)
  me)
```

Finally, a probe prints a message about the setting or unsetting of the designated connector:

```
(define (probe name connector)
  (define (print-probe value)
    (newline) (display "Probe: ")
    (display name) (display " = ") (display value))
  (define (process-new-value)
    (print-probe (get-value connector)))
  (define (process-forget-value)
    (print-probe "?"))
  (define (me request)
    (cond ((eq? request 'I-have-a-value) (process-new-value))
          ((eq? request 'I-lost-my-value) (process-forget-value))
          (else (error "Unknown request: PROBE" request))))
  (connect connector me)
  me)
```

##### Representing connectors

A connector is represented as a procedural object with local state variables `value`, the current value of the connector; `informant`, the object that set the connector’s value; and `constraints`, a list of the constraints in which the connector participates.

```
(define (make-connector)
  (let ((value false) (informant false) (constraints '()))
    (define (set-my-value newval setter)
      (cond ((not (has-value? me))
             (set! value newval)
             (set! informant setter)
             (for-each-except setter
                              inform-about-value
                              constraints))
            ((not (= value newval))
             (error "Contradiction" (list value newval)))
            (else 'ignored)))
    (define (forget-my-value retractor)
      (if (eq? retractor informant)
          (begin (set! informant false)
                 (for-each-except retractor
                                  inform-about-no-value
                                  constraints))
          'ignored))
    (define (connect new-constraint)
      (if (not (memq new-constraint constraints))
          (set! constraints
                (cons new-constraint constraints)))
      (if (has-value? me)
          (inform-about-value new-constraint))
      'done)
    (define (me request)
      (cond ((eq? request 'has-value?)
             (if informant true false))
            ((eq? request 'value) value)
            ((eq? request 'set-value!) set-my-value)
            ((eq? request 'forget) forget-my-value)
            ((eq? request 'connect) connect)
            (else (error "Unknown operation: CONNECTOR" request))))
    me))
```

The connector’s local procedure `set-my-value` is called when there is a request to set the connector’s value. If the connector does not currently have a value, it will set its value and remember as `informant` the constraint that requested the value to be set.160 Then the connector will notify all of its participating constraints except the constraint that requested the value to be set. This is accomplished using the following iterator, which applies a designated procedure to all items in a list except a given one:

```
(define (for-each-except exception procedure list)
  (define (loop items)
    (cond ((null? items) 'done)
          ((eq? (car items) exception) (loop (cdr items)))
          (else (procedure (car items))
                (loop (cdr items)))))
  (loop list))
```

If a connector is asked to forget its value, it runs the local procedure `forget-my-value`, which first checks to make sure that the request is coming from the same object that set the value originally. If so, the connector informs its associated constraints about the loss of the value.

The local procedure `connect` adds the designated new constraint to the list of constraints if it is not already in that list. Then, if the connector has a value, it informs the new constraint of this fact.

The connector’s procedure `me` serves as a dispatch to the other internal procedures and also represents the connector as an object. The following procedures provide a syntax interface for the dispatch:

```
(define (has-value? connector)
  (connector 'has-value?))
(define (get-value connector)
  (connector 'value))
(define (set-value! connector new-value informant)
  ((connector 'set-value!) new-value informant))
(define (forget-value! connector retractor)
  ((connector 'forget) retractor))
(define (connect connector new-constraint)
  ((connector 'connect) new-constraint))
```

Exercise 3.33: Using primitive multiplier, adder, and constant constraints, define a procedure `averager` that takes three connectors `a`, `b`, and `c` as inputs and establishes the constraint that the value of `c` is the average of the values of `a` and `b`.

Exercise 3.34: Louis Reasoner wants to build a squarer, a constraint device with two terminals such that the value of connector `b` on the second terminal will always be the square of the value `a` on the first terminal. He proposes the following simple device made from a multiplier: `(define (squarer a b) (multiplier a a b))` There is a serious flaw in this idea. Explain.

Exercise 3.35: Ben Bitdiddle tells Louis that one way to avoid the trouble in Exercise 3.34 is to define a squarer as a new primitive constraint. Fill in the missing portions in Ben’s outline for a procedure to implement such a constraint:

```
(define (squarer a b)
  (define (process-new-value)
    (if (has-value? b)
        (if (< (get-value b) 0)
            (error "square less than 0: SQUARER"
                   (get-value b))
            ⟨alternative1⟩)
        ⟨alternative2⟩))
  (define (process-forget-value) ⟨body1⟩)
  (define (me request) ⟨body2⟩)
  ⟨rest of definition⟩
  me)
```

Exercise 3.36: Suppose we evaluate the following sequence of expressions in the global environment:

```
(define a (make-connector))
(define b (make-connector))
(set-value! a 10 'user)
```

At some time during evaluation of the `set-value!`, the following expression from the connector’s local procedure is evaluated:

```
(for-each-except setter
                 inform-about-value
                 constraints)
```

Draw an environment diagram showing the environment in which the above expression is evaluated.
Exercise 3.37: The `celsius-fahrenheit-converter` procedure is cumbersome when compared with a more expression-oriented style of definition, such as

```
(define (celsius-fahrenheit-converter x)
  (c+ (c* (c/ (cv 9) (cv 5))
          x)
      (cv 32)))
(define C (make-connector))
(define F (celsius-fahrenheit-converter C))
```

Here `c+`, `c*`, etc. are the “constraint” versions of the arithmetic operations. For example, `c+` takes two connectors as arguments and returns a connector that is related to these by an adder constraint:

```
(define (c+ x y)
  (let ((z (make-connector)))
    (adder x y z)
    z))
```

Define analogous procedures `c-`, `c*`, `c/`, and `cv` (constant value) that enable us to define compound constraints as in the converter example above.161

#### Footnotes

144 `Set-car!` and `set-cdr!` return implementation-dependent values. Like `set!`, they should be used only for their effect.

145 We see from this that mutation operations on lists can create “garbage” that is not part of any accessible structure. We will see in 5.3.2 that Lisp memory-management systems include a garbage collector, which identifies and recycles the memory space used by unneeded pairs.

146 `Get-new-pair` is one of the operations that must be implemented as part of the memory management required by a Lisp implementation. We will discuss this in 5.3.1.

147 The two pairs are distinct because each call to `cons` returns a new pair. The symbols are shared; in Scheme there is a unique symbol with any given name. Since Scheme provides no way to mutate a symbol, this sharing is undetectable. Note also that the sharing is what enables us to compare symbols using `eq?`, which simply checks equality of pointers.

148 The subtleties of dealing with sharing of mutable data objects reflect the underlying issues of “sameness” and “change” that were raised in 3.1.3.
We mentioned there that admitting change to our language requires that a compound object must have an “identity” that is something different from the pieces from which it is composed. In Lisp, we consider this “identity” to be the quality that is tested by `eq?`, i.e., by equality of pointers. Since in most Lisp implementations a pointer is essentially a memory address, we are “solving the problem” of defining the identity of objects by stipulating that a data object “itself” is the information stored in some particular set of memory locations in the computer. This suffices for simple Lisp programs, but is hardly a general way to resolve the issue of “sameness” in computational models. 149 On the other hand, from the viewpoint of implementation, assignment requires us to modify the environment, which is itself a mutable data structure. Thus, assignment and mutation are equipotent: Each can be implemented in terms of the other. 150 If the first item is the final item in the queue, the front pointer will be the empty list after the deletion, which will mark the queue as empty; we needn’t worry about updating the rear pointer, which will still point to the deleted item, because `empty-queue?` looks only at the front pointer. 151 Be careful not to make the interpreter try to print a structure that contains cycles. (See Exercise 3.13.) 152 Because `assoc` uses `equal?`, it can recognize keys that are symbols, numbers, or list structure. 153 Thus, the first backbone pair is the object that represents the table “itself”; that is, a pointer to the table is a pointer to this pair. This same backbone pair always starts the table. If we did not arrange things in this way, `insert!` would have to return a new value for the start of the table when it added a new record. 154 A full-adder is a basic circuit element used in adding two binary numbers. 
Here A and B are the bits at corresponding positions in the two numbers to be added, and $C_{in}$ is the carry bit from the addition one place to the right. The circuit generates SUM, which is the sum bit in the corresponding position, and $C_{out}$, which is the carry bit to be propagated to the left.

155 These procedures are simply syntactic sugar that allow us to use ordinary procedural syntax to access the local procedures of objects. It is striking that we can interchange the role of “procedures” and “data” in such a simple way. For example, if we write `(wire 'get-signal)` we think of `wire` as a procedure that is called with the message `get-signal` as input. Alternatively, writing `(get-signal wire)` encourages us to think of `wire` as a data object that is the input to a procedure `get-signal`. The truth of the matter is that, in a language in which we can deal with procedures as objects, there is no fundamental difference between “procedures” and “data,” and we can choose our syntactic sugar to allow us to program in whatever style we choose.

156 The agenda is a headed list, like the tables in 3.3.3, but since the list is headed by the time, we do not need an additional dummy header (such as the `*table*` symbol used with tables).

157 Observe that the `if` expression in this procedure has no `⟨`alternative`⟩` expression. Such a “one-armed `if` statement” is used to decide whether to do something, rather than to select between two expressions. An `if` expression returns an unspecified value if the predicate is false and there is no `⟨`alternative`⟩`.

158 In this way, the current time will always be the time of the action most recently processed. Storing this time at the head of the agenda ensures that it will still be available even if the associated time segment has been deleted.
159 Constraint propagation first appeared in the incredibly forward-looking SKETCHPAD system of Ivan Sutherland (1963). A beautiful constraint-propagation system based on the Smalltalk language was developed by Alan Borning (1977) at Xerox Palo Alto Research Center. Sussman, Stallman, and Steele applied constraint propagation to electrical circuit analysis (Sussman and Stallman 1975; Sussman and Steele 1980). TK!Solver (Konopasek and Jayaraman 1984) is an extensive modeling environment based on constraints.

160 The `setter` might not be a constraint. In our temperature example, we used `user` as the `setter`.

161 The expression-oriented format is convenient because it avoids the need to name the intermediate expressions in a computation. Our original formulation of the constraint language is cumbersome in the same way that many languages are cumbersome when dealing with operations on compound data. For example, if we wanted to compute the product $(a+b)\cdot(c+d)$, where the variables represent vectors, we could work in “imperative style,” using procedures that set the values of designated vector arguments but do not themselves return vectors as values:

```
(v-sum a b temp1)
(v-sum c d temp2)
(v-prod temp1 temp2 answer)
```

Alternatively, we could deal with expressions, using procedures that return vectors as values, and thus avoid explicitly mentioning `temp1` and `temp2`:

```
(define answer
  (v-prod (v-sum a b)
          (v-sum c d)))
```

Since Lisp allows us to return compound objects as values of procedures, we can transform our imperative-style constraint language into an expression-oriented style as shown in this exercise. In languages that are impoverished in handling compound objects, such as Algol, Basic, and Pascal (unless one explicitly uses Pascal pointer variables), one is usually stuck with the imperative style when manipulating compound objects.
Given the advantage of the expression-oriented format, one might ask if there is any reason to have implemented the system in imperative style, as we did in this section. One reason is that the non-expression-oriented constraint language provides a handle on constraint objects (e.g., the value of the `adder` procedure) as well as on connector objects. This is useful if we wish to extend the system with new operations that communicate with constraints directly rather than only indirectly via operations on connectors. Although it is easy to implement the expression-oriented style in terms of the imperative implementation, it is very difficult to do the converse.
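To illustrate the expression-oriented style discussed above, here is a small self-contained Python sketch (the names `c_add`, `c_mul`, `c_div`, `cv` are my own, and propagation is deliberately simplified to the forward direction only, unlike the bidirectional system of this section). Each combinator allocates a fresh connector, wires in a constraint, and returns the connector, so networks compose like ordinary arithmetic expressions:

```python
class Connector:
    """A bare cell: a value plus the constraints listening to it."""
    def __init__(self):
        self.value, self.constraints = None, []
    def set(self, v):
        self.value = v
        for c in self.constraints:
            c.propagate()

class BinOp:
    """Constraint out = op(in1, in2); forward-only for brevity."""
    def __init__(self, op, in1, in2, out):
        self.op, self.in1, self.in2, self.out = op, in1, in2, out
        in1.constraints.append(self)
        in2.constraints.append(self)
        # Pick up inputs that already have values (cf. `connect`
        # informing a new constraint of an existing value).
        self.propagate()
    def propagate(self):
        if self.in1.value is not None and self.in2.value is not None:
            self.out.set(self.op(self.in1.value, self.in2.value))

def binop(op):
    def combinator(x, y):
        z = Connector()      # allocate the result connector,
        BinOp(op, x, y, z)   # wire in the constraint,
        return z             # and return the connector as the value
    return combinator

c_add = binop(lambda a, b: a + b)
c_mul = binop(lambda a, b: a * b)
c_div = binop(lambda a, b: a / b)

def cv(value):
    z = Connector()
    z.value = value   # constants are pre-set and never retracted
    return z

def celsius_fahrenheit_converter(x):
    return c_add(c_mul(c_div(cv(9), cv(5)), x), cv(32))

C = Connector()
F = celsius_fahrenheit_converter(C)
C.set(25)
print(F.value)  # 77.0
```

As the footnote argues, the payoff is compositionality: because each combinator returns a connector, no intermediate connectors need to be named, at the cost of losing a handle on the constraint objects themselves.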
https://hackage.haskell.org/package/follow-file-0.0.1.2/docs/System-File-Follow.html
follow-file-0.0.1.2: Be notified when a file gets appended, solely with what was added. `follow` takes a file and informs you only when it changes. If it's deleted, you're notified with an empty ByteString. If it doesn't exist yet, you'll be informed of its entire contents upon its creation, and will proceed to "follow it" as normal.
https://www.spkx.net.cn/CN/10.7506/spkx1002-6630-20210803-025
• Bioengineering •

Investigation of Bacterial Flora on the Surface of Pig Carcasses and in the Environment during Slaughter

TANG Lin, GUO Keyu, LAI Jinghui, LI Jianlong, LI Qin, YANG Yong, ZOU Likou, LIU Shuliang

(1. College of Food Science, Sichuan Agricultural University, Ya’an 625014, China; 2. College of Resources, Sichuan Agricultural University, Chengdu 611130, China; 3. Food Processing and Safety Institute, Sichuan Agricultural University, Ya’an 625014, China)

• Published: 2022-07-01
• Funding: Chengdu Science and Technology Bureau Key Research and Development Support Program (2019-YF09-00050-SN)

Abstract: The combination of the traditional culture-dependent method and high-throughput sequencing was used to investigate the level of microbial contamination on the surface of pig carcasses during the slaughter and segmentation process. Meanwhile, the number of microbial colonies on the slaughter knives and the contact surfaces of the segmentation workshop was counted to identify the key pollution links in the slaughter and segmentation process. The results showed that a total of 881 458 valid sequences and 864 operational taxonomic units (OTUs) were obtained by sequencing. The samples were annotated to 22 phyla, 33 classes, 79 orders, 162 families, 382 genera and 613 species of microorganisms. Proteobacteria, Bacteroidota and Firmicutes were the dominant bacterial phyla. Acinetobacter and Aeromonas were the major dominant bacterial genera. The bacterial community diversity during the slaughter and segmentation process was in decreasing order as follows: bleeding > dehairing > segmentation > evisceration > final wash > chilling. The microbial diversity on the carcass surface was the lowest in the chilling stage, and increased after segmentation, indicating that the segmentation stage was the key contamination link. The results of traditional microbial counting were consistent with the results of sequencing.
From dehairing to chilling, the number of each bacterial group on the surface of pig carcasses was decreased, but increased significantly after segmentation. The total number of bacterial colonies on the carcass surfaces in the segmentation workshop was 6.11 (lg(CFU/cm2)) on average, which was higher than that on the slaughter knives (4.86 (lg(CFU/cm2)) on average), indicating that the contact surfaces of the segmentation workshop were the key pollution source, and so the segmentation link was the key pollution link.
https://math.stackexchange.com/questions/2858970/show-that-this-artinian-ring-is-also-noetherian/2859729
# Show that this Artinian ring is also Noetherian.

I'm working on the following problem and I'm stuck. Any hints or solutions would be appreciated.

Let $R$ be a left Artinian ring with Jacobson radical $J(R)$. If $R \neq J(R)$, show that $R$ is a left Noetherian ring.

Here are my thoughts: I think we have to use the ascending/descending chain definitions for Artinian and Noetherian, since I don't see how we can show that all ideals are finitely generated. Besides that, maybe we can somehow use the condition that $J(R)\neq R$ by considering an element in $R$ that is not in $J(R)$... but I'm not sure how that's useful.

Source: Spring 1996

The ring $R/J(R)$ is semisimple Artinian. Thus every Artinian left module over $R/J(R)$ is Noetherian, and the same is true for every Artinian left $R$-module $M$ such that $JM=0$. Since $J^n$ is Artinian as a left $R$-module, we can conclude that $J^{n}/J^{n+1}$ is Noetherian. It remains to show that $J=J(R)$ is nilpotent, say $J^m=0$, because then we can consider the chain $$0=J^m\subseteq J^{m-1}\subseteq \dots \subseteq J^2\subseteq J\subseteq R$$ where each factor is Noetherian. Since $R$ is Artinian, there is $m$ such that $J^k=J^m$, for every $k\ge m$. Suppose $J^m\ne0$. There is a left ideal $I$ such that $J^mI\ne0$, namely $I=J$. So we can pick $I_0$ minimal such that $J^mI_0\ne0$. Let $x\in I_0$ with $J^mx\ne0$; then $J^m(J^mx)=J^{2m}x=J^mx\ne0$. Thus we conclude $J^mx=I_0$, by minimality. In particular, there exists $y\in J^m$ with $yx=x$. However, $y\in J$, so $-y$ is left-quasi-regular: there exists $z$ with $zy=z+y$, hence $$zx=zyx=zx+yx$$ from which $yx=0$: a contradiction. (The proof is from Kaplansky’s “Fields and Rings”.)
https://www.ncatlab.org/nlab/show/symmetric+midpoint+algebra
# nLab symmetric midpoint algebra

## Idea

The idea of a symmetric midpoint algebra comes from Peter Freyd.

## Definition

A symmetric midpoint algebra is a midpoint algebra $(M,\vert)$ with an element $\odot:M$ and a function $(-)^{\bullet}: M \to M$ such that

• for all $a$ in $M$, $(a^{\bullet})^{\bullet} = a$
• for all $a$ in $M$, $a^{\bullet} \vert a = \odot$
• for all $a$ and $b$ in $M$, $(a \vert b)^{\bullet} = a^{\bullet} \vert b^{\bullet}$

## Properties

$\odot$ is the only element in $M$ such that $\odot^\bullet = \odot$.

## Examples

The rational numbers, real numbers, and the complex numbers with $a \vert b \coloneqq \frac{a + b}{2}$, $\odot = 0$, and $a^{\bullet} = -a$ are examples of symmetric midpoint algebras.

The trivial group with $a \vert b = a \cdot b$, $\odot = 1$, and $a^{\bullet} = a^{-1}$ is a symmetric midpoint algebra.

## References

• Peter Freyd, Algebraic real analysis, Theory and Applications of Categories, Vol. 20, 2008, No. 10, pp 215-306 (tac:20-10)

Last revised on June 18, 2021 at 20:50:16. See the history of this page for a list of all contributions to it.
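The rational-number example can be checked mechanically. Below is a small Python sketch (my own, not from the nLab page) that verifies the three axioms above, plus midpoint commutativity, on a grid of sample rationals using exact `Fraction` arithmetic:

```python
from fractions import Fraction

def mid(a, b):
    # the midpoint operation a | b = (a + b) / 2
    return (a + b) / 2

def refl(a):
    # the (-)^bullet operation: point reflection through 0
    return -a

point = Fraction(0)   # the element written "circled dot" above

samples = [Fraction(n, d) for n in range(-3, 4) for d in (1, 2, 3)]
for a in samples:
    assert refl(refl(a)) == a                            # involution
    assert mid(refl(a), a) == point                      # a^bullet | a = point
    for b in samples:
        assert mid(a, b) == mid(b, a)                    # midpoint commutativity
        assert refl(mid(a, b)) == mid(refl(a), refl(b))  # homomorphism axiom
print("axioms hold on", len(samples), "sample points")
```

Exact rational arithmetic avoids the floating-point rounding that would make equality checks unreliable here.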
https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/data-type-mappings-in-ado-net
The .NET Framework is based on the common type system, which defines how types are declared, used, and managed in the runtime. It consists of both value types and reference types, which all derive from the Object base type. When working with a data source, the data type is inferred from the data provider if it is not explicitly specified. For example, a DataSet object is independent of any specific data source. Data in a DataSet is retrieved from a data source, and changes are persisted back to the data source by using a DataAdapter. This means that when a DataAdapter fills a DataTable in a DataSet with values from a data source, the resulting data types of the columns in the DataTable are .NET Framework types, instead of types specific to the .NET Framework data provider that is used to connect to the data source. Likewise, when a DataReader returns a value from a data source, the resulting value is stored in a local variable that has a .NET Framework type. For both the Fill operations of the DataAdapter and the Get methods of the DataReader, the .NET Framework type is inferred from the value returned from the .NET Framework data provider. Instead of relying on the inferred data type, you can use the typed accessor methods of the DataReader when you know the specific type of the value being returned. Typed accessor methods give you better performance by returning a value as a specific .NET Framework type, which eliminates the need for additional type conversion.

Note: Null values for .NET Framework data provider data types are represented by DBNull.Value.

In This Section

SQL Server Data Type Mappings
Lists inferred data type mappings and data accessor methods for System.Data.SqlClient.

OLE DB Data Type Mappings
Lists inferred data type mappings and data accessor methods for System.Data.OleDb.

ODBC Data Type Mappings
Lists inferred data type mappings and data accessor methods for System.Data.Odbc.

Oracle Data Type Mappings
Lists inferred data type mappings and data accessor methods for System.Data.OracleClient.

Floating-Point Numbers
Describes issues that developers frequently encounter when working with floating-point numbers.
https://www.deepdyve.com/lp/springer_journal/large-time-behavior-of-solutions-to-vlasov-poisson-fokker-planck-O6QdV2xoFc
# Large-Time Behavior of Solutions to Vlasov-Poisson-Fokker-Planck Equations: From Evanescent Collisions to Diffusive Limit

The present contribution investigates the dynamics generated by the two-dimensional Vlasov-Poisson-Fokker-Planck equation for charged particles in a steady inhomogeneous background of opposite charges. We provide global in time estimates that are uniform with respect to initial data taken in a bounded set of a weighted $L^2$ space, and where dependencies on the mean-free path $\tau$ and the Debye length $\delta$ are made explicit. In our analysis the mean free path covers the full range of possible values: from the regime of evanescent collisions $\tau \rightarrow \infty$ to the strongly collisional regime $\tau \rightarrow 0$. As a counterpart, the largeness of the Debye length, which enforces a weakly nonlinear regime, is used to close our nonlinear estimates. Accordingly we pay special attention to relaxing as much as possible the $\tau$-dependent constraint on $\delta$ ensuring exponential decay with explicit $\tau$-dependent rates towards the stationary solution. In the strongly collisional limit $\tau \rightarrow 0$, we also examine all possible asymptotic regimes selected by a choice of observation time scale. Here also, our emphasis is on strong convergence, uniformity with respect to time and to initial data in bounded sets of an $L^2$ space. Our proofs rely on a detailed study of the nonlinear elliptic equation defining stationary solutions and a careful tracking and optimization of parameter dependencies of hypocoercive/hypoelliptic estimates.
Journal: Journal of Statistical Physics (Springer US), Volume 170 (5), Feb 1, 2018, 37 pages
Subject: Physics; Statistical Physics and Dynamical Systems; Theoretical, Mathematical and Computational Physics; Physical Chemistry; Quantum Physics
ISSN: 0022-4715 · eISSN: 1572-9613 · DOI: 10.1007/s10955-018-1963-7