content
stringlengths
86
994k
meta
stringlengths
288
619
How big is a 60 90 field? How big is a 60 90 field? The term 60/90 means that the distance between the pitcher’s mound and home plate is 60 feet, while the distance between bases is 90 feet. This is different than the standard Little League Majors division field (40/60) and 50/70 field. What is the distance from home plate to second base on a 50 70 field? Distance from back point of home plate to CENTER of second base: 99 feet. How far is 60 yards on a baseball field? 60 yards is 54.864 meters, which is almost equivalent to the rarely held indoor 55 meter dash. The 60 meter dash is the indoor standard today. What is a 50 70 field? A 50/70 baseball field has dimensions that are larger than those of Little League, but smaller than those of high school baseball and beyond, with the pitching mound 50 feet from home plate, and the bases sitting 70 feet apart. Why are they moving 2nd base? We already know Major League Baseball wants to use bigger bases, but now we know moving second base slightly inward toward home plate in order to make the bases more uniform is also on the table. What is a good 60 yard dash? Most Major League Baseball (MLB) clubs look for times under 7.00 seconds. A 60 yard dash time between 6.7 – 6.9 usually equate to an average runner on the playing field. How do you run a 60 yard dash? 10 Tips For Running A Faster 60 Yard Dash 1. Turn Both Feet – Not Shoulders – Slightly Towards The Finish Line. 2. Turn Your Left Knee Toward the Finish line. 3. Load Your Left Hand Back So It’s Ready To “Rip” Forwards. 4. Have 60% Of Your Weight On Your Right Leg And 40% On Your Left. 5. “Frozen To Explosion” Left Leg Push + Left Arm Rip. How is the 90 feet between bases measured? The 90 feet is to the back of the bases. Which means when you measure straight line distance between the two closest points between two bases, it’s about 88 feet. What size field does LLWS play on? Lamade Stadium, which meant that in 2006, it took pushed its outfield fences back 20 feet. Both fields are the same size, with mounds 46 feet away from home plate, base paths of 60 feet and outfield fences of 225 feet. The biggest difference between the two is the capacity. Did MLB make bases bigger? Why did MLB make the bases bigger? Baseball is making the bases bigger to promote player safety, with the happy side effect of creating more offense. Why is the infield watered? Not enough moisture will lead to cracking, too much moisture will lead to a wet tacky ball field. So we have to manage the moisture content in our clays daily. Management of ballfield clay moisture content is done by hand watering on the infield clays or by tarps on the mound and home plate.
{"url":"https://www.evanewyork.net/how-big-is-a-60-90-field/","timestamp":"2024-11-02T22:19:40Z","content_type":"text/html","content_length":"40883","record_id":"<urn:uuid:f00f8798-87b0-4c78-ab73-330e007e605f>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00855.warc.gz"}
Construct Binary Tree from Inorder and Postorder Traversal Problem Statement: Given inorder and postorder traversal of a binary tree, construct the binary tree. I am assuming that you have strong understanding of recursion and how recursion works. First thing first: why do we need both Inorder and Postorder traversal to reconstruct a Binary Tree ? The below two binary trees would clarify that: \ / The inorder traversal for the above two binary trees are 5 -> 7 and 7 -> 5 respectively. the Postorder traversal for both the binary trees are 7 -> 5. So just by having Postorder traversal won't help in reconstructing the tree correctly. Why can't we just have inorder traversal ? Because we do not know what the value of the root is. This is where Postorder traversal comes into picture. We know in Postorder traversal the root is always visited at the end. Now, let's take the below binary tree: / \ / \ / \ Postorder: {2, 3, 1, 5, 4, 7, 6} Inorder: {2, 1, 3, 6, 5, 7, 4} In Postorder we process the left subtree , followed by right subtree , followed by the In Inorder we process left subtree , followed by , followed by right subtree Now let's see how we can use the combined power of Postorder and Inorder to reconstruct the binary tree. Looking at the Traversal, we know 6 is the root. Now looking at the Traversal we can say that • all the nodes on the left of 6 in inorder traversal i.e. {2, 1, 3}, would constitute left subtree of root 6. • all the nodes on the right of 6 in inorder traversal i.e. {5, 7, 4}, would constitute right subtree of root 6. Now since we know that in Postorder: we first process the whole left subtree of the root, and then process the whole right subtree of the root before visiting the root. Using this information we would quickly grab the left and right subtree from traversal result as well. We will be using the size of the subtrees to achieve this goal, where size of subtree = number of nodes in the subtree. If from traversal result we get that there are nodes in left subtree and nodes in right subtree, then we would know that the first elements in belong to left subtree of the root and the next elements belong to the right subtree. Now that we have gotten both for both left subtree and right subtree, we could run the above described approach on them to recursively construct the left and right subtree. The below code beautifully implements this idea. Java code: Login to Access Content Python 2 code: Login to Access Content Related Chapters:
{"url":"https://systemsdesign.cloud/Algo/Tree/InorderPostorder","timestamp":"2024-11-07T17:14:49Z","content_type":"text/html","content_length":"45194","record_id":"<urn:uuid:4292c06c-0956-43ad-86fb-07d5b04fe892>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00272.warc.gz"}
Chapter 2 Linear Regression | Basics of Statistical Learning Chapter 2 Linear Regression This chapter will discuss linear regression models, but for a very specific purpose: using linear regression models to make predictions. Viewed this way, linear regression will be our first example of a supervised learning algorithm. Specifically, we will discuss: • The regression function and estimating conditional means. • Using the lm() and predict() functions in R. • Data splits used in evaluation of model performance for machine learning tasks. • Metrics for evaluating models used for the regression task. This chapter will be the most action packed as we will establish a framework that will be recycled throughout the rest of the text. This chapter is currently under construction. While it is being developed, the following links to the STAT 432 course notes. 2.1 R Setup and Source library(tibble) # data frame printing library(dplyr) # data manipulation library(knitr) # creating tables library(kableExtra) # styling tables Additionally, objects from ggplot2, GGally, and ISLR are accessed. Recall that the Welcome chapter contains directions for installing all necessary packages for following along with the text. The R Markdown source is provided as some code, mostly for creating plots, has been suppressed from the rendered document that you are currently reading. 2.2 Explanation versus Prediction Before we even begin to discuss regression, we make a strong declaration: this is not a text about general statistical inference. We will focus our efforts on a narrow sub-goal of inference: making predictions. We will only make a passing attempt to explain why our models make the predictions they do and it is very possible that there will be zero causal claims in this book. While it would certainly be nice (but extremely difficult) to uncover explanations for predictions or causal relationships, our focus will be on finding predictive relationships and checking their performance so as not to clutter the presentation. Suppose (although it is likely untrue) that there is a strong correlation between wearing a wrist watch and car accidents. That is, we can see in some data that car drivers who wear wrist watches get into more traffic accidents. Also, assume that it is the case that wrist watches actually do not cause accidents, which seems like a reasonable assumption. There is only a correlation, but this is the result of confounding variables.^31 Depending on your frame of reference, you should view this information in very different ways. • Suppose you are a car insurance company. This is great news! You can now more accurately predict the number of accidents of your policy holders if you know whether or not your policy holders wear a wrist watch. For the sake of understanding how much your company will need to pay out in a year, you don’t care what causes accidents, you just want to be able to predict (estimate) the number of accidents. • Suppose you are a car driver. As a driver, you want to stay safe. That is, you want to do things that decrease accidents. In this framing, you care about things that cause accidents, not things that predict accidents. In other words, this correlation information should not lead to you throwing away your wrist watch. Disclaimer: Extremely high correlation should not simply be ignored. For example, there is a very high correlation between smoking and lung cancer.^32 However, this strong correlation is not proof that smoking causes lung cancer. Instead, additional study is needed to rule out confounders, establish mechanistic relationships, and more. 2.3 Task Setup We now introduce the regression task. Regression is a subset of a broader machine learning tasks called supervised learning, which also include classification.^33 Stated simply, the regression tasks seeks to estimate (predict) a numeric quantity. For example: • Estimating the salary of a baseball player given statistics about their previous year’s performance. • Estimating the price of a home for sale given the attributes of the home such as square footage, location, and number of bathrooms. • Estimating the credit score of a bank customer, given demographic information and recent transaction history. • Estimating the number of downloads of a podcast episode given its length, genre, and time of day released. Each of these quantities is some numeric value. The goal of the regression task is to estimate (predict) these quantities when they are unknown through the use of additional, possibly correlated quantities, for example the offensive and defensive statistics of a baseball player, or the location and attributes of a home. 2.4 Mathematical Setup To get a better grasp of what regression is, we move to defining the task mathematically. Consider a random variable \(Y\) which represents a response (or outcome or target) variable, and \(p\) feature variables \(\boldsymbol{X} = (X_1, X_2, \ldots, X_p)\).^34 In the most common regression setup, we assume that the response variable \(Y\) is some function of the features, plus some random noise. \[ Y = f(\boldsymbol{X}) + \epsilon \] • We call \(f(\boldsymbol{X})\) the signal. This \(f\) is the function that we would like to learn. • We call \(\epsilon\) the noise. We do not want to learn this, which we risk doing if we overfit. (More on this later.) So our goal will be to find some \(f\) such that \(f(\boldsymbol{X})\) is close to \(Y\). But how do we define close? There are many ways but we will start with, and most often consider, squared error loss. Specifically, we define a loss function, \[ L(Y, f(\boldsymbol{X})) \triangleq \left(Y - f(\boldsymbol{X})\right) ^ 2 \] Now we can clarify the goal of regression, which is to minimize the above loss, on average. We call this the risk of estimating \(Y\) using \(f(\boldsymbol{X})\). \[ R(Y, f(\boldsymbol{X})) \triangleq \mathbb{E}[L(Y, f(\boldsymbol{X}))] = \mathbb{E}_{\boldsymbol{X}, Y}[(Y - f(\boldsymbol{X})) ^ 2] \] Before attempting to minimize the risk, we first re-write the risk after conditioning on \(\boldsymbol{X}\). \[ \mathbb{E}_{\boldsymbol{X}, Y} \left[ (Y - f(\boldsymbol{X})) ^ 2 \right] = \mathbb{E}_{\boldsymbol{X}} \mathbb{E}_{Y \mid \boldsymbol{X}} \left[ ( Y - f(\boldsymbol{X}) ) ^ 2 \mid \boldsymbol{X} = \boldsymbol{x} \right] \] Minimizing the right-hand side is much easier, as it simply amounts to minimizing the inner expectation with respect to \(Y \mid \boldsymbol{X}\), essentially minimizing the risk pointwise, for each It turns out, that the risk is minimized by the conditional mean of \(Y\) given \(\boldsymbol{X}\), \[ \mu(\boldsymbol{x}) \triangleq \mathbb{E}[Y \mid \boldsymbol{X} = \boldsymbol{x}] \] which we call the regression function.^35 This is not a “learned” function. This is the function we would like to learn in order to minimize the squared error loss on average. \(f\) is any function, \(\mu\) is the function that would minimize squared error loss on average if we knew it, but we will need to learn it form the data. Note that \(\boldsymbol{x}\) represents (potential) realized values of the random variables \(\boldsymbol{X}\). \[ \boldsymbol{x} = (x_1, x_2, \ldots, x_p) \] We can now state the goal of the regression task: we want to estimate the regression function. How do we do that? 2.5 Linear Regression Models What do linear regression models do? They estimate the conditional mean of \(Y\) given \(\boldsymbol{X}\)!^36 Consider the following probability model \[ Y = 1 - 2x - 3x ^ 2 + 5x ^ 3 + \epsilon \] where \(\epsilon \sim \text{N}(0, \sigma^2)\). Alternatively we could write \[ Y \mid X \sim \text{N}(1 - 2x - 3x ^ 2 + 5x ^ 3, \sigma^2) \] This perhaps makes it clearer that \[ \mu(x) = \mathbb{E}[Y \mid \boldsymbol{X} = \boldsymbol{x}] = 1 - 2x - 3x ^ 2 + 5x ^ 3 \] What do linear models do? More specifically than before, linear regression models estimate the conditional mean of \(Y\) given \(\boldsymbol{X}\) by assuming this conditional mean is a linear combination of the feature variables. Suppose for a moment that we did not know the above true probability model, or even more specifically the regression function. Instead, all we had was some data, \((x_i, y_i)\) for \(i = 1, 2, \ ldots, n\). x y -0.47 -0.06 -0.26 1.72 0.15 1.39 0.82 0.68 -0.60 -0.27 0.80 1.55 0.89 0.76 0.32 -0.40 0.26 -1.85 -0.88 -1.85 How do we fit (or “train” in ML language) a linear model with this data? In order words, how to be learn the regression function from this data with a linear regression model? First, we need to make assumptions about the form of the regression function, up to, but not including some unknown parameters. Consider three possible linear models, in particular, three possible regression functions. Degree 1 Polynomial \[ \mu(x) = \beta_0 + \beta_1 x \] Degree 3 Polynomial \[ \mu(x) = \beta_0 + \beta_1 x + \beta_2 x^2 + \beta_3 x^3 \] Degree 9 Polynomial \[ \mu(x) = \beta_0 + \beta_1 x + \beta_2 x^2 + \beta_3 x^3 + \ldots + \beta_9 x^9 \] These are chosen mostly arbitrarily for illustrative purposes which we’ll see in a moment. So how do we actually fit these models, that is train them, with the given data. We have a couple of options: Maximum Likelihood or Least Squares! In this case, they actually produce the same result, so we use least squares for simplicity of explanation. To fit the degree 3 polynomial using least squares, we minimize \[ \sum_{i = 1}^{n}\left(y_i - (\beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \beta_3 x_i^3)\right) ^ 2 \] Skipping the details of the minimization, we would acquire \(\hat{\beta}_0\), \(\hat{\beta}_1\), \(\hat{\beta}_2\), and \(\hat{\beta}_3\) which are estimates of \({\beta}_0\), \({\beta}_1\), \({\ beta}_2\), and \({\beta}_3\). Taken together, we would have \[ \hat{\mu}(x) = \hat{\beta}_0 + \hat{\beta}_1 x_1 + \hat{\beta}_2 x_2^2 + \hat{\beta}_3 x_3^3 \] which is then an estimate of \(\mu(x)\). While in this case, it will almost certainly not be the case that \(\hat{\beta}_0 = 1\) or \(\hat{\beta}_1 = -2\) or \(\hat{\beta}_2 = -3\) or \(\hat{\beta}_3 = 5\), which are the true values of the \(\beta\) coefficients, they are at least reasonable estimates. As a bit of an aside, note that in this case, it is sort of ambiguous as to whether there is one feature, \(x\), which is seen in the data, or three features \(x\), \(x^2\), and \(x^3\), which are seen in the model. The truth is sort of in the middle. The data has a single feature, but through feature engineering, we have created two additional features for fitting the model. Note that when using R, you do not need to modify the data to do this, instead you should use R’s formula syntax to specify this feature engineering when fitting the model. More on this when we discuss the lm() function in R. We introduce this somewhat confusing notion early so we can emphasize that linear models are about linear combinations of features, not necessarily linear relationships. Although, linear models are very good at learning linear relationships. Suppose instead we had assumed that \[ \mu(x) = \beta_0 + \beta_1 x \] This model is obviously flawed as it doesn’t contain enough terms to capture the true regression function. (Later we will say this model is not “flexible” enough.) Or, suppose we had assumed \[ \mu(x) = \beta_0 + \beta_1 x + \beta_2 x^2 + \beta_3 x^3 + \ldots + \beta_9 x^9 \] This model is also flawed, but for a different reason. (Later we will say this model is too “flexible.”) After using least squares, we will obtain some \(\hat{\beta}_9\) even though there is not a 9th degree term in the true regression function! Let’s take a look at this visually. Here we see the three models fit to the data above. The dashed black curve is the true mean function, that is the true mean of \(Y\) given \(x\), and the solid colored curves are the estimated mean Now we ask the question: which of these models is best? Given these pictures, there are two criteria that we could consider. • How close is the estimated regression (mean) function to the data? (Degree 9 is best! There is no error!) • How close is the estimated regression (mean) function to the true regression (mean) function? (Degree 3 is best.) From the presentation here, it’s probably clear that the latter is actually what matters. We can demonstrate this by generating some “new” data. These plots match the plots above, except newly simulated data is shown. (The regression functions were still estimated with the original data.) Note that the degree 3 polynomial matches the data about the same as before. The degree 9 polynomial now correctly predicts none of the new data and makes some huge errors. We will define these concepts more generally later, but for now we note that: • The Degree 9 Polynomial is overfitting. It performs well on the data used to fit the model, but poorly on new data. • The Degree 1 Polynomial is underfitting. It performs poorly on the data used to fit the model and poorly on new data. There’s a bit of a problem though! In practice, we don’t know the true mean function, and we don’t have the magical ability to simulate new data! Yikes! After we discuss a bit about how to fit these models in R, we’ll return to this issue.^37 2.6 Using lm() Before we continue, let’s consider a different data generating process. We first define this data generating process as an R function. gen_mlr_data = function(sample_size = 250) { x1 = round(runif(n = sample_size), 2) x2 = round(runif(n = sample_size), 2) x3 = round(runif(n = sample_size), 2) x4 = factor(sample(c("A", "B", "C"), size = sample_size, replace = TRUE)) x5 = round(runif(n = sample_size), 2) x6 = round(runif(n = sample_size), 2) y = 2 + x1 + sin(x2) + 3 * x3 ^ 2 + 3 * (x4 == "B") - 2 * (x4 == "C") + rnorm(n = sample_size, mean = 0, sd = 0.5) tibble(y, x1, x2, x3, x4, x5, x6) We then run the function and store the data that is returned. We then inspect the data. ## # A tibble: 6 x 7 ## y x1 x2 x3 x4 x5 x6 ## <dbl> <dbl> <dbl> <dbl> <fct> <dbl> <dbl> ## 1 2.85 0.91 0.33 0.14 A 0.53 0.24 ## 2 6.22 0.94 0.19 0.18 B 0.7 0.51 ## 3 6.71 0.29 0.27 0.52 B 0.05 0.51 ## 4 7.84 0.83 0.53 0.81 B 0.92 0.76 ## 5 2.75 0.64 0.02 0.12 A 0.03 0.27 ## 6 4.60 0.52 0.8 0.89 A 0.78 0.69 Note that we see only numeric (dbl or int) and factor (fctr) variables. For now, we will require that data contains only these types, and in particular, we will coerce any categorical variables to be Mathematically, this data was generated from the probability model \[ Y \mid \boldsymbol{X} \sim \text{N}(2 + 1\cdot x_1 + 1 \cdot \sin(x_2) + 3 \cdot x_3^3 + 3 \cdot x_{4B} -2 \cdot x_{4C}, \sigma^2 = 0.25) \] • \(x_{4B}\) is a dummy variable which takes the value 1 when \(x_4 = \text{B}\) and 0 otherwise • \(x_{4C}\) is a dummy variable which takes the value 1 when \(x_4 = \text{C}\) and 0 otherwise In particular, the true mean function is \[ \mu(\boldsymbol{x}) = 2 + 1\cdot x_1 + 1 \cdot \sin(x_2) + 3 \cdot x_3^3 + 3 \cdot x_{4B} -2 \cdot x_{4C} \] Now, finally, let’s fit some models it R to this data! To do so, we will use one of the most important functions in R, the lm() function. Let’s specify the form of some assumed mean functions of models that we would like to fit. Model 1 or mod_1 in R \[ \mu_1(\boldsymbol{x}) = \beta_0 + \beta_1 x_1 \] Model 2 or mod_2 in R \[ \mu_2(\boldsymbol{x}) = \beta_0 + \beta_1 x_1 + \beta_2 x_2 \] Model 3 or mod_3 in R \[ \mu_3(\boldsymbol{x}) = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \beta_{4B} x_{4B} +\beta_{4C} x_{4C} + \beta_5 x_5 + \beta_6 x_6 \] Model 4 or mod_4 in R \[ \mu_4(\boldsymbol{x}) = \beta_0 + \beta_1 x_1 + \beta_2 \sin(x_2) + \beta_3 x_3^3 + \beta_{4B} x_{4B} + \beta_{4C} x_{4C} \] Now, finally, R! ## (Intercept) x1 ## 3.7834423 0.9530758 Nothing too interesting here about fitting Model 1. We see that the coef() function returns estimates of the \(\beta_0\) and \(\beta_1\) parameters defined above. ## (Intercept) x1 x2 ## 3.8747999 0.9400654 -0.1802538 Again, Model 2 isn’t too interesting. We see that the coef() function returns estimate of the \(\beta_0\), \(\beta_1\), and \(\beta_2\) parameters defined above. ## (Intercept) x1 x2 x3 x4B x4C ## 1.71015079 0.76017877 0.77637360 3.00479841 3.06812204 -1.93068734 ## x5 x6 ## -0.12248770 -0.04797294 Now, Model 3, we see a couple interesting things. First, the formula syntax y ~ . fits a model with y as the response, and all other variables in the sim_mlr_data data frame (tibble) as features. Also note: we did not manually create the needed dummy variables! R did this for us! ## [1] "A" "B" "C" Because x4 is a factor variable, R uses the first level, A, as the reference level, and then creates dummy variables for the remaining levels. Cool! ## (Intercept) x1 I(sin(x2)) I(x3^3) x4B x4C ## 2.3435702 0.8176247 0.9159963 3.0446314 3.0369950 -1.9421931 Our last model, mod_4 is the most interesting. It makes use of the inhibit function, I(). This allows for on-the-fly feature engineering based on available features. We’re creating new features via R’s formula syntax as we fit the model. To see why this is necessary, consider the following: ## Call: ## lm(formula = y ~ (x1 + x2)^2, data = sim_mlr_data) ## Coefficients: ## (Intercept) x1 x2 x1:x2 ## 4.1800 0.3353 -0.8259 1.3130 This created an interaction term! That means the ^ operator has different uses depending on the context. In specifying a formula, it has a particular use, in this case specifying an interaction term, and all lower order terms. However, inside of I() it will be used for exponentiation. For details, use ?I and ?formula. These are complex R topics, but it will help to start to learn them.^38 For the first half of this book, we will always keep the data mostly untouched, and rely heavily on the use of R’s formula syntax. If you are ever interested in what’s happening under the hood when you use the formula syntax, and you recall the linear algebra necessary to perform linear regression, the model.matrix() function will be useful. ## (Intercept) x1 I(sin(x2)) I(x3^3) x4B x4C ## 1 1 0.91 0.32404303 0.002744 0 0 ## 2 1 0.94 0.18885889 0.005832 1 0 ## 3 1 0.29 0.26673144 0.140608 1 0 ## 4 1 0.83 0.50553334 0.531441 1 0 ## 5 1 0.64 0.01999867 0.001728 0 0 ## 6 1 0.52 0.71735609 0.704969 0 0 ## [,1] ## (Intercept) 2.3435702 ## x1 0.8176247 ## I(sin(x2)) 0.9159963 ## I(x3^3) 3.0446314 ## x4B 3.0369950 ## x4C -1.9421931 Back to talking about mod_4. Recall that we had assumed that \[ \mu_4(\boldsymbol{x}) = \beta_0 + \beta_1 x_1 + \beta_2 \sin(x_2) + \beta_3 x_3^3 + \beta_{4B} x_{4B} + \beta_{4C} x_{4C} \] Also recall that the true mean function is \[ \mu(\boldsymbol{x}) = 2 + 1\cdot x_1 + 1 \cdot \sin(x_2) + 3 \cdot x_3^3 + 3 \cdot x_{4B} -2 \cdot x_{4C} \] Because we know this, we can investigate how well our model is performing. We know the true values of the parameters, in this case • \(\beta_0 = 2\) • \(\beta_1 = 1\) • \(\beta_2 = 1\) • \(\beta_3 = 3\) • \(\beta_{4B} = 3\) • \(\beta_{4C} = -2\) • \(\beta_5 = 0\) (\(x_5\) is not used in the true mean function.) • \(\beta_6 = 0\) (\(x_6\) is not used in the true mean function.) We also have the estimated coefficients from mod_4. ## (Intercept) x1 I(sin(x2)) I(x3^3) x4B x4C ## 2.3435702 0.8176247 0.9159963 3.0446314 3.0369950 -1.9421931 • \(\hat{\beta}_0 = 2.344\) • \(\hat{\beta}_1 = 0.818\) • \(\hat{\beta}_2 = 0.916\) • \(\hat{\beta}_3 = 3.045\) • \(\hat{\beta}_{4B} = 3.037\) • \(\hat{\beta}_{4C} = -1.942\) • \(\hat{\beta}_5 = 0\) (We assumed \(x_5\) is not used in the true mean function.) • \(\hat{\beta}_6 = 0\) (We assumed \(x_6\) is not used in the true mean function.) Our estimated regression (mean) function is then \[ \hat{\mu}_4(\boldsymbol{x}) = \hat{\beta}_0 + \hat{\beta}_1 x_1 + \hat{\beta}_2 \sin(x_2) + \hat{\beta}_3 x_3^3 + \hat{\beta}_{4B} x_{4B} + \hat{\beta}_{4C} x_{4C} \] Perfect? No. Pretty good? Maybe. However, in reality, this is not a check that we can perform! We still need an evaluation strategy that doesn’t depend on knowing the true model! Note that the other models are “bad” in this case because they are either missing features (mod_1 and mod_2) or they are both missing features and contain unnecessary features (mod_3). 2.7 The predict() Function We stated previously that fitting a linear regression model means that we are learning the regression (mean) function. Now that we fit and stored some models, how do we access these estimated regression (mean) functions? The predict() function! The predict() function will be the workhorse of STAT 432. Let’s see how to use it with models fit using the lm() function. ## # A tibble: 1 x 7 ## y x1 x2 x3 x4 x5 x6 ## <dbl> <dbl> <dbl> <dbl> <fct> <dbl> <dbl> ## 1 0.751 0.17 0.81 0.38 C 0.6 0.6 Suppose we wanted to estimate the mean of \(Y\) when • \(x_1 = 0.17\) • \(x_2 = 0.81\) • \(x_3 = 0.38\) • \(x_4 = \text{C}\) • \(x_5 = 0.38\) • \(x_6 = 0.38\) In other words, we want to estimate \[ \mathbb{E}[Y \mid \boldsymbol{X} = \texttt{new_obs}] = \mathbb{E}[Y \mid X_1 = 0.17, X_2 = 0.81, X_3 = 0.38, X_4 = \text{C}, X_5 = 0.6, X_6 = 0.6] \] The predict() function to the rescue! ## 1 ## 3.945465 What’s being returned here? \[ \hat{\mu}_1(\texttt{new_obs}) = \hat{\mathbb{E}}[Y \mid \boldsymbol{X} = \texttt{new_obs}] = 3.9454652 \] The predict function, together with a trained model, is the estimated regression (mean) function! Supply a different trained model, then you get that estimated regression (mean) function. ## 1 ## 1.370883 What’s being returned here? \[ \hat{\mu}_4(\texttt{new_obs}) = \hat{\mathbb{E}}[Y \mid \boldsymbol{X} = \texttt{new_obs}] = 1.3708827 \] We could compare these two estimates of the conditional mean of \(Y\) to the true value of y observed in the observation. More on that in the next section. If given an entire dataset, instead of a single observation, predict() returns the estimated conditional mean of each observation. ## 1 2 3 4 5 6 7 8 ## 7.8896349 5.4061018 1.3788387 0.8560024 6.6246872 8.2203544 3.2140060 3.5738889 ## 9 10 ## 5.9928135 8.4908895 A warning: Do not name the second argument to the predict function. This will cause issues because sometimes the name of that argument is newdata, as it is here, but sometimes it is data. If you use the wrong name, bad things will happen. It is safer to simply never name this argument. (However, in general, arguments after the first should be named. The predict() function is the exception.) 2.8 Data Splitting Note: Many readers will have possibly seen some machine learning previously. For now, please pretend that you have never heard of or seen cross-validation. Cross-validation will clutter the initial introduction of many concepts. We will return to and formalize it later. OK. So now we can fit models, and make predictions (create estimates of the conditional mean of \(Y\) given values of the features), how do we evaluate how well our models perform, without knowing the true model? First, let’s state a somewhat specific goal. We would like to train models that generalize well, that is, perform well on “new” or “unseen” data that was not used to train the model.^39 To accomplish this goal, we’ll just “create” a dataset that isn’t used to train the model! To create it, we will just split it off. (We’ll actually do so twice.) First, denote the entire available data as \(\mathcal{D}\) which contains \(n\) observations of the response variable \(y\) and \(p\) feature variables \(\boldsymbol{x}_i = (x_{i1}, x_{i2}, \ldots, \[ \mathcal{D} = \{ (\boldsymbol{x}_i, y_i) \in \mathbb{R}^p \times \mathbb{R}, \ i = 1, 2, \ldots n \} \] We first split this data into a train and test set. We will discuss these two datasets ad nauseam, but let’s set two rules right now.^40 • You can do whatever you would like with the training data. □ However, it is best used to train, evaluate, and select models. • Do not, ever, for any reason, fit a model using test data! □ Additionally, you should not select models using test data. □ In STAT 432, we will only use test data to provide a final estimate of the generalization error of a chosen model. (Much more on this along the way.) Again, do not, ever, for any reason, fit a model using test data! I repeat: Do not, ever, for any reason, fit a model using test data! (You’ve been warned.) To perform this split, we will randomly select some observations for the train (trn) set, the remainder will be used for the test (tst) set. \[ \mathcal{D} = \mathcal{D}_{\texttt{trn}} \cup \mathcal{D}_{\texttt{tst}} \] As a general guiding heuristic, use 80% of the data for training, 20% for testing. In addition to the train-test split, we will further split the train data into estimation and validation sets. These are somewhat confusing terms, developed for STAT 432, but hear us out.^41 To perform this split, we will randomly select some observations (from the train set) for the estimation (est) set, the remainder will be used for the validation (val) set. \[ \mathcal{D}_{\texttt{trn}} = \mathcal{D}_{\texttt{est}} \cup \mathcal{D}_{\texttt{val}} \] Again, use 80% of the data for estimation, 20% for validation.^42 The need for this second split might not become super clear until later on, but the general idea is this: • Fit a bunch of candidate models to the estimation data. (Think of this as the data to estimate the model parameters. That’s how we chose the name.) • Using these candidate models, evaluate how well they perform using the validation data. • After evaluating and picking a single model, re-fit this model to the entire training dataset. • Provide an estimate of how well this model performs using the test data. At this point it will likely be unclear why we cannot use the same data set for selecting a model, and evaluating its performance, but we aren’t ready for that discussion yet. For now, just follow the rules while you think about why we’re worried about this. Now that we have data for estimation, and validation, we need some metrics for evaluating these models. 2.9 Regression Metrics If our goal is to “predict” then we want small errors. In general there are two types of errors we consider: • Squared Errors: \(\left(y_i - \hat{\mu}(\boldsymbol{x}_i)\right) ^2\) • Absolute Errors: \(|y_i - \hat{\mu}(\boldsymbol{x}_i)|\) In both cases, we will want to consider the average errors made. We define two metrics. Root Mean Square Error (RMSE) \[ \text{rmse}\left(\hat{f}_{\texttt{set_f}}, \mathcal{D}_{\texttt{set_D}} \right) = \sqrt{\frac{1}{n_{\texttt{set_D}}}\displaystyle\sum_{i \in {\texttt{set_D}}}^{}\left(y_i - \hat{f}_{\texttt {set_f}}({x}_i)\right)^2} \] Mean Absolute Error (MAE) \[ \text{mae}\left(\hat{f}_{\texttt{set_f}}, \mathcal{D}_{\texttt{set_D}} \right) = \frac{1}{n_{\texttt{set_D}}}\displaystyle\sum_{i \in {\texttt{set_D}}}^{}\left|y_i - \hat{f}_{\texttt{set_f}}({x} _i)\right| \] • \(\hat{f}_{\texttt{set_f}}\) is a function \(f\) estimated using a model fit to some dataset \(\texttt{set_f}\). • The \((x_i, y_i)\) are data from dataset \(\mathcal{D}_{\texttt{set_D}}\). For both, smaller is better. (Less error on average.) In both, we note both the data that the model was fit to, as well as the data the model is evaluated on. Depending on the data used for these different sets, we “define” different metrics. For example, for RMSE, we have: Train RMSE: Evaluate a model fit to estimation data, using estimation data. Note that this metric is only used for illustrative or diagnostic purposes. Do not use this metric to select a model or evaluate its performance. \[ \text{RMSE}_{\texttt{trn}} = \text{rmse}\left(\hat{f}_{\texttt{est}}, \mathcal{D}_{\texttt{est}}\right) = \sqrt{\frac{1}{n_{\texttt{est}}}\displaystyle\sum_{i \in {\texttt{est}}}^{}\left(y_i - \ hat{f}_{\texttt{est}}({x}_i)\right)^2} \] Validation RMSE: Evaluate a model fit to estimation data, using validation data. This metric will often be used to select a model. \[ \text{RMSE}_{\texttt{val}} = \text{rmse}\left(\hat{f}_{\texttt{est}}, \mathcal{D}_{\texttt{val}}\right) = \sqrt{\frac{1}{n_{\texttt{val}}}\displaystyle\sum_{i \in {\texttt{val}}}^{}\left(y_i - \ hat{f}_{\texttt{est}}({x}_i)\right)^2} \] Test RMSE: Evaluate a model fit to training data, using test data. This metric will be used to quantify the error of a chosen model. \[ \text{RMSE}_{\texttt{tst}} = \text{rmse}\left(\hat{f}_{\texttt{trn}}, \mathcal{D}_{\texttt{tst}}\right) = \sqrt{\frac{1}{n_{\texttt{tst}}}\displaystyle\sum_{i \in {\texttt{tst}}}^{}\left(y_i - \ hat{f}_{\texttt{trn}}({x}_i)\right)^2} \] For the rest of this chapter, we will largely ignore train error. It’s a bit confusing, since it doesn’t use the full training data! However, think of training error this way: training error evaluates how well a model performs on the data used to fit the model. (This is the general concept behind “training error.” Others might simply call the “estimation” set the training set. We use “estimation” so that we can reserve “train” for the full training dataset, not just the subset use to initially fit the model.) Let’s return to the sim_mlr_data data and apply these splits and metrics to this data. # test-train split mlr_trn_idx = sample(nrow(sim_mlr_data), size = 0.8 * nrow(sim_mlr_data)) mlr_trn = sim_mlr_data[mlr_trn_idx, ] mlr_tst = sim_mlr_data[-mlr_trn_idx, ] Here we randomly select 80% of the rows of the full data, and store these indices as mlr_trn_idx. We then create the mlr_trn and mlr_tst datasets by either selecting or anti-selecting these rows from the original dataset. # estimation-validation split mlr_est_idx = sample(nrow(mlr_trn), size = 0.8 * nrow(mlr_trn)) mlr_est = mlr_trn[mlr_est_idx, ] mlr_val = mlr_trn[-mlr_est_idx, ] We then repeat the process from above within the train data. Now, let’s compare mod_3 and mod_4. To do so, we first fit both models to the estimation data. mod_3_est = lm(y ~ ., data = mlr_est) mod_4_est = lm(y ~ x1 + I(sin(x2)) + I(x3 ^ 3) + x4, data = mlr_est) We then calculate the validation error for both. Because we will do it so often, we go ahead and write a function to calculate RMSE, given vectors of the actual values (from the data used to evaluate) and the predictions from the model. # calculate validation RMSE, model 3 calc_rmse(actual = mlr_val$y, predicted = predict(mod_3_est, mlr_val)) ## [1] 0.5788282 # calculate validation RMSE, model 4 calc_rmse(actual = mlr_val$y, predicted = predict(mod_4_est, mlr_val)) ## [1] 0.5452852 Here we see that mod_4_est achieves a lower validation error, so we move forward with this model.^43 We then refit to the full train data, then evaluate on test. # calculate test RMSE, model 4 calc_rmse(actual = mlr_tst$y, predicted = predict(mod_4_trn, mlr_tst)) ## [1] 0.538057 We ignore the validation metrics. (We already used them for selecting a model.) This test RMSE is our estimate of how well our selected model will perform on unseen data, on average, in a squared error sense. Note that for selecting a model there is no difference between MSE and RMSE, but for the sake of understanding, RMSE has preferential units, the same units as the response variables. (Whereas MSE has units squared.) We will always report RMSE. 2.9.1 Graphical Evaluation In addition to numeric evaluations, we can evaluate a regression model graphically, in particular with a predicted versus actual plot. x = mlr_tst$y, y = predict(mod_4_trn, mlr_tst), pch = 20, col = "darkgrey", xlim = c(-1, 10), ylim = c(-1, 10), main = "Predicted vs Actual, Model 4, Test Data", xlab = "Actual", ylab = "Predicted" abline(a = 0, b = 1, lwd = 2) The closer to the line the better. Also, the less of a pattern the better. In other words, this plot will help diagnose if our model is making similar sized errors for all predictions, or if there are systematic differences. It can also help identify large errors. Sometimes, errors can be on average small, but include some huge errors. In some settings, this may be extremely undesirable. This might get you thinking about “checking the assumptions” of a linear model. Assessing things like: normality, constant variance, etc. Note that while these are nice things to have, we aren’t really concerned with these things. If we care how well our model predicts, then we will directly evaluate how well it predicts. Least squares is least squares. It minimizes errors. It doesn’t care about model assumptions. 2.10 Example: “Simple” Simulated Data Let’s return to our initial dataset with a single feature \(x\). This time we’ll generate more data, and then split it. # define full data generating process gen_slr_data = function(sample_size = 100, mu) { x = runif(n = sample_size, min = -1, max = 1) y = mu(x) + rnorm(n = sample_size) tibble(x, y) # simulate entire dataset sim_slr_data = gen_slr_data(sample_size = 100, mu = cubic_mean) # test-train split slr_trn_idx = sample(nrow(sim_slr_data), size = 0.8 * nrow(sim_slr_data)) slr_trn = sim_slr_data[slr_trn_idx, ] slr_tst = sim_slr_data[-slr_trn_idx, ] # estimation-validation split slr_est_idx = sample(nrow(slr_trn), size = 0.8 * nrow(slr_trn)) slr_est = slr_trn[slr_est_idx, ] slr_val = slr_trn[-slr_est_idx, ] ## # A tibble: 10 x 2 ## x y ## <dbl> <dbl> ## 1 0.573 -1.18 ## 2 0.807 0.576 ## 3 0.272 -0.973 ## 4 -0.813 -1.78 ## 5 -0.161 0.833 ## 6 0.736 1.07 ## 7 -0.242 2.97 ## 8 0.520 -1.64 ## 9 -0.664 0.269 ## 10 -0.777 -2.02 This time let’s evaluate nine different models. Polynomial models from degree 1 to 9. We fit each model to the estimation data, and store the results in a list. poly_mod_est_list = list( poly_mod_1_est = lm(y ~ poly(x, degree = 1), data = slr_est), poly_mod_2_est = lm(y ~ poly(x, degree = 2), data = slr_est), poly_mod_3_est = lm(y ~ poly(x, degree = 3), data = slr_est), poly_mod_4_est = lm(y ~ poly(x, degree = 4), data = slr_est), poly_mod_5_est = lm(y ~ poly(x, degree = 5), data = slr_est), poly_mod_6_est = lm(y ~ poly(x, degree = 6), data = slr_est), poly_mod_7_est = lm(y ~ poly(x, degree = 7), data = slr_est), poly_mod_8_est = lm(y ~ poly(x, degree = 8), data = slr_est), poly_mod_9_est = lm(y ~ poly(x, degree = 9), data = slr_est) So, for example, to access the third model, we would use ## Call: ## lm(formula = y ~ poly(x, degree = 3), data = slr_est) ## Coefficients: ## (Intercept) poly(x, degree = 3)1 poly(x, degree = 3)2 ## -0.2058 5.3030 -7.4306 ## poly(x, degree = 3)3 ## 6.7638 But let’s back up. That code was terrible to write. Too much repeated code.^44 First, we see that we are repeatedly fitting models where the only differences is the degree of the polynomial. Let’s write a function that takes as input the degree of the polynomial, and then fits the model with a polynomial of that degree, to the estimation data.^45 Now, we just need to go about the business of “repeating” this process for d from 1 to 9. Your first instinct might be a for loop, but fight that instinct. This accomplishes the same task, but is much cleaner! ## Call: ## lm(formula = y ~ poly(x, degree = d), data = slr_est) ## Coefficients: ## (Intercept) poly(x, degree = d)1 poly(x, degree = d)2 ## -0.2058 5.3030 -7.4306 ## poly(x, degree = d)3 ## 6.7638 We’ll use the various *apply() functions throughout this text. A bit more on them later. We also may quickly introduce an alternative system, which is the use of the map() function (and its associated functions) from the purrr package. # make predictions on the estimation data with each model poly_mod_est_pred = lapply(poly_mod_est_list, predict, slr_est) # make predictions on the validation data with each model poly_mod_val_pred = lapply(poly_mod_est_list, predict, slr_val) If instead we wanted to return a numeric vector, we would use, sapply(). Let’s use this to calculate train and validation RMSE. # calculate train RMSE slr_est_rmse = sapply(poly_mod_est_pred, calc_rmse, actual = slr_est$y) # calculate validation RMSE slr_val_rmse = sapply(poly_mod_val_pred, calc_rmse, actual = slr_val$y) ## [1] 1.5748180 1.2717458 0.9500069 0.9480786 0.9302359 0.9187948 0.9151668 ## [8] 0.9120942 0.9117093 ## [1] 1.6584930 1.2791685 0.9574010 0.9729928 1.0104449 1.0505615 1.0617693 ## [8] 1.0953461 1.0968283 Note that training error goes down^46 as degree goes up. Validation error goes down, then starts creeping up. This is a pattern that we’ll keep an eye out for. Later, we will explain this phenomenon. ## [1] 3 The model with polynomial degree 3 has the lowest validation error^47, so we move forward with this model. We re-fit to the full train dataset, then evaluate on the test set one last time. ## [1] 0.7198306 Note: There are hints here that this process is a bit unstable. See if you can figure out why. Hint: See what happens when you change the seed to generate or split the data. We’ll return to this issue when we introduce cross-validation, but for now, we’ll pretend we didn’t notice. We’ll round out this chapter with three “real” data examples. 2.11 Example: Diamonds Data For this example, we use (a subset of) the diamonds data from the ggplot2 package. # load (subset of) data dmnd = ggplot2::diamonds[sample(nrow(ggplot2::diamonds), size = 5000), ] # data prep dmnd = dmnd %>% mutate(cut = factor(cut, ordered = FALSE), color = factor(color, ordered = FALSE), clarity = factor(clarity, ordered = FALSE)) %>% select(-price, everything()) # test-train split dmnd_trn_idx = sample(nrow(dmnd), size = 0.8 * nrow(dmnd)) dmnd_trn = dmnd[dmnd_trn_idx, ] dmnd_tst = dmnd[-dmnd_trn_idx, ] # estimation-validation split dmnd_est_idx = sample(nrow(dmnd_trn), size = 0.8 * nrow(dmnd_trn)) dmnd_est = dmnd_trn[dmnd_est_idx, ] dmnd_val = dmnd_trn[-dmnd_est_idx, ] The code above loads the data, then performs a test-train split, then additionally an estimation-validation split. We then look at the train data. That is we do not even look at the test data. ## # A tibble: 10 x 10 ## carat cut color clarity depth table x y z price ## <dbl> <fct> <fct> <fct> <dbl> <dbl> <dbl> <dbl> <dbl> <int> ## 1 0.5 Premium H SI1 59 59 5.22 5.18 3.07 1156 ## 2 1.01 Ideal G SI2 63.2 57 6.33 6.28 3.99 4038 ## 3 0.62 Very Good D SI1 61.3 58 5.47 5.49 3.36 1949 ## 4 0.41 Ideal D VS2 62.4 54 4.78 4.74 2.97 1076 ## 5 0.31 Ideal G IF 61.6 54 4.36 4.4 2.7 853 ## 6 1.08 Ideal I SI1 62.6 53.9 6.51 6.56 4.09 5049 ## 7 0.52 Very Good G VS2 62.4 60 5.14 5.18 3.22 1423 ## 8 1.01 Premium F SI2 60.9 60 6.45 6.42 3.91 3297 ## 9 0.57 Ideal H VS1 61.7 54 5.33 5.36 3.3 1554 ## 10 0.34 Ideal H VS2 62.5 54 4.54 4.49 2.82 689 Our goal here will be to build a model to predict the price of a diamond given it’s characteristics. Let’s create a few EDA plots. Note that these plots do not contain the test data. If they did, we would be using the test data to influence model building and selection, a big no-no. Let’s consider four possible models, each of which we fit to the estimation data. dmnd_mod_list = list( dmnd_mod_1_est = lm(price ~ carat, data = dmnd_est), dmnd_mod_2_est = lm(price ~ carat + x + y + z, data = dmnd_est), dmnd_mod_3_est = lm(price ~ poly(carat, degree = 2) + x + y + z, data = dmnd_est), dmnd_mod_4_est = lm(price ~ poly(carat, degree = 2) + . - carat, data = dmnd_est) Now, let’s calculate the validation RMSE of each. dmnd_mod_val_pred = lapply(dmnd_mod_list, predict, dmnd_val) sapply(dmnd_mod_val_pred, calc_rmse, actual = dmnd_val$price) ## dmnd_mod_1_est dmnd_mod_2_est dmnd_mod_3_est dmnd_mod_4_est ## 1583.558 1517.080 1634.396 1350.659 It looks like model dmnd_mod_4_est achieves the lowest validation error. We re-fit this model, then report the test RMSE. dmnd_mod_4_trn = lm(price ~ poly(carat, degree = 2) + . - carat, data = dmnd_trn) calc_rmse(actual = dmnd_tst$price, predicted = predict(dmnd_mod_4_trn, dmnd_tst)) ## [1] 1094.916 So, on average, this model is “wrong” by about $1000 dollars. However, less-so when it is a low cost diamond, more so with high priced diamonds, as we can see in the plot below. x = dmnd_tst$price, y = predict(dmnd_mod_4_trn, dmnd_tst), pch = 20, col = "darkgrey", xlim = c(0, 25000), ylim = c(0, 25000), main = "Diamonds: Predicted vs Actual, Model 4, Test Data", xlab = "Actual", ylab = "Predicted" abline(a = 0, b = 1, lwd = 2) Some things to consider: • Could you use the predicted versus actual plot to assist in selecting a model with the validation data? • Note that the model we have chosen is not necessarily the “best” model. It is simply the model with the lowest validation RMSE. This is currently a very simplistic analysis. • Does this plot suggest any issues with this model? (Hint: Note the range of predicted values.) • Can you improve this model? Would a log transform of price help? 2.12 Example: Credit Card Data Suppose you work for a small local bank, perhaps a credit union, that has a credit card product offering. For years, you relied on credit agencies to provide a rating of your customer’s credit, however, this costs your bank money. One day, you realize that it might be possible to reverse engineer your customers’ (and thus potential customers) credit rating based on the credit ratings that you have already purchased, as well as the demographic and credit card information that you already have, such as age, education level, credit limit, etc.^48 So long as you can estimate customers’ credit ratings with a reasonable error, you could stop buying the ratings from an outside agency. Effectively, you will have created your own rating. To perform this analysis, we will use the Credit data form the ISLR package. Note: this is not real data. It has been simulated. We remove the ID variable as it should have no predictive power. We also move the Rating variable to the last column with a clever dplyr trick. This is in no way necessary, but is useful in creating some plots. # test-train split crdt_trn_idx = sample(nrow(crdt), size = 0.8 * nrow(crdt)) crdt_trn = crdt[crdt_trn_idx, ] crdt_tst = crdt[-crdt_trn_idx, ] # estimation-validation split crdt_est_idx = sample(nrow(crdt_trn), size = 0.8 * nrow(crdt_trn)) crdt_est = crdt_trn[crdt_est_idx, ] crdt_val = crdt_trn[-crdt_est_idx, ] After train-test and estimation-validation splitting the data, we look at the train data. ## # A tibble: 10 x 11 ## Income Limit Cards Age Education Gender Student Married Ethnicity Balance ## <dbl> <int> <int> <int> <int> <fct> <fct> <fct> <fct> <int> ## 1 183. 13913 4 98 17 " Male" No Yes Caucasian 1999 ## 2 35.7 2880 2 35 15 " Male" No No African A… 0 ## 3 123. 8376 2 89 17 " Male" Yes No African A… 1259 ## 4 20.8 2672 1 70 18 "Femal… No No African A… 0 ## 5 39.1 5565 4 48 18 "Femal… No Yes Caucasian 772 ## 6 36.5 3806 2 52 13 " Male" No No African A… 188 ## 7 45.1 3762 3 80 8 " Male" No Yes Caucasian 70 ## 8 43.5 2906 4 69 11 " Male" No No Caucasian 0 ## 9 23.1 3476 2 50 15 "Femal… No No Caucasian 209 ## 10 53.2 4943 2 46 16 "Femal… No Yes Asian 382 ## # … with 1 more variable: Rating <int> To get a better “look” at the data, consider running the following: • skimr::skim(crdt_trn) • str(crdt_trn) • View(crdt_trn) We also create a pairs plot. We immediately notice three variables that have a strong correlation with Rating: Income, Limit, and Balance. Based on this, we evaluate five candidate models.^49 crdt_mod_list = list( crdt_mod_0_est = lm(Rating ~ 1, data = crdt_est), crdt_mod_1_est = lm(Rating ~ Limit, data = crdt_est), crdt_mod_2_est = lm(Rating ~ Limit + Income + Balance, data = crdt_est), crdt_mod_3_est = lm(Rating ~ ., data = crdt_est), crdt_mod_4_est = step(lm(Rating ~ . ^ 2, data = crdt_est), trace = FALSE) crdt_mod_val_pred = lapply(crdt_mod_list, predict, crdt_val) sapply(crdt_mod_val_pred, calc_rmse, actual = crdt_val$Rating) ## crdt_mod_0_est crdt_mod_1_est crdt_mod_2_est crdt_mod_3_est crdt_mod_4_est ## 140.080591 12.244099 12.333767 9.890607 11.575484 From these results, it appears that the additive model, including all terms performs best. We move forward with this model. final_credit_model = lm(Rating ~ ., data = crdt_trn) sqrt(mean((predict(final_credit_model, crdt_tst) - crdt_tst$Rating) ^ 2)) ## [1] 10.47727 It seems that on average, this model errors by about 10 credit points. ## [1] 93 982 ## [1] 157.5897 Given the range of possible ratings, this seem pretty good! What do you think? x = crdt_tst$Rating, y = predict(final_credit_model, crdt_tst), pch = 20, col = "darkgrey", main = "Credit: Predicted vs Actual, Test Data", xlab = "Actual", ylab = "Predicted" abline(a = 0, b = 1, lwd = 2) The predicted versus actual plot almost looks too good to be true! Wow!^50 In summary, if this data were real, we might have an interesting result! Do note, that both this example and the previous should not be considered data analyses, but instead, examples that reinforce how to use the validation and test sets. As part of a true analysis, we will need to be much more careful about many of our decision. After putting down some additional foundation, we’ll move towards these ideas in this text. One possible critique of what we’ve done here: It’s possible we should not have used the Limit variable. Why? Because (and we’re guessing here, as this is not real data) it’s possible that we would have to acquire this information along with the Rating information. Let’s assume this is true. We need to first make a distinction between train time and test time. • Train time: The portion of the ML process where you are creating models. (Anything you do before make predictions on truly new data.) • Test time: The portion of the ML process where you are making predictions in the real world. The issue here is that the models you create during training should only include features which you will have access to at test time. Otherwise, you won’t actually be able to make a prediction! 31. Hint: If you’re wearing a wrist watch, you probably have the need to be at certain locations at certain times. That is, you’re probably more likely to be in a hurry!↩︎ 32. Fun fact: RA Fisher, the most famous statistician, did not believe that smoking caused cancer. It’s actually a part of a larger fasinating story.↩︎ 33. We will return later to discuss supervised learning in general after getting through some specifics of regression and classification.↩︎ 34. Features are also called covariates or predictors but we find the “predictors” nomenclature to be problematic when discussing prediction tasks. We will attempt to consistently use “features.”↩︎ 35. Note that using a different loss function will result in a different regression function. For example, if we used absolute loss, we would then have a regression function that is the conditional median. This particular regression function is related to Quantile Regression. Perhaps more on this later.↩︎ 36. That’s convenient isn’t it?↩︎ 37. Spoiler: Don’t fit the model to all the available data. Pretend the data you didn’t use is “new” when you evaluate models.↩︎ 38. For some additional reading on R’s formula syntax, the following two blog posts by Max Kuhn are good reads: The R Formula Method: The Good Parts and The R Formula Method: The Bad Parts.↩︎ 39. However, we will be assuming that this data is generated using the same process as the original data. It is important to keep this in mind in practice.↩︎ 40. We’re ignoring some nuance by adhering to these rules, but unless you have a very good reason to break them, it’s best to follow them.↩︎ 41. The hope is that these terms will make the transition to using cross-validation much easier.↩︎ 42. There is a trade-off here. More data for estimation gives better estimates. More data for validation gives a better sense of errors on new data.↩︎ 43. We note that there isn’t actually a huge difference between these two, an idea we will return to later.↩︎ 44. This function could be made more general if also supplied an argument for data, but we’re keeping things simple for now.↩︎ 45. More specifically, it never increases.↩︎ 46. This shouldn’t be too surprising given the way the data was generated!↩︎ 47. We make no comment on the legality or ethics of this idea. Consider these before using at your own risk.↩︎ 48. You might be wondering, aren’t there about a million different candidate models we could consider if we included things like engineered variables and interactions? Yup! Because of this, we’ll look at some variable selection techniques, as well as some algorithms that avoid this issue to a certain extent.↩︎ 49. Perhaps not surprising since this data was simulated.↩︎
{"url":"https://statisticallearning.org/linear-regression.html","timestamp":"2024-11-12T12:04:07Z","content_type":"text/html","content_length":"150197","record_id":"<urn:uuid:84f96896-6cdf-4adb-b00d-39c695826f5e>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00380.warc.gz"}
Supporting Math MTSS through SpringMath: FAQs Supporting Math MTSS through SpringMath: FAQs Getting to know SpringMath FAQs | SpringMath MTSS Guide Evidence-base FAQs Is SpringMath an evidence-based tool? Yes. Two randomized controlled trials studying the efficacy of SpringMath were submitted to the National Center on Intensive Intervention (NCII) for rating on their academic interventions tools chart. NCII technical experts evaluate studies of academic intervention programs for quality of design and results, quality of other indicators, intensity, and additional research. The studies of SpringMath earned the highest possible rating for one study, and one point below the highest possible rating for the second study submitted. Additionally, SpringMath is listed as an evidence-based intervention by the University of Missouri Evidence-Based Intervention Network. SpringMath is also listed as an evidence-based resource by the Arizona Department of Education. There are different levels of evidence, and some sources set forth by commercial vendors as “evidence” would not be considered evidence by our SpringMath team. We prioritize peer-reviewed published research studies in top-tier academic journals using rigorous experimental designs and a program of research that addresses novel questions and conducts systematic replication. SpringMath meets the most rigorous standards for evidence, including a transparent theory of change model for which each claim has been evaluated. The assessments and decision rules have met the most rigorous standards for reliability and decision accuracy. Intervention efficacy has been evaluated in three randomized, controlled trials, one of which was conducted by an independent research team not associated with the author or publisher of SpringMath. Additionally, the first two RCTs were published in three articles in top-tier, peer-reviewed academic journals. The third RCT was completed in 2021 and has not yet been published, but SpringMath earned the strongest effect size on math achievement among intervention tools included in that study. Additional published research has demonstrated an extremely low cost- to- benefit ratio (or stated another way, a strong return on investment), reduced risk, systemic learning gains, and closure of opportunity (or equity) gaps via SpringMath. Drs. VanDerHeyden and Muyskens maintain an ongoing program of research aimed at continually evaluating and improving SpringMath effects in schools. COVID provided a natural experiment for how districts struggling with math achievement can accelerate learning. In districts using SpringMath before COVID closures, we have seen rapid recovery of learning loss. For example, these data are from all second graders in a district in Michigan. One can see that scores were lower at winter screening following COVID school closures in the preceding spring. But one can also see that post-classwide-math-intervention, percent proficient was comparable to the years preceding COVID, which is powerful evidence of learning loss recovery. Screening occurs three times per year. The first two screenings (fall and winter) are primarily the basis for identifying classes in need of classwide intervention and/or students in need of individual intervention. The spring screening is primarily used for program evaluation (along with the fall and winter screening data). Generally, students who require intervention are identified by the fall and winter screenings, and the only “new” cases detected at spring screening are students who did not participate in previous screenings and did not participate in classwide intervention. For classes routed into classwide intervention, the classwide intervention data are used to identify students for more intensive intervention. The SpringMath screening measures are intentionally efficient. They are group administered and take about 15 minutes each season. If your class is recommended for classwide intervention, 15-20 minutes per day is sufficient to complete classwide intervention, and the recommended dosage is a minimum of four days per week. Day 5 can fall on any day of the week. Individual interventions (which can be recommended from classwide intervention or directly from screening when there is not a classwide learning problem), require 15-20 minutes per day. If students are recommended for individual intervention, recommendations will be made for small groups from week to week, which allows for a single investment of 20 minutes to deliver intervention to several students at the same time. We recommend that teachers and data team members (e.g., principals, RTI coaches, math coaches) log in to the SpringMath coach dashboard each week to identify what actions are underway and where assistance is needed to improve results. The coach dashboard makes specific recommendations about which teachers require assistance; summarizes learning gains at the grade, teacher, and student level; allows for coaches to share data with parents; and provides an ideal tool to facilitate data team meetings in the school. One of the lessons from MTSS research is that supplemental interventions can be much shorter than 45 minutes, but sessions need to occur more often than twice per week. In fact, exciting contemporary dosage research is finding in rigorous experimental studies that briefer, more frequent intervention sessions (i.e., micro-dosing) produce stronger and more enduring learning gains (Duhon et al., 2020; Schutte et al., 2015; Solomon et al., 2020). Codding et al. (2016) directly examined this question for SpringMath intervention in a randomized, controlled trial and found that given the same weekly investment of minutes allocated to intervention, the condition that offered the greatest number of sessions per week (with shorter sessions) produced about twice the effect of the twice weekly condition, which was about twice the effect of the weekly condition, which was no different from students who received no intervention (i.e., control). Given recent experimental findings related to dosage, it is becoming clear that daily dosing of intervention or at least four days per week is likely ideal, not just for SpringMath, but for all supplemental and intensive academic interventions, as measured by producing strong learning gains. That said, schools that are trying to implement within a schedule that has already been set can find ways to approximate more effective dosing schedules. For example, if students are available twice per week, teachers might try running two consecutive sessions each time. Teachers might find a way to add a third session during the week either via creative scheduling or working with the core classroom teacher to deliver classwide intervention to the entire class as part of the day’s fluency-building routine in the classroom. We recommend that schools use their planning for year 2 implementation to build schedules that are consistent with optimal dosing in MTSS, which means four or five days per week of shorter supplementation blocks (e.g., 20 minutes). This question is often a sign to our teams that the instructional expectations, sequencing, and pacing of math learning in a system may be out of sync with best practices. The SpringMath measures have been designed to reflect current learning expectations and when systems report that such content has not yet been taught, this is often a sign that the pacing of instruction in a system may be off track. Let’s answer this question in three parts: screening, classwide intervention, and individual intervention. For part one, let’s focus on screening. Screening skills have been selected for maximal detection of risk. These skills are those that are expected to have been recently taught, that students must be able to do accurately and independently to benefit from the upcoming grade-level instruction. High-leverage skills are emphasized, and mastery is not expected. Screening skills are rigorous, and we expect that more learning will occur with those skills. Our instructional target used at screening reflects successful skill acquisition. There is flexibility for schools to choose when to screen, and this allows some flexibility if you know you have not yet taught a skill that will be assessed but anticipate teaching that skill in the next week or two. It is fine to delay winter and spring screening for a few weeks into each screening window. Now for part two to this answer: Let’s focus on classwide intervention. There is a set sequence of skills unique to each grade level that classes must progress through. The sequence is intentionally designed to begin with below-grade-level foundation skills and to increase incrementally in difficulty such that mastery on skills that appear early in the sequence prepares students for mastery on the later skills. This sequencing is used because classwide math intervention is, by design, a fluency-building intervention. You might have noticed (or will notice) that when the median score in classwide intervention is below the instructional target, which we call the frustrational range, a whole-class acquisition lesson is provided so that the teacher can easily pause and reteach that skill before proceeding with the classwide fluency-building intervention. There is flexibility here in that teachers can enter classwide intervention at any time during the year (following fall or winter screening) and can enter scores daily until the class median is below mastery. When the class median is below mastery, that is the sweet spot within which to begin classwide math intervention for maximal efficacy and efficiency. Part three of this answer is about individual intervention. SpringMath screenings are the dropping-in points to intervention. Screening skills reflect rigorous, grade-level content. When a student is recommended for individual assessment and intervention, the assessment will begin from the first screening skill and sample back through incrementally prerequisite skills to find the skill gap or gaps that must be targeted to bring the student to mastery on the grade-level skill (which is the screening skill). There is a science to this process, and the skill that should be targeted is based on student assessment data. There is flexibility for teachers in that they might use the instructional calendars in Support as a model of pacing and a logical organizer to locate acquisition lessons, practice sheets, games, and word problems, which are available for all 145 skills in SpringMath. For convenience, we have also recommended booster lessons and activities in Support for teachers who wish to do more than just classwide math intervention. In other words, teachers can always do more, but it is sufficient to simply do what is recommended within the teacher and coach dashboards. SpringMath is a comprehensive multi-tiered system of support (MTSS). MTSS begins with universal screening to identify instructional need. These needs can range from an individual to a schoolwide level. If a curricular gap is identified, classwide intervention is the most efficient level at which to address these needs. Thus, all students need to be included in SpringMath. The screening data are used to summarize learning gains within and across classes and grades and to identify areas of need at the school level via automated program evaluation. These important actions of SpringMath and MTSS are not possible when all students are not included in the implementation. Many systems wonder if they can use an existing screening tool to identify students for intervention and then just use SpringMath to deliver intervention. This is a costly mistake because SpringMath continually harvests data to make more accurate decisions about who requires intervention and what kind of intervention they require based on the foundation of universal data collected three times per year. Classwide intervention is a very powerful tactic when it is needed. It is an inefficient tactic when it is not needed. With SpringMath, you are not buying classwide math intervention. Rather, you are buying a comprehensive MTSS system that seamlessly links instructional intensifications to assessment data using automated data interpretation and summary reports and guides your MTSS implementation step by step. All students must be included in the assessments at routine timepoints to verify core program effectiveness, to detect students who are lagging behind expectations in a highly sensitive way, and to guide effective reparation of learning gaps. When classwide intervention is not recommended, that is because it is not needed as a second screening gate, nor as a mechanism to reduce overall risk (because risk is low). Using the automated summary reports and data interpretation, SpringMath will then very accurately recommend students for small-group or individual intervention. The summary reports, including the coach and administrator dashboard, harvest data from all students to allow for progress monitoring of math learning at the system level. This is not possible when SpringMath is not used universally. It is also not possible to use other screening measures to determine who should get intervention in SpringMath because other measures are not as sensitive as those used in SpringMath. Further, SpringMath screening measures are the “dropping-in” points for diagnostic assessment to provide the right interventions for students who need them. Summary reports will show you the growth of all students across screening measures over the course of the school year and gains for students receiving tier 2 or 3 interventions relative to the performance of classroom peers on the same measures. Again, this is not possible when not all students are included in the assessments. SpringMath assessment and expert layered decision rules recommend — and then provide — precisely what is needed, ensuring the correct stewardship of limited resources and the best return on investment for the effort. If your class has already started classwide intervention based upon the results of a prior screening you will be allowed to continue classwide intervention if you choose, even if your class scores above the criterion for recommending classwide intervention on subsequent screenings. Teachers often express this concern. The short answer is “No, it’s not a waste of time for high-performing students to participate in classwide math intervention.” Classwide intervention requires only about 15-20 minutes per day, and children in all performance ranges tend to grow well. If many children score in the frustrational range, an acquisition lesson will be recommended (and provided) right in the classwide intervention teacher dashboard. Students in the instructional and mastery range show strong upward growth given well-implemented classwide intervention. When the median reaches mastery, the entire class is moved up to the next skill, which ensures that students who are already in the mastery range do not remain on that skill for very long. Additionally, for higher-performing students, classwide intervention is a useful cumulative review tactic that has been associated with long-term skill retention and robust, flexible skill use. Thus, it has been demonstrated that even higher-performing students grow well with classwide math intervention, both in SpringMath and in other peer-mediated learning systems. The decision rule to move children to the next skill is written such that most of your remaining students should be above the instructional target (meaning they have acquired the skill but have not yet reached the level of proficiency that forecasts generalized, flexible use of that skill). Because the skills often overlap within a grade level and across grade levels in increasing increments of difficulty, it is a low-stakes event to move students forward who are in the instructional range as they are likely to experience continued skill development support and to demonstrate growth and even future skill mastery on subsequent skills. In contrast, children who remain in the frustrational range when the class has moved on to a new skill are highly likely to be recommended for individual intervention. That decision occurs in the background and has a very high degree of accuracy, such that children who require individual intervention are not missed (we call this “sensitivity”). Thus, when SpringMath recommends that you move on to a new skill, you should feel confident moving your class on to the next skill. SpringMath continues to track individual student progress and will recommend intervention for any child who really requires additional support to succeed in grade-level content. That said, when considering student progress, users should pay particular attention to the metric on the dashboard called “Scores Increasing.” You should aim to move this value greater than 80% each week as it reflects the overall growth of all your students from week to week and reflects that even your lowest-performing student beat his or her score from the preceding week. If you have an additional 15-20 minutes to supplement students and/or a teacher or coach to help you differentiate during core instruction, you can provide additional instruction via acquisition lessons, games, and practice materials. These materials are available in the support portal and can be used with students who were most recently not at mastery when the class moved on to a subsequent skill. Again, working with students who did not attain mastery in this way is not technically necessary since SpringMath will recommend individual intervention and deliver the right protocol for you when it’s truly necessary for individual students. All students at times and especially students who experience mathematics difficulties may also have difficulties with attention, motivation, self-regulation, and working memory (Compton, Fuchs, Fuchs, Lambert, & Hamlett, 2012). Unfortunately, executive-function-focused interventions have not been shown to improve achievement. Yet, supporting executive-function difficulties within academic interventions can enable more successful academic intervention. For example, Burns, et al. (2019) found that intervention based on student assessment produced strong effects on multiplication learning gains that were independent of working memory, whereas simple flashcard practice did not. Thus, ideally, intensive interventions provide math instruction in ways that support self-regulation and reinforcement strategies, minimize cognitive load on working memory, minimize excessive language load by incorporating visual representations, and provide fluency practice (Fuchs, Fuchs, & Malone, 2018; Powell & Fuchs, 2015). In SpringMath, diagnostic assessment is used to verify mastery of prerequisite understandings and determine the need for acquisition support versus fluency-building support so that a student is placed into the correctly aligned intervention skill (task difficulty) and receives the aligned instructional support according to their needs. Concrete-representational-abstract sequencing is used within and across intervention protocols, and students are provided with multiple response formats (e.g., oral, written, completing statements and equations to make them true, finding and correcting errors). Tasks are broken down into manageable chunks and new understandings are introduced using skills students have already mastered. Fluency of prerequisite skills is verified and, if needed, developed to ease learning of more complex content. Weekly progress monitoring and data-driven intervention adjustments ensure that the intervention content and tactic remain well aligned with the student’s needs. Intervention scripts provide goals to students and emphasize growth and beating the last best score. Self-monitoring charts are used for classwide and individual intervention, and rewarding gains is recommended. Sessions are brief but occur daily for optimal growth (Codding et al., 2016). These features make the learning experience productive, engaging, and rewarding to the student. The coach dashboard tracks intervention use and effects throughout the school. When needed, SpringMath alerts the coach to visit the classroom for in-class coaching support to facilitate correct intervention to promote student learning gains and student motivation for continued growth. Several accommodations are permitted, including adjusting response formats, reading problems to students, and repeating assessments. (My students have processing disorders so they will never reach mastery if I don’t extend their time.) We understand the pressure teachers feel to move through content quickly, but moving students on to the next skill before they have attained mastery on the prerequisite skill will lessen their chances of mastering future content.In one district, in classes in which students moved on without having reached the mastery criterion, the probability of mastering any additional content plummeted to 35%. In classes that required students to reach mastery before moving on, probability of future skill mastery was 100%. Further, classes that took the time to make sure students attained mastery actually covered more content because they were able to reach mastery much more quickly on later skills. Thus, we strongly recommend that you make the investment to bring as many students to mastery as possible during classwide intervention. It is possible. Students will get there. And the investment will pay off for you with better learning and more content coverage in the long run. It is not true that students with processing disorders cannot attain instructional and mastery criteria. One or two low-performing students will not prevent your class from moving forward, and these one or two students would certainly be recommended for individual intervention, which arguably is the correct level of support for such students. Individual intervention is customized to a specific student’s learning needs via diagnostic assessment, and often, such students require acquisition and fluency building for prerequisite skills to enable success in grade-level content. In fact, we recommend that students who receive individual intervention be allowed to continue in classwide intervention if possible because at some point their skill gap will close sufficiently that they will derive greater benefit from classwide intervention. We recommend focusing on improving your Scores Increasing metric in your dashboard, which is an indicator of overall growth during classwide intervention. We have many resources in Support to help teachers attain upward growth in classwide intervention. Does SpringMath work well with special populations? (e.g., students receiving special education, students who are struggling in math, advanced students) Fortunately, two randomized, controlled trials were conducted that included students at risk for math difficulties and disabilities. Analyses were conducted to examine effects for students in intervention groups who met the criteria for being at risk for math difficulties (e.g., scoring 1 SD below the mean on last year’s state test, scoring 2 SDs below the mean on last year’s state test), and for students receiving special education services. In general, in populations with greater risk to start, when assigned randomly to SpringMath intervention, those students experience a much stronger risk reduction (because they had more risk to start). Cost-benefit analyses indicate that as risk increases, the return on investment in implementing SpringMath intervention actually improves (i.e., lower ICERs). The National Center for Intensive Intervention (NCII) requires reporting of intervention effects for students scoring below the 20th percentile on an external measure (i.e., a population considered to be at risk or in need of supplemental instruction). On the standard curriculum-based measures, effect sizes ranged from 0.56 to 1.11 for students scoring below the 20th percentile on the preceding year’s state test and 0.19 to 0.73 for students receiving special education services under any eligibility category. On the year-end test for grade 4 students, the effect size was 0.79 for students scoring below the 20th percentile on the preceding year’s test and 0.35 for students receiving special education services. For advanced students, we recommend using the support portal to access more challenging lessons, practice materials, games, and word problems to enrich students’ experiences during core math instruction or during the supplemental enrichment period. How can educators use SpringMath data to write IEP goals and objectives? Ideally, students receiving special education services would have routine access to the most intensive instruction available in a school setting. This ideal is not often (or even rarely) the reality. Most schools have to make decisions about how to deploy limited resources toward meeting the needs of all learners. Still, it is worth a team’s consideration when planning instructional calendars and daily schedules to think about the ideal of making the most intensive instruction available to the students with the greatest demonstrated needs. Systems can use SpringMath to deliver intensive intervention to students on IEPs daily in a highly efficient manner. Students on IEPs can participate in core instruction in the general education setting if that is their usual plan and can participate in classwide math intervention with their core instructional grouping. If they are identified as needing more intensified instruction, SpringMath will identify and recommend them for diagnostic assessment and intervention. If SpringMath does not recommend them, but you want to provide them with individual intervention anyway, anyone with coach-level access can schedule any student in the school for individual intervention by clicking on the dots next to the student’s name in the Students tab. Diagnostic assessment begins with grade-level screening skills (we call these the Goal skills) and samples back through incrementally prerequisite skills, assessing each skill systematically, to identify which skills should be targeted for intervention. We call the targeted skill the Intervention skill. Weekly assessment is conducted on the Intervention skill and the Goal skill. For IEP goals, we suggest instructional-range performance on current Goal skill scores and future Goal skill scores if the IEP crosses years. Logical objectives for IEPs include mastery of the Intervention skills. Drs. VanDerHeyden and Muyskens are always happy to advise systems on using SpringMath for IEP planning and implementation. What accommodations are allowed? Several accommodations are permitted, including adjusting response formats (students can respond orally, in writing, or in some cases, by selecting responses). Problems can be read aloud to students. Instructions can be provided to students in their native languages. Teachers can work one on one to deliver assessment and/or intervention. Assessments can be repeated under optimal conditions (using rewards, quiet space, warm-up activities). Timings can be adjusted and prorated to obtain the needed answers correct per unit of time equivalent scores. For example, a child can work for a minute and the score can be doubled to estimate what the performance would have been over two minutes. If a child seems anxious about being stopped after two minutes, the teacher can mark the stopping point at two minutes and allow the child to keep working, reporting the score as the number of answers correct completed at the two-minute mark. Assessment-related FAQs SpringMath uses a novel approach to math screening that enables more sensitive detection of learning needs and more sensitive monitoring of learning gains. The measures work in tandem with the instructional protocols so that students are matched with the right interventions when they are needed. In other words, the SpringMath measures are the “dropping in” points to intervention and must be administered even if you are using another screening in your school. Technical adequacy data reported and evaluated on the Academic Screening Tools’ Chart at the National Center for Intensive Intervention indicate that SpringMath screening measures are technically strong and suitable for universal screening in math. Additionally, SpringMath is the only tool on the market that uses classwide intervention as a second screening gate, which is the recommended approach to screening in MTSS when many students in a system are experiencing academic risk. If your school uses another math screening already, we suggest using the second screening as an external measure of the effectiveness of SpringMath in your system. These data can be uploaded into the program evaluation feature, and we will report dosage and effects on SpringMath measures and the external measures of your choice. First, SpringMath is an in-tandem assessment and intervention system that improves math achievement for all students. To accomplish the goal of improving achievement for all students, the assessments used in SpringMath are necessary to drive the instructional recommendations and actions. As detailed in the previous question, it is not possible to substitute other measures to make the instructional decisions within SpringMath. SpringMath measures have specific cut scores that are specific to each measure and the measures are used in multiple ways to drive implementation. For example, the measures are used to feed metrics in coach dashboards, which reflect intervention dosage and drive implementation support. In effect, every assessment datapoint aggregates up from the single student to power all kinds of important MTSS decisions. Second, SpringMath measures are currently unique in the marketplace. No other system of mastery measurement is currently available. Mastery measurement is a necessary feature in math MTSS because general outcome measures do not have sufficient sensitivity to power the MTSS decisions (screening, progress monitoring) (VanDerHeyden, Burns, Peltier, & Codding, 2022). So given this rationale, it would certainly be possible for schools to undertake building their own measures. After all, that is where Dr. VanDerHeyden began this work in 2002: building measures that were necessary but unavailable at that time. What followed was 20 years of research and development that created the 145 reliable, valid, sensitive, and technically equivalent generated measures with tested cut scores and decision rules of SpringMath. Building measures is technically challenging and requires measurement expertise. Building measures also requires a tremendous investment of time. Building the SpringMath measure generator took several years. Before the measures were used with children, our research team generated and tested over 45,000 problems. SpringMath measures were built based on 20 years of research led by Dr. VanDerHeyden and that research continues today via our own programs of research and those led by independent researchers in school psychology and education. Importantly, the program of research to date demonstrates not just the technical adequacy of the measures, but their value within MTSS to improve math achievement for all students and close opportunity gaps. This type of evidence is called “consequential validity” and it is especially lacking among many assessment tools. When there is not research to show that the scores are used to improve outcomes, then the assessment might just invite problem admiration and reflect wasted resources (e.g., instructional minutes) because the use of the assessment did not bring benefit to students. It is an important ethical mandate of school psychology and educational assessment in general that children be subjected to assessments that meet certain technical expectations (AERA/APA/NCME, 2014) such that the scores can be considered reliable, valid, equitable, and useful to the student. Thus, it is a very questionable practice to use measures that have not been subjected to technical validation. Few systems have the bandwidth to develop and then sufficiently evaluate measures for use within their systems. And if they did, the effort required to do so is not free. It would involve a significant investment of resources that likely would be greater than simply adopting a research-based set of measures. Teams may decide that SpringMath is not a good fit for their district, of course, but teams should follow the ethical tenet of educational assessment in adopting measures that meet conventional technical adequacy standards. Fortunately, implementers can identify measures that suit their needs by consulting NCII Tools Charts (www.intensiveintervention.org). Unlike other tools, SpringMath emphasizes targets based on functional benchmark criteria. Many tools use simple norm-referenced cut scores to determine risk. For example, students below the 20th percentile are considered at risk and in need of supplemental intervention. Students below the 40th percentile might be considered at “some risk.” Such criteria are highly error prone. First, it is very possible to be at the 50th percentile and still be at risk, and conversely, it is possible to be at the 20th percentile and not be at risk. Simple norm-referenced criteria ignore local base rates. Local base rates reflect the amount of risk in a given classroom (which is a more direct reflection of the quality of instruction and rigor of expectations received in that setting). Second, many schools do not have normal distributions of performance, and norm-referenced criteria will be highly error prone in such contexts. Third, percentile criteria used by commercial vendors are based upon the convenience samples available to them from their current user base. Fourth, simple percentile criteria are arbitrary. Functionally, what makes the child at risk at the 20th percentile but not at risk if scoring at the 21st percentile? Is one percentile change equal to one less meaningful unit of risk? How can such risk be explained to parents? In contrast, benchmark criteria are meaningful and easy to explain to parents. SpringMath uses criteria that indicate meaningful outcomes like an absence of errors (or stated another way, a high degree of accuracy), likelihood of skill retention, and flexible skill use (i.e., faster learning of more complex, related content and application or generalized performance). We recommend (and SpringMath uses) a dichotomous decision — at risk or not at risk -- because having a third layer of “some risk” is difficult for systems to act on. Specifically, at screening, we require that 50% of students in a class score at or above the “instructional target.” The instructional target is the threshold for the instructional range, indicating that a skill has been acquired and students are independently accurate. During intervention, we require students reach the mastery target. Mastery is the level of performance that is most strongly associated with skill retention, application/generalization, and flexibility. Mastery-level proficiency is most strongly associated with scoring in the proficient range on more comprehensive and distal measures like year-end tests. The SpringMath targets are designed to maximize efficiency and accuracy of decision making at each decision point in MTSS. For example, requiring the median to reach mastery before the class advances to the next skill during classwide intervention increases the number of students who are at least instructional. Thus, students who remain in the frustrational range, when the class median has reached mastery, can readily be identified and recommended for individual assessment and intervention. SpringMath targets are based on decades of academic screening research and are routinely evaluated to ensure continued accuracy and efficiency. In math, some skills are more complex than others and complexity is highly reflected by the digit correct responses required to correctly solve a problem. In math, digits are usually counted for each operational step and for each place value position. One simple way to think about digits correctly is that it is similar to giving partial credit. But digits correct can be very complicated to score, especially given more challenging skills with multiple solution paths (e.g., simplification of fractions). Thus, SpringMath has identified the answers correct equivalent scores that reflect instructional and mastery performance for all 145 skills K-12. Easier skills (which also appear at lower grade levels) require more answers correct because each answer may require only one- to two-digit responses, whereas more challenging skills (which appear at the higher grades) may require only a few answers correct because each answer may be worth 25-30 digits correct. We calculate Rate of Improvement (ROI) using ordinary least squares regression, which is the conventional way to compute ROI in curriculum-based measurement. In effect, using each data point in a series, it finds the linear trend (line) that is nearest to all the data points or represents the “line of best fit.” Typically, ROI is reported as the change in number of answers correct per unit of time (one, two, or four minutes) per week. If you were graphing this, you’d graph each score as the date (on the x-axis) by the progress monitoring score (on the y-axis). So basically, ROI is how much the score went up each week. So, on a two-minute measure, if at week one the score was 30, at week two the score was 35, and at week three the score was 40, then the ROI would be five answers correct per two minutes per week. The ROI calculation used in SpringMath is consistent with recommended practices in RTI. Yet, ROI can never be interpreted in a vacuum. ROI depends on many things, including the efficacy of core instruction, the intensity and integrity of intervention, the scope of the intervention target, the sensitivity of the outcome measures, and the agility (and frequency) with which the intervention is adjusted to maintain alignment and intensity of the intervention. Determining what ROI is considered adequate raises other problems. Aimlines draw a line from starting performance to desired ending performance over some selected period of time, and ROI that is similar to the aimline is considered sufficient. Aimlines are easy to understand and highly practical; however, they are greatly influenced by the starting score (baseline) and the amount of time allocated to the intervention trial, which is often arbitrary. Spring Math helps by automating decision making at each stage of the RTI process to limit potential “confounds” or errors that can threaten the validity of the ultimate RTI decision. SpringMath also emphasizes comparison between student ROIs and class ROIs during classwide intervention where students are receiving the same intervention layered on the same core instructional program with the same degree of integrity. SpringMath reports ROI for goal skills during individual intervention (rather than target skills) because goal skills will have the greatest number of available data points from which to estimate ROI. Teams can use the ROI metric to characterize student progress across phases of the intervention. For a more detailed discussion and case study in the use of an RTI identification approach, see https://www.pattan.net/multi-tiered-system-of-support/response-to-intervention-rti/rti-sld-determination. Teachers and parents worry about math anxiety, and some math education experts caution against tactics used in math class, such as timed tasks and tests, which might theoretically cause or worsen anxiety (Boaler, 2012). Such assertions are theoretical but seem to really resonate with some teachers and parents and get repeated as “fact” in the media. Yet, evidence does not support that people are naturally anxious or not anxious in the context of math assessment and instruction (Hart & Ganley, 2019). Rather, math anxiety is bi-directionally connected to skill proficiency. In other words, students with weaker skills report more anxiety, and students who report more anxiety have weaker skills. Gunderson, Park, Maloney, Bellock, and Levine (2018) found that weak skill reliably preceded anxiety, and anxiety further contributed to weak skill development. They found that anxiety could be attenuated by two strategies: improving skill proficiency (this cannot be done by avoiding challenging math work and timed assessment) and promoting a growth mindset (as opposed to a fixed ability mindset) using specific language and instructional arrangements. Namkung, Peng, & Lin (2019) conducted a meta-analysis and found a negative correlation between anxiety and math performance (r = -.34). This relationship was stronger when the math task involved multiple steps and when students believed that their scores would impact their grades. Interestingly, Hart and Ganley (2019) found the same pattern with adults. Self-reported adult math anxiety was negatively correlated with fluent addition, subtraction, multiplication, and division performance (r = - .25 to - .27) and probability knowledge (r = - .31 to - .34). Self-reported test-taking anxiety was negatively correlated with math skill fluency and probability knowledge, too (r = -.22 to - .26). One must wonder with these emerging data whether math anxiety has been oversimplified in the press. In any case, avoiding challenging math tasks is not a wise response when teachers worry about math anxiety because that will only magnify skill deficits, which in turn will worsen anxiety. Finally, some teachers believe that timed activities are especially anxiety-causing for students. Existing research that included school-age participants does not support the idea that timed assessment causes anxiety, no matter what thought leaders have speculated (Grays, Rhymer, & Swartzmiller, 2017; Tsui & Mazzocco, 2006). Fortunately, there is much that teachers can do to help students engage with challenging math content and limit possible anxiety in the classroom. Research data suggest that exposure to math instruction with opportunities for brief timed practice opportunities, provided in low stakes ways, using pre-teaching and warm-up activities, and focusing on every child “working their math muscles to get stronger and show growth” are powerful mechanisms to reduce anxiety while also building student skill. Verifying prerequisite skill mastery improves student engagement, motivation, and self-efficacy. SpringMath interventions include all these specific actions to mitigate anxiety and build mathematical success, which is protective for students in a multitude of ways, including preventing future math anxiety. One final comment here. It is critical that adults set the tone for practice with challenging math content for which students have mastered the prerequisites in a way that presumes learning is possible without anxiety. In popular media, it seems that some teachers have been made afraid to provide timed practice activities and one of the authors of this text recently read a posting in social media from a math teacher that referenced “teacher’s autobiographical histories” as a source of evidence in presuming this inevitable relationship of math anxiety given any timed activities. This kind of presumption is inappropriate because it can lead teachers to avoid using tactics that are highly effective in building math mastery for all students. Teachers’ recalled experiences of their own learning in elementary school are interesting and certainly personally meaningful to the teacher. But they are not a source of evidence that should guide how instruction is provided. Evidence seeks to identify findings that are generally meaningful and to limit alternative explanations for findings (i.e., confounds). A teacher’s own experience (or even collectively most teachers’ experiences) even if presumed perfectly accurate and reliable reported many decades after the experience was lived and without regard to the context in which the experience was lived is still not generally applicable because it would pertain to only the universe of future teachers. In other words, it is possible that other students in the same environment given the same exposure did not experience anxiety and maybe those students became future engineers, physicians, and such. We know that teaching is a human enterprise and that teachers must provide an environment in which students feel supported to learn. We believe and encourage teachers to follow the evidence to support learners to meet challenging content and to grow incrementally toward mastery. We believe this means using brief timed practice opportunities in low stakes ways having ensured students' mastery of prerequisite skills. That’s exactly how SpringMath works. Schools collect lots of student data during a school year but often fail to consider and respond to what those data say about the effects of instruction in the school. MTSS is a powerful mechanism to evaluate local programs of instruction, allowing leaders and decision teams to ask questions about specific program effects, effects of supplemental supports, reduction of risk among specific groupings of students, and progress toward aspirational targets like students thriving in more advanced course sequences over time (Morrison & Harms, 2018). SpringMath is designed to aggregate data in useful ways to facilitate systemic problem solving. From the coach dashboard, teams can view progress at the school, grade, class, and student level in easy-to-read summary reports and graphs. Assisted data interpretation is always provided so implementers do not have to guess about what the data are telling them. Because universal screening is part of SpringMath, systems can view improved proficiency across screening occasions on grade-aligned measures. With effective implementation, schools would want to see that percent of students not at risk improves across subsequent screenings (this can be viewed from the Screening tab and from the Growth tab, which is automatically updated as you implement). Schools would want to see that classes are mastering skills every three to four weeks during classwide intervention, that classes are meeting the mid-year goal marked in the Classwide Intervention tab, and that all students are growing each week during classwide intervention (Scores Increasing metric in the coach dashboard should be greater than 80%). The Growth tab should show that classes and grades are gaining in the percent of students not at risk across subsequent screenings (e.g., fall to winter and winter to spring) and that the final classwide intervention session brings most students into the not-at-risk range. Finally, systems can use Program Evaluation under their log-in ID to access a full program evaluation report that reports effects by school, grade, and class, along with dosage and recommends specific ways that schools might improve their This feature was released in the 2022-23 school year. It can be accessed from the drop-down menu attached to your log-in ID. Security FAQs The SpringMath software application is a published by Sourcewell. Sourcewell has implemented separate policies (generally the “Data and Security Policies” or “Data and Security Policy”) that include details about the manner in which Sourcewell protects the confidentiality and safety of educational data. Below is a list of current Data and Security Policies. Sourcewell reserves the right to update each Data and Security Policy from time to time in the ordinary course of business. The most recent version of each policy is available at the link on Sourcewell ’s website indicated below or by hard copy upon request. (a) Data Privacy Policy (“Privacy Policy”), available at https://sourcewell.org/privacy-policy (b) Minnesota Government Data Practices Act Policy (“MGDPA Policy”), available at https://sourcewell.org/privacy-policy (c) Terms of Use – Sourcewell website, available at https://sourcewell.org/privacy-policy (d) Information Security Policy (“Security Policy”), available upon request Teachers have access to current year student data for those classes that they have been assigned to by the rosters submitted by the district or to classes as assigned by the district data administrator through the application. Coaches have access to schoolwide data for the current year and for prior years. The coach role is assigned by the authorized district representative. Data administrators submit the rosters for their districts and have access to all schools within the district. Data administrators are assigned by the authorized district representative. Sourcewell acknowledges and agrees that customer data is owned solely by the customer. Sourcewell will not share or disclose customer data to any third party without prior written consent of the Customer. Sourcewell will ensure that any and all customer data shall be used expressly and solely for the purposes enumerated in Customer Agreements. Sourcewell will restrict access of all employees and consultants to customer data strictly on a need-to-know basis in order to perform their job responsibilities. Sourcewell will ensure that any such employees and consultants comply with all applicable provisions of this Privacy Policy with respect to customer data to which they have appropriate access. Further details about data privacy can be viewed by accessing the Sourcewell Data Privacy Policy. On August 1 of each year, SpringMath rolls forward to a new academic year. Currently, schools or data administrators need to submit a new roster or rosters of students for the upcoming school year. Teachers can view only students who are enrolled in classrooms to which they are assigned, but coaches can view SpringMath data for prior years. Prior year data are maintained and can be accessed from your account from the drop-down menu on your log-in ID. From the teacher dashboard (easily accessed from the coach’s dashboard), you can click into your Students tab. From there you can select any student and a summary of all data collected for that student relative to instructional and mastery targets and compared to the class median on the same skill is displayed in a series of graphs to help parents see exactly how their child is progressing during math instruction and intervention. Summary graphs can also easily be downloaded as a PDF, attached to an email and then shared with parents. Boaler, J. (2012). Commentary: Timed tests and the development of math anxiety: Research links “torturous” timed testing to underachievement in math. Education Week. Retrieved from https:// Burns, M. K., Aguilar, L. N., Young, H., Preast, J. L., Taylor, C. N., & Walsh, A. D. (2019). Comparing the effects of incremental rehearsal and traditional drill on retention of mathematics facts and predicting the effects with memory.School Psychology, 34(5), 521–530.https://doi.org/10.1037/spq0000312 Codding, R., VanDerHeyden, Martin, R. J., & Perrault, L. (2016). Manipulating Treatment Dose: Evaluating the Frequency of a Small Group Intervention Targeting Whole Number Operations. Learning Disabilities Research & Practice, 31, 208-220. Compton, D. L., Fuchs, L. S., Fuchs, D., Lambert, W., & Hamlett, C. (2012). The cognitive and academic profiles of reading and mathematics learning disabilities. Journal of Learning Disabilities, 45 (1), 79–95. Retrieved from https://doi-org.ezproxy.neu.edu/10.1177/0022219410393012 Duhon, G. J., Poncy, B. C., Krawiec, C. F., Davis, R. E., Ellis-Hervey, N., & Skinner, C. H.(2020)Toward a more comprehensive evaluation of interventions: A dose-response curve analysis of an explicit timing intervention,School Psychology Review,https://doi.org/10.1080/2372966X.2020.1789435 Fuchs, L. S., Fuchs, D., & Malone, A. S. (2018). The taxonomy of intervention intensity. Teaching Exceptional Children, 50(4), 194–202. doi:10.1177/0040059918758166 Grays, S., Rhymer, K., & Swartzmiller, M. (2017). Moderating effects of mathematics anxiety on the effectiveness of explicit timing. Journal of Behavioral Education, 26(2), 188–200. doi:10.1007/ Gunderson, E. A., Park, D., Maloney, E. A., Beilock, S. L. & Levine, S. C. (2018) Reciprocal relations among motivational frameworks, math anxiety, and math achievement in early elementary school. Journal of Cognition and Development, 19, 21–46. doi:10.1080/15248372.2017.1421538 Hart, S. A., & Ganley, C. M. (2019). The nature of math anxiety in adults: Prevalence and correlates. Journal of Numerical Cognition, 5, 122–139. Morrison, J. Q., & Harms, A. L. (2018). Advancing evidence-based practice through program evaluation: A practical guide for school-based professionals. Oxford University Press. Namkung, J. M., Peng, P., & Lin, X. (2019). The relation between mathematics anxiety and mathematics performance among school-aged students: a Meta-analysis. Review of Educational Research, 89(3), 459–496. hdoi: 10.3102/0034654319843494 AERA/APA/NCME (2014). Standards for educational and psychological testing. Washington, DC: AERA. Powell, S. R., & Fuchs, L. S. (2015). Intensive intervention in mathematics. Learning Disabilities Research & Practice (Wiley-Blackwell), 30(4), 182–192. doi:10.1111/ldrp.12087 Schutte, G., Duhon, G., Solomon, B., Poncy, B., Moore, K., & Story, B. (2015). A comparative analysis of massed vs. distributed practice on basic math fact fluency growth rates. Journal of School Psychology, 53, 149-159. https://doi.org/10.1016/j.jsp.2014.12.003 Solomon, B. G., Poncy, B. C., Battista, C., & Campaña, K. V. (2020). A review of common rates of improvement when implementing whole-number math interventions. School Psychology, 35, 353-362. Tsui, J. M., & Mazzocco, M. M. M. (2006). Effects of math anxiety and perfectionism on timed versus untimed math testing in mathematically gifted sixth graders. Roeper Review, 29(2), 132–139. VanDerHeyden, A. M., Burns, M. K., Peltier, C., & Codding, R. S. (2022). The Science of Math – The Importance of Mastery Measures and the Quest for a General Outcome Measure. Communique, 51 (1). VanDerHeyden, A. M., Witt, J. C., & Gilbertson, D. A (2007). Multi-Year Evaluation of the Effects of a Response to Intervention (RTI) Model on Identification of Children for Special Education. Journal of School Psychology, 45, 225-256. https://doi.org/10.1016/j.jsp.2006.11.004 Ready to add SpringMath to your school or district?
{"url":"https://springmath.org/supporting-math-mtss-through-springmath-faqs","timestamp":"2024-11-13T06:19:26Z","content_type":"text/html","content_length":"168332","record_id":"<urn:uuid:efebe8e5-0eab-44c9-b6b1-90c5ae5af593>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00390.warc.gz"}
Web pages of Luca Spada The course is an introduction to the theory of abelian lattice-ordered groups from different perspectives. Initially, we study these structures with purely algebraic methods. We will analyse some important theorems and connections with other parts of mathematics, such as AF C*-algebras. Later we will move on to their geometric study, through the Baker-Beynon duality. It will be seen that, just as the commutative rings provide an algebraic counterpart for the study of affine manifolds with polynomial maps, lattice-ordered groups represent the algebraic counterpart of the polyhedral cones and piece-wise linear homogenous maps between them. Course topics • Abelian lattice-ordered groups: definition and examples. • Representation results. • Archimedeanity and strong (order) unit. • Free and finitely presented abelian l-groups. • Baker&Beynon duality. • Polyhedral geometry Lecture by lecture topics □ 5/5/2023: Introduction to the course. Motivations and applications of the theory of abelian lattice ordered groups. Main examples. Crash course on Galois connections and categorical □ 11/5/2023: Overview of the main results and techniques in the study of l-groups: The integers and Weinberg’s theorem. Archimedeanicity and Hölder’s theorem. Semisimplicity and Yosida’s □ 12/5/2023: Strong unit and MV-algebras. The free abelian l-groups as algebras of functions. Lattice ordered groups and piecewise linear geometry. □ 18/5/2023: The general adjunction and its fixed points: Baker&Beynon duality and polyhedral geometry. Course material • Bigard, A., Keimel, K., & Wolfenstein, S. (2006). Groupes et anneaux réticulés (Vol. 608). Springer. • Anderson, M. E., & Feil, T. H. (2012). Lattice-ordered groups: an introduction (Vol. 4). Springer Science & Business Media. • Goodearl, K. R. (2010). Partially ordered abelian groups with interpolation (No. 20). American Mathematical Soc. • Glass, A. M. W. (1999). Partially ordered groups (Vol. 7). World Scientific. Practical aspects Term and schedule Lecturer: Luca Spada Course duration: 10 hours. Course calendar: 5, 11, 12 and 18 of May, from 10:00 to 12:45. All lectures are in room P19 (last floor, building F3). You can choose to take the final exam in one of the following ways: • A short oral interview (about 30 minutes) in which the knowledge acquired on the basic and more advanced concepts will be evaluated. • The presentation of a topic agreed with the teacher and not covered in the course, in the form of a short seminar also open to other doctoral students lasting about 30 minutes. • Solving some exercises at home.
{"url":"http://logica.dipmat.unisa.it/lucaspada/tag/riez-spaces/","timestamp":"2024-11-04T02:04:53Z","content_type":"application/xhtml+xml","content_length":"38161","record_id":"<urn:uuid:2eb00210-10a7-4786-bf11-38695049d223>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00280.warc.gz"}
A model of this concept must provide: bool operator() (Type1 obj1, Type2 obj2) determines if two geometrical objects of type Type1 and Type2 intersect or not, for all pairs Type1 and Type2, where the types Type1 and Type2 can be any of the following: More... determines if two geometrical objects of type Type1 and Type2 intersect or not, for all pairs Type1 and Type2, where the types Type1 and Type2 can be any of the following:
{"url":"https://doc.cgal.org/5.5.1/Kernel_23/classKernel_1_1DoIntersect__3.html","timestamp":"2024-11-05T16:13:13Z","content_type":"application/xhtml+xml","content_length":"13404","record_id":"<urn:uuid:1da8a26e-7327-450f-b2d5-794a696358ac>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00645.warc.gz"}
Video of relativity of simultaneity | Britannica relativity of simultaneity BRIAN GREENE: Hey, everyone. Welcome to this next episode of Your Daily Equation. Today I'm going to focus upon an equation that captures the notion of the relativity of simultaneity. A mouthful right? What does that mean? It means that two individuals that are moving relative to one another will not agree on what things happen at the same moment of time. They will not agree on what events, what things happen simultaneously. And again, it'll come out, again, from this notion of the constant nature of the speed of light, an idea that is so incredibly fruitful, at least in the mind of an Albert Einstein. OK, and then, after I explain the idea in animation form and visuals, and to give you some intuition for the-- well, intuition is probably too strong a word, actually, if you think about it. Many people have asked me, do I have an intuition for these ideas in relativity, and the answer-- the most truthful answer is, not really. I can follow the chain of reason, I can do the mathematics, but do I have sort of a deep inner intuition in my bones for these ideas? I don't think that I really do, so I don't know if I can get you to that point. But at least I want to show you the chain of reasoning, so at least you can explain it to yourself in your own mind for why it is that these weird implications are true. OK, so to give you that level of understanding I'm going to begin with a little story, a story I first wrote down in my book The Elegant Universe, so some of you may be familiar with the story, but it goes like this. There are two nations that have long been at war. They are called Forward Land and Backward Land, and that's not a value judgment by any means, though you see it comes right out of the scenario that I'll be describing. Now, the two nations finally have come to a peace treaty, but neither president from the two countries wants to sign the treaty before the other president. So they need some scheme that will ensure that they each signed the treaty simultaneously. So they consult with the United Nations Secretary General and they come up with the following plan. The two presidents are going to sit at opposite-- how do I orient my hands so that I look like-- ah, there you go. They're going to sit at opposite ends-- wow, it's harder than you think-- opposite ends-- it's like I got jazz hands here-- opposite ends of a long train, and in between them there's going to be a light bulb. And the idea is that the bulb will initially be in the off position, and then the Secretary General will turn on the bulb, sending light heading left and right. And since the speed of light is constant, isn't affected by the direction in which it moves, and since each president is equidistant from the bulb, the travel time to each president's eyeball will be the same. When they see the light each president picks up the pen, signs the treaty, and that should be the means of ensuring that they sign simultaneously. Now, one little additional detail that the Secretary General puts into the scenario, a crucial one as we will see. The Secretary General says, hey when you sign the treaty why don't you let the inhabitants of the two nations, that live on opposite sides of a railroad track. Why don't you let them watch the signing ceremony by doing it on a train that goes right along that track? And both presidents say, sure, yeah why not? Let's do it on a train so that everybody can witness this historic event. OK, so now let's take a look at a little animation that will show you what happens. And we'll begin with the perspective of those on the train, the two presidents and their respective entourages, if that's the right plural of entourage. There's the two presidents. President of Forward Land facing forward. Backward Land is facing backward. That's where the names come from. The bulb is in the off position, and then we turn on the bulb, and when we do so the light streaks left and right heading toward each of the presidents. Again same distance, same speed of light. So it hits them at the same moment. They pick up their pens they sign the treaty and everybody is so thrilled with the outcome. But then, surprisingly, shockingly, unexpectedly, all of these folks on the train get word that riots have broken out-- you can see it here-- on the platform. Because the inhabitants of the two nations do not agree that the president's signed at the same moment. In fact, they claim that the president of Forward Land signed first, and the president of Backward Land signed second. Let me show you why they come to this conclusion. Let's rerun the scenario, but now, not from the perspective of those on the train, but from those on the platform. Here's what they see. The light goes off and the president of Forward Land is running toward the light, so it hits his eye first. President of Backward Land is running away, so it hits his eyes second. Let me show you this a little bit more slowly. Again, speed of light is constant. It's just that the president of Forward Land is heading toward the light, so look, the light doesn't have to travel as far to reach his eye. Boom, hits it first. President of Backward Land, boom, hits him second, because he's running away from the light. Light has to travel farther to reach his eye. Again, this has nothing to do with the speed of light changing. It has everything to do with the speed of light being constant. It is just that the president of Forward Land is making it easier for the light to reach his eye by heading toward the light. Light doesn't have to travel as far. The president of Forward Land has made up some of the distance by heading toward the light. President of Backward Land is making it more difficult for the light to reach his eye by racing away, on the train, from that beam of light. Light has to travel farther. So in other words, we have two perspectives that I can summarize right here. On the left, the perspective of those on the train. Light goes off, heads in both directions, equidistant, hits both eyes at the same moment, simultaneously. Perspective of those on the platform, President of Forward Land is heading toward the light, therefore it hits his eye first. Light doesn't have to travel as far. And you see that the ball of light is still in transit to reach the president of Backward Land, according to those on the platform, because the president of Backward Land is running away from the light. Light has to travel farther, therefore, to reach his eye. What do we make of this? The question, of course, is who is right? Those on the train. On the left perspective. Those on the platform. The right perspective. And the answer is, I suspect many of you, no doubt, would guess, or know, or thought through, they are both right. It is not that one perspective is right and the other is wrong. Both perspectives are right, even though they come to the shocking conclusion that, according to those on the train, the events of the two presidents seeing the light and signing the treaty, are simultaneous. While according to those on the platform those events are not simultaneous. And in a sense, I'm actually tempted to just stop our discussion right here, because that is such a deep insight into the nature of time. Again, before Einstein, the view was that there is a universal notion of what happens at a given moment, a universal notion of what things happen simultaneously. But Einstein showed that the constant nature of the speed of light, which is vital to this story. Because the light is having to travel different distances, according to those on the platform, at the same speed, and therefore it doesn't complete its journey simultaneously. According to those on the platform those two events are not simultaneous. And, you know, just referring back to The Elegant Universe, I remember I said, writing as a young man, you know, whatever it was, more than 20 years ago, that if you took one thing away from the book, this scenario of the treaty signing ceremony and how it allows you to come to the conclusion that simultaneity is in the eye of the beholder, in the eye of the observer. If you took that away I'd be satisfied. If that, in fact, was the only point you took away. I feel the same here. If, in this whole series, the one thing that sticks is the relativity of simultaneity and this little story that allows you to think through why it's true, I will feel like this whole series has served its purpose. But I'm not going to stop here. This is called your daily equation for a reason. Which is, I want to now spend a couple of minutes, won't take long, to derive mathematically the time discrepancy, according to those on the platform, between these two events. Between the president of Forward Land signing the treaty and the president of Backward Land signing the treaty. And that is not hard to do. We're going to get a formula that expresses that time difference in terms of the speed of the train-- that's vital. The train wasn't moving everybody would agree on what things happen at the same moment. You need observers and relative motion. And also, as we'll see, what comes into the formula is the distance between the two presidents. And that has an interesting implication which I will finish up with. Now to get there let me use my iPad here, and let me try to bring it up on the screen. Good. All right, so now you and I, we can now work this out together. How to do that? All right. Let me just draw a little picture. So imagine that I have my train. I'm not going to really draw a train, but schematically. And I've got the president of Forward Land facing forward, Backward Land facing back. And in between, right, we have our little light bulb. And, I don't know, the little light bulb gives off, you know, some light that will stream both left and right toward each of the two presidents. And we want to work out the time difference between it hitting those two presidents. So to do that let's first work out the amount of time that it takes that beam of light to reach the president from Backward Land. How do we do that? Well, bear in mind that once the light is emitted it streams toward the president of Backward Land. So certainly, it has to cover the distance between the bulb to the president. So let's give that a name. Let's say that the entire distance between the two presidents, let's call that equal to L. And therefore, you've got L over 2 and L over 2 being the two distances the light needs travel. But because the train is moving while the light is in transit, president of Backward Land is actually going to move a little bit toward the right, while the light is in transit. How far will the President of Backward Land move? Well, if the speed of the train is v, and it takes a time tb for the light to reach the president, then the distance that it will have to travel is half the length of the train plus v times tb. So it's got to cover half the length of the train. It's also got to cover the distance that the president of Backward Land raced away while the light was flying toward him. And therefore, it must be the case that c times tb, that is the total distance that the light travels before it hits the president of Background Land in the eye, must be L over 2 plus v times tb. And that allows us to solve for tb. And we can just write tb times c minus v equals L over 2. And therefore tb is equal to L over 2 times c minus v. OK, now let's do the same calculation for the amount of time, from the perspective of those on the platform, for the light to reach the president of Forward Land. The only difference here is that the distance that the light needs to travel is now L over 2 minus the distance that the president of Forward Land travels while the light is in transit, because Forward Land-- President of Forward Land is going toward the light, makes it easier, decreases the distance that the light needs to travel. And well, how far does the train travel while the light is in Transit? v times tf. And again, just as above, we can solve that tf, now it's c plus v-- only difference is that sign, L over 2. And therefore tf is equal to L divided by 2 times c plus v. All right, so we have the, the 2 times tb over here, and tf over here. And to get the discrepancy between those two events, that is, the lack of simultaneity of those two events, we just can subtract those. So let's do tb minus tf. What do we get? So we get L over 2 times 1 over 2 times c minus v minus 1-- whoops, already have the 2 there. Let me-- don't want to double it up. 1 over c minus v times 1 over c plus v. And that gives us, then, L over 2. And I'm going to combine those by putting them over the same denominator. c squared minus v squared. How do I get that? Well, I'm going to multiply the left term by c plus v and use c plus v times c minus v is c squared minus v squared. So it's c plus v from the first term, minus-- and I have to multiply above and below denominator numerator by c minus v on the right. And now we can just get L over 2 times, upstairs, 2v divided by c squared minus v squared. So our formula, then, is Lv divided by c squared minus v squared. And that's it. That's the formula that we are looking for. This L is the L measured by those on the platform. So you don't have to worry about length contraction, per se, it's built in to the very definition of L. And the formula is quite nice. Now there's another way I can write it. Let me also write this as L times v over c divided by c times 1 minus v over c squared. All I did was pull some c's out in the top and the bottom to write it in that form. And writing it that way is particularly nice because v over c, in everyday life, is a small number, and therefore this time difference, this is the time difference between the two events, is minuscule. And that's why, in everyday life, we are not aware of the relativity of simultaneity. But if v over c is large, then that relativity of simultaneity, that time difference will grow larger and larger. But I want to finish up by just noting one other curious fact that's kind of fun to bear in mind. Because there's this factor of L that comes into the time difference formula, you can also make this expression large not only by making v approach c-- that's sort of the usual thing in relativity. You say, hey, how can you make the effects bigger? You've got to go to higher and higher speed. You can also make it large by letting L-- whoops, let me get rid of that guy over here. That's making this guy, L, goes big. Big separation between the two events of interest. And so if you imagine that even if you have slow speeds, v over c is a small number, but two observers are very far apart, maybe on opposite sides of the universe. There's some subtleties that even come into talking about the notion of time over such large distance scales. But put that to the side, on very, very large distances even small velocities can yield significant discrepancies in what observers claim to happen at the same moment. So it's sort of a second lever arm for making the effects of relativity larger and larger. OK that's the, um, equation that I wanted to get to today. The equation that captures the relativity of simultaneity. And I think next time I'll pick it up probably-- I keep guessing and I make the wrong guess as to what I'll do next. I kind of change my mind every time I sit down to do this, but I think I'm going to focus upon relativistic mass. Or at least sometime soon I'll do relativistic mass. Maybe it will be the next episode. In any event, that's it for today. That is your daily equation. Looking forward to seeing you next time. Take care.
{"url":"https://www.britannica.com/video/Your-Daily-Equation-04-Relativity-of-Simultaneity/-254968","timestamp":"2024-11-01T23:18:47Z","content_type":"text/html","content_length":"94252","record_id":"<urn:uuid:5e61cfc5-4a75-42e4-becb-cca28bc9ee64>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00440.warc.gz"}
Understanding Profitability Index MethodUnderstanding Profitability Index Method | Cleverism Understanding Profitability Index Method by Anastasia Belyh • Understanding Profitability Index Method The profitability index is a ratio of an investment’s benefits to the cost involved in making the investment. It is an index used to measure the present value of future cash flow compared to the initial investment. This measurement is more of a prediction of the profitability of investing in a given project. It helps you or the company using it to decide whether to invest in a project or not. This index can be used by individual investors to make investment decisions. However, it is mostly used by companies which are about to venture into projects and they need to know their viability. It is also used to compare two or more different projects so as to decide which one to invest in. The use of the profitability index is in line with the need to maximize investments while lowering costs. As you make an investment, your goal is to gain significant returns which will offset the initial investment you put in. This is basically how you make profits from your investments. Other terms for the profitability index are the Profit Investment Ration (PIR) and the Value Investment Ratio (VIR). When considering several investment-grade projects, the profitability index serves to rank the projects in order of best return on initial investment. This makes it easy for you to choose which one to go for. The higher the profitability index, the better the returns to be realized. The profitability index is calculated using several variables which depict different aspects of the investment. There are two different formulas used. Let’s look at them separately. Formula 1 This is the more simple and direct formula. It uses the present value of future cash flow and the initial investment required. PI = Present value of future cash flow / Initial investment required The present value of future cash flow seeks to understand how valuable the returns to be received are. This is based on the reasoning that $1 held today cannot be of the same value as $1 next year. This reasoning is true because the value of money never remains constant. This is not because the value can itself change but because its purchasing power can, and does, change. One of the biggest determining factors of the purchasing power of money is the ever-changing force of demand and supply. If the supply of a commodity is low, then the product becomes expensive. The amount of money you previously needed to buy the product is then no longer enough. Another factor which indirectly affects the demand and supply while directly affecting the value of money is inflation. High inflation makes life expensive, thus the value of money goes down. These factors are what lead investors and companies looking for an investment to calculate potential returns. They need to know how much value the future returns will have if they were held today. Example calculation The management of company A is thinking of building a new factory to supplement its current factory’s operations. They are projecting an increase in demand for their products. In order to be able to handle the new orders, they are considering this move. Since they don’t want to invest in a project which won’t provide good returns, they give their finance team some work. They are to calculate the profitability index of the new investment. The expected cash flow is $250,000 and the initial investment is $200,000. below is the calculation of the profitability index. PI = Preset value of future cash flow / Initial investment required PI = 250,000 / 200,000 = 1.25 With a PI of 1.25, the project is worth undertaking. Any index greater than 1 is a sign of good returns. Formula 2 The second formula requires more calculations to be done. It is shown below. Profitability Index Formula = 1 + (Net Present Value / Initial Investment Required) To use this formula, you will need to find the Net present value. The Net present value is arrived at by using the below formula. Net present value = Present value of future cash flow – Initial investment required Example calculation To show how both these formulas give the same results, we will use the example from the first formula. With the initial investment being $200,000 and the present value of future cash flow being $250,000, here is what the calculation would be like. First, calculate the Net present value. Net present value = Present value of future cash flow – Initial investment required Net present value = 250,000 – 200,000 = 50,000 Now calculate the PI. PI = 1 + (Net Present Value / Initial Investment Required) PI = 1 + (50,000 / 200,000) = 1.25 In other instances, you may have a list of the specific expected cash flows for every year the new project is to run. In that case, the same formulas stand, only that you will have a longer process to follow. This process adds another factor called discounted value. This is essentially the present value of the future amount. This is from the earlier-mentioned idea that the current value of money is not equal to its value later in time, e.g. 1 year later. Generally, money to be received later in time is considered to have a lower present value than money held in the present. Discounting therefore reduces the amount of money by a specific percentage. Let’s look at an example. Company X is considering an investment. In its calculations, the expected cash flow for the first 3 years are $70,000, $65,000 and $82,000. The initial investment is $120,000. To calculate the PI, we will first need to get the total present value of those projected earnings. Since the present value has to be a discounted amount of the figures, here is what to do assuming a discount rate of 10%. Present value = Estimated cash flow value / (1 + Discount rate) ^ n Where n is the number of year in which the given cash flow amount is received. We will calculate the discounted values then add them up to get the total present value. Year 1: 70,000 / (1 + 0.1) ^1 = 63,636.36 Year 2: 65,000 / (1 + 0.1) ^2 = 53,719.01 Year 3: 82,000 / (1 + 0,1) ^3 = 61,607.81 Total: 178,963.18 With this as the present value of cash flow, we then calculate the Profitability index. Keep in mind that the Net present value is attained by subtracting the initial investment requirement from the present value of future cash flow, which is what we have as the total discounted amount PI = 1 + (Net Present Value / Initial Investment Required) PI = 1 + ((178,963.18 – 120,000) / 120,000) PI = 1.49 We can verify this using the first formula. PI = Preset value of future cash flow / Initial investment required PI = 178,963.18 / 120,000 PI = 1.49 Being a means of telling which project is worth investing in, the profitability index gives a number which you need to interpret. Just as the ratio of the calculation is important, so is the interpretation of the same. There are three different values or value ranges you can get from any of the formulas you choose. • More than 1 – a PI value of more than 1 means that the project is a good investment. It is a sign that you will get more than you put in for the investment. Any time you calculate PI and get a value of more than 1, then consider the project viable. • Equal to 1 – when you get a PI value which is equal to 1, it means that the investment will give the very amount you put in. This is not a good project to invest in. When things are neutral, consider the chances of loss being higher than those of gains. Stay away from such investments. • Less than 1 – any investment whose PI is less than 1 is a risky one. A value of less than 1 means that the returns are less than the initial investment. The project is a guaranteed loss-making investment. Run away. The purpose of any formula devised to check investments for profitability is to provide guidance and assurance. Here are some good reasons as to why you should use the profitability index. Shows a project’s worth The very nature of the profitability index will give you a good idea of the worth of the project you are looking to invest in. Being a ratio of the project benefits to investment cost, it cannot get clearer than this. This borrows from the concept of cost-benefit analysis. With this concept, anything you do has to be evaluated to find out whether the benefits outweigh the costs involved. Although the cost-benefit analysis is applicable in all areas of life, there is no situation better suited for it than the investment world. Using the profitability index, you are able to see the kind of return you may get from an investment. The calculation process itself will give you helpful information and will force you to make some detailed considerations. For example, to get the estimated yearly cash flow from the project, you will have to do some math. This is different from other investment appraisals which consider the market rate and other external factors. Also, the calculation results have a clear way of showing whether a particular investment is worth your money. Accounts for risk involved The profitability index takes into account the ever-present risk factor which other formulas may not. Although the formulas have no variable called risk, the risk is factored in using the discounted When an expected cash flow is discounted by a certain percentage, this is the risk factor being taken care of. Those amounts get reduced depending on the riskiness of the investment. Thus a project which seems more risky will have the discount rate being higher than for the project which is less risky. The higher the discount rate, the more the expected cash flow is reduced. This gives an accurate picture of the true amount which can be practically realized as a return. This also helps in correctly comparing different projects so as to minimize the risk of making the wrong choice. Can help know how investment will affect the company’s value? Every move made by a company has an impact on the company’s overall value. If a company is expanding and the market indicates that this is a good move, the company’s value increases. If a company is involved in research and development and it announces a major breakthrough, its value will go up. On the other hand, if a company experiences a reduction in profits, its value goes down. If this happens for two consecutive years, stock prices get affected and investors may sell their shares. Something similar happens with new projects undertaken by companies. If it is a good move and it’s bound to increase their profits, the company’s total value goes up. In this case, the profitability index can help a company avoid getting into a situation which will affect its value. Since the index clearly points to the profitability of a project, it enables the decision-makers sustain their company’s economic status. Takes the time value of money into consideration This is another big advantage of using the profitability index. Investors know that the value of money is not always the same over time. However, not many stock valuation and investment-analyzing techniques factor this in their calculations. Because of that, most individuals and companies only look at the expected rate of return. From this they cultivate the amount of money they will receive after the investment period is over. This happens even for long-term investments. The obvious danger in this is that the money will be received but its value will very likely be low. The money may not be able to do much. This means that the returns will actually be less than what they were expected to be. The profitability index takes this into account and reduces the expected returns accordingly. This is done through the discount rate. This rate checks the risk factor of the investment. But at the same time, it looks into the approximate value of the money to be received in view of the time invested. As such, whereas other profitability formulas will tell you how much you can make, this one will tell you the same but give you a more accurate picture in terms of return value. Makes it easy to pick an investment project When faced with multiple choices for an investment, this index will not only tell you which ones are worth considering but it also ranks them for you. Calculating the profitability index of various projects will give you their viability in straight figures. Just looking at these figures will help you decide which one to go for. You will literally know which is number 1, 2, 3 etc. Moreover, since you have access to the figures involved in terms of the expected yearly cash flows, you can decide which one is giving you the best returns. For example, you may be looking at projects A and B. Project A has a PI of 1.5 and over the span of 3 years, has expected cash flows of $115,000, $107,000 and $113,000. Project B on the other hand has a PI of 1.4 with expected cash flows of $235,000 in year one, $256,000 in year two and $248,000 in the third year. You will have to consider other factors playing out in your company’s decision-making process. With that, you might decide to choose project B which gives you more cash flow. The project might however be requiring a higher initial investment. With the above advantages, some disadvantages also exist. These are the limitations you will experience out of using the profitability index to assess investment projects. Some of these are discussed below. Estimated future cash flow cannot be guaranteed The profitability index does a good job in analyzing the expected returns. Despite that, it cannot give a guarantee that those amounts it comes up with are what you will receive. Those figures remain to be estimates, albeit accurate estimates. It will thus be upon the company, or you who is working for it, to understand that the given figures could still change. The issue of estimated figures is common knowledge to investors although they still need to keep it in mind as they make their choices. Different projects can have the same PI PI is a great indicator of a projects profitability. But what happens when you have two or more projects with the same PI? Which one do you pick? This can happen when the calculations of the variables used in the formula match. For example, consider these two examples. Project A has an initial investment of $200,000 and the present value of future cash flow is $250,000. Project B has an initial investment of $400,000 with the present value of future cash flow being $500,000. Both these projects will have a PI of 1.25. Which one do you go for? This shows a shortcoming on this index’s side. If it cannot totally differentiate between options, then it might increase the difficulty of making a choice. In such a case, other considerations will have to be made. Cannot compare projects of different durations If you want to compare the project profitability which have different life spans, the profitability index would not be ideal. When projects run for a long time, their cash flow will provide some extra returns compared to those which run for a short time. Yet you might be able to only choose one project. This will pose a challenge and the PI of both projects may not be the best ratio to rely on. The profitability index gives you an opportunity to determine the profitability of an investment. It helps you further by ranking the various projects you are considering, thus making it easier to choose one. As with other formulas though, it is necessary to remember that these formulas are just but guides. The real returns may vary. Related posts Linking sales compensation to performance is a much discussed but not always well-executed concept. … Are you looking for a new hire? Or new talent in a specific area of expertise? Maybe you’re just … If you ask analysts, they would have varying opinions on what to credit for Apple’s success. Some … job opportunities Let's find the one for you Start here! Already a member? Log in
{"url":"https://cleverism.com/understanding-profitability-index-method/","timestamp":"2024-11-04T11:56:59Z","content_type":"text/html","content_length":"119805","record_id":"<urn:uuid:52fd4d23-8091-4d6b-861f-7688f58ac27a>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00227.warc.gz"}
Triangles Class 10 - NCERT Solutions [with Videos, MCQ] - 2024 NCERT Click on any of the links below to start learning from Teachoo... Get NCERT Solutions of Chapter 6 Class 10 Triangles free at teachoo. Solutions to all NCERT Exercise Questions, Examples, Theorems, Optional Exercises are available with Videos of each and every Click on a link below to start doing the chapter.
{"url":"https://www.teachoo.com/subjects/cbse-maths/class-10th/ch6-10th-triangles/","timestamp":"2024-11-11T03:05:32Z","content_type":"text/html","content_length":"114650","record_id":"<urn:uuid:960889a5-5e44-4478-81c6-aa2bf2eacc30>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00681.warc.gz"}
If you are looking for an internship as a post doc in Poland, we encourage you to choose the University of Warsaw. Watch interviews with scientists from the Faculty of Mathematics, Informatics and Mechanics who are members of the ERC grant TUgbOAT research team. Find out what opportunities for professional and personal development are offered by working at the faculty. Mateusz Rapicki, PhD (watch the wideo) Michał Pawłowski, PhD student fellow (watch the video) Postdoc at the Institute of Informatics, MIM UW (Poland) See what ERC grant postdocs say about their work at the University of Warsaw, Poland. Mathieu Mari, PhD (France) “I didn’t know anything about Poland in general. I found it would be a good adventure because I talked with my colleagues back in Paris and they told me that the University of Warsaw was one of the strongest place in Europe for computer science. I checked by myself and realised that it looks very interesting, so I accepted the offer and I came to the University.” Runtian Ren, PhD (China) “The Faculty of Informatics, Mathematics and Mechanics here in the University of Warsaw is very famous all around the world. If you are doing research related to computer science, you know this place for sure and also I know that Piotr is very famous on doing algorithmic research.“ Join the best Polish team in Machine Learning (ERC grant TUgbOAT) If you are interested in machine learning and artificial intelligence but you are not convinced to work in Poland watch an interview with Jakub Pawlewicz (Faculty of Mathematics, Informatics, and Mechanics, University of Warsaw, Poland). Is working at the Institute of Informatics attractive for a foreigner scientist? This and many other questions are answered by Anish Mukherjee (India), post-doctoral fellow in ERC TUgbOAT grant of prof. Piotr Sankowski. Watch an interview and find out how the participation in ERC grant can help to meet scientific, professional and personal goals. Postdoc position in theoretical computer science We announce at the Institute of Informatics, University of Warsaw, Poland. The positions are supported by the ERC Consolidator Grant TUgbOAT: “Towards Unification of Algorithmic Tools” led by Piotr Sankowski. The TUgbOAT’ focus is on basic algorithmic problems. Example topics include: * algorithms for finding matchings in graphs; * online algorithms in various settings; * studying and algorithmically exploiting properties of data. The theoretical computer science group in Warsaw is strong and growing. Apart from the algorithms group members specializing in parameterized, approximation and graph algorithms (Łukasz Kowalik, Marcin Mucha, Marcin Pilipczuk, Michał Pilipczuk, Piotr Sankowski), we have also a leading research group in logic and automata (Mikołaj Bojańczyk, Bartosz Klin, Sławomir Lasota). We are looking for outstanding candidates with a Ph.D. (or soon to obtain a Ph.D.) in Computer Science or Mathematics who have already proven their high scientific potential in the area of algorithms or graph theory through publications in proceedings of highly ranked international conferences and/or journals. Background in the specific areas of projects in question will be an advantage. The gross annual salary is around 100,000 PLN. For comparison, this translates to around twice the average salary in Poland. The position comes with generous travel support and no teaching duties. To apply, send a CV to Piotr Sankowski <sank@mimuw.edu.pl>. Questions and informal inquiries are welcome. We are looking for YOU to join our award-winning scientific team! Have you just graduated your PhD and are considering a post-doc position in theory of informatics? Science is your passion and you would like to spend most of your post-doc researching on whatever interests you? You are in the right place. At MIM UW we offer you: 1. Great FREEDOM OF CHOICE related to what to work on; 2. Just A FEW or NO teaching duties; 3. A lot of TIME FOR RESEARCH; 4. Chance to cooperate with VERY EXPERIENCED and TALENTED scientists; 5. FRIENDLY environment; 6. Excellent SUPPORT from our administrative staff; If you still hesitate, here are two interviews with former post-docs in ERC GRANT TUgbOAT. Watch a video and find out more about benefits of working at MIM UW. Watch an interview with Krzysztof Fleszar and find out how participation in the ERC GRANT has changed his life. Adam Karczmarz told us about his experience as a post-doc at the Faculty of Mathematics, Informatics and Mechanics, University of Warsaw in ERC GRANT. The Curse of Euclidean Metric: Square Roots The deadline was approaching without mercy and there was, of course, still some polishing to be done for our SODA paper. But then we run into an issue. To make things worse, this issue turned out to be a hard one, a fundamental known open problem in computational geometry. The good thing is, I liked the problem so much that I decided to dedicate it this post. This is the story about the Sum of Square Roots problem and how we bypassed (ignored) it without solving it. How to identify m numbers using m/log m checks Here’s an old trick that we found useful for proving some tight complexity lower bounds. You are given m coins, each of weight either a or b, and a modern scale that can tell you the total weight of any chosen subset of coins. How many weighings do you need to identify which coin is which? Checking each coin individually uses m weighings, but can you do less? It turns out that this many is in fact enough, and this generalizes to various other settings with less restricted weights. This is the basis for two of our recent results: a tight complexity lower bound for Integer Linear Programming with few constraints and for multicoloring (a.k.a. b-fold coloring), assuming the Exponential Time Hypothesis. The trick allows us to use constraints that check the value of some number between 0 and m to indeed extract about log(m) bits of new information from each, in a way that is general enough to check m clauses of a 3-CNF-SAT instance using only O(m/ logm) constraints. In any weighing, we try some unknown number of weight-a coins between 0 and m, so this results in one of m+1 possible values, giving us at most log(m+1) bits of information. In total we need m bits of information to identify each coin, so clearly we will need at least Ω(m/logm) weighings. Read more Prophet inequality and auction design Suppose you want to sell a car and there are 10 agents willing to buy it. You are not sure how much they could pay but for each of them you know a probability distribution of how high the offer will be. For example, a car salon would always pay 10K but some person might offer 5Kor 15K with equal probability. The best you could do is to first negotiate with all of them and then pick the highest bid. Unfortunately, you cannot do so – after seeing each offer you must irrevocably choose either to sell the car or to refuse the offer. What is the best strategy to maximize your revenue in this The streaming k-mismatch problem The central problem of text processing is pattern matching: given two strings (a pattern and a text), find all fragments of the text matching the pattern. In exact matching, solely equal strings match; however, this condition is relaxed in many variants of approximate pattern matching. In the k-mismatch problem, we allow for up to k substitutions, that is, strings of the same length are assumed to match even if there are at most k mismatching characters between them.
{"url":"https://tugboat.mimuw.edu.pl/blog/","timestamp":"2024-11-11T22:51:54Z","content_type":"text/html","content_length":"67485","record_id":"<urn:uuid:b23eab77-82ab-4b42-9c2c-e9353661a700>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00117.warc.gz"}
Object Tracking in 75 Lines of Code - Sparrow Computing Tracking objects in video is a thoroughly studied problem in computer vision that has important applications in industries like sports, retail and security. There are several possible approaches to this problem, but a popular one that’s both simple to implement and effective in practice is called tracking-by-detection. The tracking-by-detection paradigm relies heavily on high quality object detectors. This means it can leverage advances in deep learning that have dramatically improved the performance of these In this post, we’ll walk through an implementation of a simplified tracking-by-detection algorithm that uses an off-the-shelf detector available for PyTorch. If you want to play with the code, check out the algorithm or the visualization on GitHub. How it works Here’s the algorithm. For each frame: 1. Run the detector to find the objects in the image. 2. Extract features for the objects you care about. 3. Compute the pairwise cost between each object from the previous frame and each object in the current frame. 4. Assign matches between the two frames in a way that minimizes the overall cost. Here’s an implementation in Python (or check out the repo): import json import fire import imageio import numpy as np import torch from scipy.optimize import linear_sum_assignment from torchvision.models.detection import fasterrcnn_resnet50_fpn from torchvision.transforms.functional import to_tensor def track( video_path: str, output_path: str = "out.json", score_threshold: float = 0.5, class_index: int = 1, # Track people by default ) -> None: """Track the objects for a specific class in a given video""" # Initialization device = "cuda" if torch.cuda.is_available() else "cpu" model = fasterrcnn_resnet50_fpn(pretrained=True).eval().to(device) reader = imageio.get_reader(video_path) # Tracking loop active_tracklets = [] finished_tracklets = [] prev_boxes = [] for i, frame in enumerate(reader): height, width = frame.shape[:2] # Detection x = to_tensor(frame).to(device) result = model(x[None])[0] # Feature extraction: (x1, y1, x2, y2) in image coordinates # where class == class_index and score > score_threshold mask = torch.logical_and( result["labels"] == class_index, result["scores"] > score_threshold boxes = result["boxes"][mask].data.cpu().numpy() / np.array( [width, height, width, height] prev_indices = [] boxes_indices = [] if len(boxes) > 0 and len(prev_boxes) > 0: # Pairwise cost: euclidean distance between boxes cost = np.linalg.norm(prev_boxes[:, None] - boxes[None], axis=-1) # Bipartite matching prev_indices, boxes_indices = linear_sum_assignment(cost) # Add matches to active tracklets for prev_idx, box_idx in zip(prev_indices, boxes_indices): np.round(boxes[box_idx], 3).tolist() # Finalize lost tracklets lost_indices = set(range(len(active_tracklets))) - set(prev_indices) for lost_idx in sorted(lost_indices, reverse=True): # Activate new tracklets new_indices = set(range(len(boxes))) - set(boxes_indices) for new_idx in new_indices: {"start": i, "boxes": [np.round(boxes[new_idx], 3).tolist()]} # "Predict" next frame for comparison prev_boxes = np.array([tracklet["boxes"][-1] for tracklet in active_tracklets]) with open(output_path, "w") as f: "fps": reader.get_meta_data()["fps"], "tracklets": finished_tracklets + active_tracklets, if __name__ == "__main__": Let’s walk through the important bits here. Object detection The torchvision package provides pre-trained weights and implementations for several computer vision models, including Faster R-CNN. It’s easy to run these models on arbitrary images: device = "cuda" if torch.cuda.is_available() else "cpu" model = fasterrcnn_resnet50_fpn(pretrained=True).eval().to(device) x = to_tensor(frame).to(device) result = model(x[None])[0] This snippet runs inference on a GPU if it’s available, otherwise it uses the CPU. It also uses the to_tensor() helper from torchvision to convert from an RGB image with shape (height, width, channels) with 8-bit integers to a (channels, height, width) tensor with 32-bit floats between 0.0 and 1.0. The result you get back from calling the model object will be a dictionary with PyTorch tensors for boxes, labels (in the form of class indices from the COCO dataset) and scores (confidence values for each detection). I’ve got another example of using torchvision for inference at this image classification quick start. Feature extraction Next, we need to get features for each object. And good news: the coordinates of a bounding box are useful features for object tracking! We just want to make sure we’re only including objects that have the correct class and a high confidence score. For this we access the result dictionary. mask = torch.logical_and( result["labels"] == class_index, result["scores"] > score_threshold boxes = result["boxes"][mask].data.cpu().numpy() / np.array( [width, height, width, height] Here we’re creating a boolean tensor called mask for objects where the label is equal to the class label we set (defaults to 1 for the person class) and the confidence is high enough (defaults to 0.5). Then we pull those boxes off the GPU (if we’re using it). We also divide by the width and height of the image to keep the coordinates in relative pixel coordinates. This is a useful default because it prevents you from needing to worry about the size of the image, which is somewhat arbitrary. Pairwise cost Once we have object features, we can compute pairwise costs with objects from the previous frame (assuming we’re beyond the first frame). cost = np.linalg.norm(prev_boxes[:, None] - boxes[None], axis=-1) This is a one-liner for computing the Euclidean distance between pairs of boxes. If prev_boxes is (7, 4) and boxes is (8, 4) then the resulting cost matrix will be (7, 8). Euclidean distance is convenient, but the NumPy axes trick works for arbitrary functions and it works in PyTorch directly (read more about it here). Intersection over union is probably a better metric for comparing box coordinates and can be vectorized (but is more verbose than Euclidean distance). Another popular approach here is to use appearance features from the objects themselves. This makes it easier to recover when tracklets (a tracklet is a sequence of boxes belonging to the same object) are lost. Given a cost matrix, we need a way to figure out a set of assignments that minimizes cost. Typically, we also want to constrain the solution to make sure no object gets more than one assignment. The Munkres assignment algorithm (also known as the Hungarian algorithm) does exactly this. SciPy comes with this algorithm available in the linear_sum_assignment function. prev_indices, boxes_indices = linear_sum_assignment(cost) After this call, the nth element of prev_indices will be the index of the previous boxes that lines up with the current box indexed by the nth element of boxes_indices. This means if we zip prev_indices and box_indices we will be able to access the correct index from both sets of objects together. Finally, there’s some bookkeeping to do on every step, which looks different depending on the data structure you use for your tracklets. The three cases that need to be handled are: 1. There’s a match between an existing object and a new object. 2. There’s an existing object without a match (this tracklet will be considered lost). 3. There’s a new object without a match (this starts a new tracklet). For case (1), we just append the new object to the array of boxes in the matching tracklet. For case (2), we move the tracklet from the list of active tracklets to the list of lost or finished tracklets. For case (3), we create a new tracklet where start is the frame we’re currently on. The format of the tracklet data structure in this implementation is simple: we just keep lists of active and finished tracklets. Each tracklet is a dictionary that looks like the following: "start": <the frame where the tracklet starts>, "boxes": <a (n_frames, 4) array of box coordinates> Validating the implementation of an algorithm like object tracking is tough, especially when you don’t have another implemenation you can compare against or ground truth data. Unit tests definitely help and are a good idea for anything you plan on using for a while. Another thing you should be doing as much as possible is visualizing your results. But this is a little tricky when the input to the algorithm is video. Matplotlib does have some animation tooling, but it’s not very intuitive to use and the result isn’t interactive. But you can go a long way with basic knowledge of JavaScript and React. To make sure this algorithm was working as expected, I put together a simple React app that plots tracklets on top of a video. You can check out the app here with my test video in it. You can also load your own video and output. Check out the code for the visualization here. And that’s all there is to it. There are a lot of ways to make this algorithm more sophisticated, but if you can afford to run object detection on every frame, you can go a long way by just improving your detections (mostly by adding high quality labeled data).
{"url":"https://sparrow.dev/simple-pytorch-object-tracking/","timestamp":"2024-11-09T00:02:11Z","content_type":"text/html","content_length":"40201","record_id":"<urn:uuid:90d18a99-39c7-4a55-9dc4-f18349facd8a>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00467.warc.gz"}
Pipe Wall Thickness Pipe wall thickness calculation using Barlow´s formula Pipe Wall Thickness Barlow´s Formula is used to calculate the pipe pressure considering its diameter, wall thickness, and hoop stress (in the pipe material). Thus, it can be used to calculate whichever one of those parameters as a function of the other three. In addition to some other simplifications, an important theoretical assumption made for the use of Barlow´s formula is that the pipe wall behaves like a membrane (or thin wall pipe), meaning the hoop stress in the pipe wall is distributed uniformly in all its thickness. There are no moments of any type within the pipe wall.One parameter to ensure that the membrane behavior happens in the pipe wall is the diameter over thickness ratio (D/t) to be bigger or equal to 20 ^1 ^2, although some authors consider 16 ^3. However, the decision of using or not of the formula is usually not based on its section geometry (D/t ratio), but on the service of the pipe, considering the type of fluid, industry, and physical conditions, like for instance, the ASME (American Association of Mechanical Engineers) does. • P: Pipe pressure • S: Hoop stress • t: Pipe wall thickness • D: Outside diameter Following that service criteria, the ASME B31.4 code (Pipeline Transportation Systems for Liquids and Slurries) applies the formula in this manner: • A: Allowance for threading, grooving, corrosion ASME B31.8 code (Gas Transmission and Distribution Piping Systems) applies it in this manner: and to calculate the minimum wall thickness including an allowance: it should be expressed like this: • F: Design Factor • E: Longitudinal Joint Factor • T: Temperature Derating Factor • A: Allowance for threading, grooving, corrosion ASME B31.9 code (Building Services Piping) applies it in this manner: • E: Longitudinal Joint Factor • A: Allowance for threading, grooving, corrosion On the other hand, as opposed to the thin wall assumption or membrane theory, there is the curved plate or thick wall pipe formulas derived from Lame´s theory whose use is more complicated, sometimes with iterations, and requires a careful approach like, for instance, in ASME B 31.1 code (Power Piping), ASME B 31.3 code (Process Piping), and ASME B 31.5 (Refrigeration Piping and Heat Transfer PLANETCALC, Pipe Wall Thickness
{"url":"https://embed.planetcalc.com/8189/?thanks=1","timestamp":"2024-11-13T07:29:41Z","content_type":"text/html","content_length":"71284","record_id":"<urn:uuid:98777396-681a-4961-bb97-86a141bace05>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00601.warc.gz"}
A right prism is a polyhedron with the same polygon as its top and base, and rectangles as its remaining faces. The rectangular sides are perpendicular (at right angles) to the top and base, as shown in the diagram of the rectangular prism below. In this topic we will use prism when we mean right prism. Cylinders are not prisms because they do not have rectangular side faces and a circle is not a polygon. Every cross-section of a prism that is parallel to the base is the same as its top and base. Prisms are usually named according to the shape at the top and base.
{"url":"https://amsi.org.au/ESA_middle_years/Year6/Year6_2cS/Year6_2cS_R1_pg1.html","timestamp":"2024-11-07T04:28:52Z","content_type":"text/html","content_length":"3769","record_id":"<urn:uuid:fab499e1-ca37-4798-b717-3455a1979ed9>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00601.warc.gz"}
If tan2θ=2tan2ϕ+1, then cos2θ+sin2ϕ equals... | Filo Not the question you're searching for? + Ask your question Was this solution helpful? Found 8 tutors discussing this question Discuss this question LIVE for FREE 8 mins ago One destination to cover all your homework and assignment needs Learn Practice Revision Succeed Instant 1:1 help, 24x7 60, 000+ Expert tutors Textbook solutions Big idea maths, McGraw-Hill Education etc Essay review Get expert feedback on your essay Schedule classes High dosage tutoring from Dedicated 3 experts Practice more questions from Trigonometric Functions View more Practice questions on similar concepts asked by Filo students View more Stuck on the question or explanation? Connect with our Mathematics tutors online and get step by step solution of this question. 231 students are taking LIVE classes Question Text If , then equals Updated On Aug 13, 2022 Topic Trigonometric Functions Subject Mathematics Class Class 11 Answer Type Text solution:1 Video solution: 2 Upvotes 333 Avg. Video Duration 2 min
{"url":"https://askfilo.com/math-question-answers/if-tan-2-theta2-tan-2-phi1-then-cos-2-thetasin-2-phi-equals","timestamp":"2024-11-08T01:31:35Z","content_type":"text/html","content_length":"375905","record_id":"<urn:uuid:0887cfae-c39c-4489-a380-f61fd7b0f91e>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00467.warc.gz"}
Graph Sampling for Covariance Estimation In this paper, the focus is on subsampling as well as reconstructing the second-order statistics of signals residing on nodes of arbitrary undirected graphs. Second-order stationary graph signals may be obtained by graph filtering zero-mean white noise and they admit a well-defined power spectrum whose shape is determined by the frequency response of the graph filter. Estimating the graph power spectrum forms an important component of stationary graph signal processing and related inference tasks such as Wiener prediction or inpainting on graphs. The central result of this paper is that by sampling a significantly smaller subset of vertices and using simple least squares, we can reconstruct the second-order statistics of the graph signal from the subsampled observations, and more importantly, without any spectral priors. To this end, both a nonparametric approach as well as parametric approaches including moving average and autoregressive models for the graph power spectrum are considered. The results specialize for undirected circulant graphs in that the graph nodes leading to the best compression rates are given by the so-called minimal sparse rulers. A near-optimal greedy algorithm is developed to design the subsampling scheme for the nonparametric and the moving average models, whereas a particular subsampling scheme that allows linear estimation for the autoregressive model is proposed. Numerical experiments on synthetic as well as real datasets related to climatology and processing handwritten digits are provided to demonstrate the developed Dive into the research topics of 'Graph Sampling for Covariance Estimation'. Together they form a unique fingerprint.
{"url":"https://faculty.kaust.edu.sa/en/publications/graph-sampling-for-covariance-estimation","timestamp":"2024-11-13T04:19:21Z","content_type":"text/html","content_length":"54842","record_id":"<urn:uuid:ce4e958f-b46c-4a14-978e-698d3d78f371>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00195.warc.gz"}
Least common multiple Algebra Tutorials! least common multiple Related topics: Home hoe to solve inequalities | freemath books pdf | factorization test papers | finding the nth term | occupation that use trigonometry in everyday life | polynomial long Solving Quadratic division solver | excel slope with 4 points intercept | calculator for linear equations with fractions Equations by Completing the Square Graphing Logarithmic Author Message Division Property of GSNovir Posted: Wednesday 27th of Dec 08:11 Exponents Hey friends , I have just completed one week of my high school , and am getting a bit tensed about my least common multiple course work. I just Adding and Subtracting don’t seem to grasp the topics. How can one expect me to do my homework then? Please help me. Rational Expressions With Like Denominators Rationalizing the Registered: Denominator 05.11.2002 Multiplying Special From: Bridge USS Polynomials Enterprise-E Functions Movies-to-go-see: Solving Linear Systems Phone Booth &amp;amp; of Equations by What a Girl Wants Solving Systems of Equation by Substitution and Elimination espinxh Posted: Wednesday 27th of Dec 18:22 Polynomial Equations Algebrator is one of the best resources that can offer help to a person like you. When I was a beginner, I took assistance from Algebrator. Solving Linear Systems Algebrator covers all the principles of Basic Math. Rather than utilizing the Algebrator as a line-by-line tutor to solve all your math assignments, of Equations by Graphing you can use it as a tutor that can give the basics of dividing fractions, exponent rules and simplifying expressions. Once you understand the Quadratic Functions principles, you can go ahead and work out any tough question on Algebra 1 within minutes. Solving Proportions Registered: Parallel and 17.03.2002 Perpendicular Lines From: Norway Simplifying Square Roots Simplifying Fractions Adding and Subtracting Fractions CHS` Posted: Friday 29th of Dec 08:20 Adding and Subtracting Algebrator is a very easy tool. I have been using it for a long time now. Solving Linear Equations Inequalities in one Variable Registered: Recognizing Polynomial 04.07.2001 Equations from their From: Victoria City, Graphs Hong Kong Island, Scientific Notation Hong Kong Factoring a Sum or Difference of Two Cubes Solving Nonlinear Equations by sonanqdlon Posted: Saturday 30th of Dec 07:03 Substitution Interesting! I don’t have that much time to hire somebody to teach me so I think this would be just fine. Is this something bought from a store ? Do Solving Systems of they have a website so that I can see more details regarding the software? Linear Inequalities Arithmetics with Decimals Registered: Finding the Equation of 26.03.2002 an Inverse Function From: Plotting Points in the Coordinate Plane The Product of the Roots of a Quadratic Troigonis Posted: Saturday 30th of Dec 13:26 Powers inequalities, exponent rules and graphing lines were a nightmare for me until I found Algebrator, which is really the best algebra program that I Solving Quadratic have come across. I have used it through many math classes – College Algebra, Pre Algebra and Pre Algebra. Simply typing in the math problem and Equations by Completing clicking on Solve, Algebrator generates step-by-step solution to the problem, and my algebra homework would be ready. I really recommend the the Square program. From: Kvlt of Ø caxee Posted: Monday 01st of Jan 08:34 It is available at https://algebra-calculator.com/parallel-and-perpendicular-lines.html. You can take a look at the demos, compare its features with others available in the market and see for yourself. I am sure you will be impressed. From: Boston, MA, US least common multiple Related topics: Home hoe to solve inequalities | freemath books pdf | factorization test papers | finding the nth term | occupation that use trigonometry in everyday life | polynomial long Solving Quadratic division solver | excel slope with 4 points intercept | calculator for linear equations with fractions Equations by Completing the Square Graphing Logarithmic Author Message Division Property of GSNovir Posted: Wednesday 27th of Dec 08:11 Exponents Hey friends , I have just completed one week of my high school , and am getting a bit tensed about my least common multiple course work. I just Adding and Subtracting don’t seem to grasp the topics. How can one expect me to do my homework then? Please help me. Rational Expressions With Like Denominators Rationalizing the Registered: Denominator 05.11.2002 Multiplying Special From: Bridge USS Polynomials Enterprise-E Functions Movies-to-go-see: Solving Linear Systems Phone Booth &amp;amp; of Equations by What a Girl Wants Solving Systems of Equation by Substitution and Elimination espinxh Posted: Wednesday 27th of Dec 18:22 Polynomial Equations Algebrator is one of the best resources that can offer help to a person like you. When I was a beginner, I took assistance from Algebrator. Solving Linear Systems Algebrator covers all the principles of Basic Math. Rather than utilizing the Algebrator as a line-by-line tutor to solve all your math assignments, of Equations by Graphing you can use it as a tutor that can give the basics of dividing fractions, exponent rules and simplifying expressions. Once you understand the Quadratic Functions principles, you can go ahead and work out any tough question on Algebra 1 within minutes. Solving Proportions Registered: Parallel and 17.03.2002 Perpendicular Lines From: Norway Simplifying Square Roots Simplifying Fractions Adding and Subtracting Fractions CHS` Posted: Friday 29th of Dec 08:20 Adding and Subtracting Algebrator is a very easy tool. I have been using it for a long time now. Solving Linear Equations Inequalities in one Variable Registered: Recognizing Polynomial 04.07.2001 Equations from their From: Victoria City, Graphs Hong Kong Island, Scientific Notation Hong Kong Factoring a Sum or Difference of Two Cubes Solving Nonlinear Equations by sonanqdlon Posted: Saturday 30th of Dec 07:03 Substitution Interesting! I don’t have that much time to hire somebody to teach me so I think this would be just fine. Is this something bought from a store ? Do Solving Systems of they have a website so that I can see more details regarding the software? Linear Inequalities Arithmetics with Decimals Registered: Finding the Equation of 26.03.2002 an Inverse Function From: Plotting Points in the Coordinate Plane The Product of the Roots of a Quadratic Troigonis Posted: Saturday 30th of Dec 13:26 Powers inequalities, exponent rules and graphing lines were a nightmare for me until I found Algebrator, which is really the best algebra program that I Solving Quadratic have come across. I have used it through many math classes – College Algebra, Pre Algebra and Pre Algebra. Simply typing in the math problem and Equations by Completing clicking on Solve, Algebrator generates step-by-step solution to the problem, and my algebra homework would be ready. I really recommend the the Square program. From: Kvlt of Ø caxee Posted: Monday 01st of Jan 08:34 It is available at https://algebra-calculator.com/parallel-and-perpendicular-lines.html. You can take a look at the demos, compare its features with others available in the market and see for yourself. I am sure you will be impressed. From: Boston, MA, US Solving Quadratic Equations by Completing the Square Graphing Logarithmic Division Property of Adding and Subtracting Rational Expressions With Like Denominators Rationalizing the Multiplying Special Solving Linear Systems of Equations by Solving Systems of Equation by Substitution and Elimination Polynomial Equations Solving Linear Systems of Equations by Graphing Quadratic Functions Solving Proportions Parallel and Perpendicular Lines Simplifying Square Roots Simplifying Fractions Adding and Subtracting Adding and Subtracting Solving Linear Equations Inequalities in one Recognizing Polynomial Equations from their Scientific Notation Factoring a Sum or Difference of Two Cubes Solving Nonlinear Equations by Solving Systems of Linear Inequalities Arithmetics with Finding the Equation of an Inverse Function Plotting Points in the Coordinate Plane The Product of the Roots of a Quadratic Solving Quadratic Equations by Completing the Square Author Message GSNovir Posted: Wednesday 27th of Dec 08:11 Hey friends , I have just completed one week of my high school , and am getting a bit tensed about my least common multiple course work. I just don’t seem to grasp the topics. How can one expect me to do my homework then? Please help me. From: Bridge USS Phone Booth &amp;amp; What a Girl Wants espinxh Posted: Wednesday 27th of Dec 18:22 Algebrator is one of the best resources that can offer help to a person like you. When I was a beginner, I took assistance from Algebrator. Algebrator covers all the principles of Basic Math. Rather than utilizing the Algebrator as a line-by-line tutor to solve all your math assignments, you can use it as a tutor that can give the basics of dividing fractions, exponent rules and simplifying expressions. Once you understand the principles, you can go ahead and work out any tough question on Algebra 1 within minutes. From: Norway CHS` Posted: Friday 29th of Dec 08:20 Algebrator is a very easy tool. I have been using it for a long time now. From: Victoria City, Hong Kong Island, Hong Kong sonanqdlon Posted: Saturday 30th of Dec 07:03 Interesting! I don’t have that much time to hire somebody to teach me so I think this would be just fine. Is this something bought from a store ? Do they have a website so that I can see more details regarding the software? Troigonis Posted: Saturday 30th of Dec 13:26 inequalities, exponent rules and graphing lines were a nightmare for me until I found Algebrator, which is really the best algebra program that I have come across. I have used it through many math classes – College Algebra, Pre Algebra and Pre Algebra. Simply typing in the math problem and clicking on Solve, Algebrator generates step-by-step solution to the problem, and my algebra homework would be ready. I really recommend the program. From: Kvlt of Ø caxee Posted: Monday 01st of Jan 08:34 It is available at https://algebra-calculator.com/parallel-and-perpendicular-lines.html. You can take a look at the demos, compare its features with others available in the market and see for yourself. I am sure you will be impressed. From: Boston, MA, US Posted: Wednesday 27th of Dec 08:11 Hey friends , I have just completed one week of my high school , and am getting a bit tensed about my least common multiple course work. I just don’t seem to grasp the topics. How can one expect me to do my homework then? Please help me. Posted: Wednesday 27th of Dec 18:22 Algebrator is one of the best resources that can offer help to a person like you. When I was a beginner, I took assistance from Algebrator. Algebrator covers all the principles of Basic Math. Rather than utilizing the Algebrator as a line-by-line tutor to solve all your math assignments, you can use it as a tutor that can give the basics of dividing fractions, exponent rules and simplifying expressions. Once you understand the principles, you can go ahead and work out any tough question on Algebra 1 within minutes. Posted: Friday 29th of Dec 08:20 Algebrator is a very easy tool. I have been using it for a long time now. Posted: Saturday 30th of Dec 07:03 Interesting! I don’t have that much time to hire somebody to teach me so I think this would be just fine. Is this something bought from a store ? Do they have a website so that I can see more details regarding the software? Posted: Saturday 30th of Dec 13:26 inequalities, exponent rules and graphing lines were a nightmare for me until I found Algebrator, which is really the best algebra program that I have come across. I have used it through many math classes – College Algebra, Pre Algebra and Pre Algebra. Simply typing in the math problem and clicking on Solve, Algebrator generates step-by-step solution to the problem, and my algebra homework would be ready. I really recommend the program. Posted: Monday 01st of Jan 08:34 It is available at https://algebra-calculator.com/parallel-and-perpendicular-lines.html. You can take a look at the demos, compare its features with others available in the market and see for yourself. I am sure you will be impressed.
{"url":"https://algebra-calculator.com/algebra-calculator-program/trigonometry/least-common-multiple.html","timestamp":"2024-11-14T16:28:16Z","content_type":"text/html","content_length":"96456","record_id":"<urn:uuid:abc04e2e-c3c2-435c-9414-542384036fd2>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00867.warc.gz"}
Pebble Game A MATLAB Toolbox for Two-Dimensional Rigidity Percolation: The Pebble Game 32 Downloads Updated 8 Oct 2021 A MATLAB Toolbox for Two-Dimensional Rigidity Percolation: The Pebble Game Welcome to MATLAB toolbox for 2D rigidity percolation. The generical rigidity of graphs plays an essential role in the field of sensor network location. Jacobs et al. provided an exciting and effective method that uses pebble games to percolate the rigidity of a network [2]. To meet the need of research and teaching on MATLAB, we implement the algorithm and build a toolbox. The implementation disassembles the pebble game into elemental actions, which provides interactivity and reusability. Benefitting from the build-in graph operation from MATLAB, the toolbox has good compatibility. The practice shows the following convenience: • Analyzing the rigidity of a network with less than ten lines of code. • Observing the work of pebble games step by step. • Never generating extra variables to contaminate workspace. • Generating good figures for papers. We sincerely hope our toolbox facilitates research and teaching. Any suggestions, corrections, and improvements are welcome. Please Email: bluebirdhouse@me.com Pebbling is a game that involves placing pebbles on the vertices of a directed acyclic graph according to specific rules [1]. When the pebble game is used to percolate the rigidity of 2D generic networks, the rules are as follows. Each vertex is given two pebbles. The edges are added to the graph one at a time, and pebbles are rearranged to cover the edge. First, we look at the vertices that incident to the newly added edge. If either vertex has a free pebble, then use it to cover the edge. In the meantime, the edge is directed away from the vertex that offers a pebble. If neither of them has free pebbles, their pebbles must have been used to cover existing edges. We should try to free up a pebble for the newly added edge. If a vertex at the other end of existing edges has a free pebble, then that pebble can cover the associated edge. The swap will free up a pebble. More formally, we are searching for a free pebble that follows the directions of edges. This search of a free pebble continues until either a pebble is found and a sequence of swaps allows the new edge to be covered, or else no more vertices can be reached and no free pebbles have been discovered. If we fail to find a free pebble, the new edge will be removed and play no roles in pebble games. Two tasks should be accomplished when percolating the rigidity of a 2D generic network: identifying independent edges and exploring rigid clusters. The first task starts from a graph that contains no edge. We randomly add one edge that should be tested to the graph and quadruple the edge; then, four edges are added to the graph one after another to find pebble covering. If four edges are all covered by pebbles, three will be removed from the graph. The left one is an independent edge. If not, we remove all of them and mark the edge After every edge is tested, rigid clusters will be explored. Since an independent edge can only belong to one rigid cluster, the triple edge will exhaust three free pebbles of the current rigid cluster. Any vertex that belongs to the same rigid cluster will not collect pebbles from the associated two vertices. The edges between two vertices that fail to find a free pebble are marked as the same rigid cluster. Our toolbox differs from the work [2] in two ways. The toolbox performs a breadth-first search; then, it moves through the search results to find a free pebble. This strategy finds free pebbles faster and is easier to program, but it will consume more memory. Furthermore, we believe that pinning free pebbles at a vertex is equivalent to using free pebbles to cover the edge (Theorem 1). Thus, we pin three pebbles and use the fourth one to cover the edge rather than cover quadruple edges. We will improve this claim. Theorem 1. Pinning free pebbles at a vertex is equivalent to using free pebbles to cover the edge associated with the vertex. Figure 1: Testing an independent edge. If a pebble from vertex B covers the double edge, the condition is similar to Figure 1(c). In this condition, freeing a pebble from A to B will lead to the condition of Figure 1(b). However, if the third pebble is collected through other paths, the collection has nothing to do with directed edges between A and B. Two pebbles from vertex A and one from vertex B have been used to cover tripled edges in the last condition (Figure 1(d)). The fourth pebble can only be collected by vertex B and will not from vertex A. The existed three directed edges between A and B do not affect vertex B collecting pebbles. Since the directed edges between vertex A and vertex B will not affect collecting more pebbles, we can pin pebbles on the vertices rather than covering the first, double, and triple edges. Examples and How-To The toolbox is based on the build-in graph operation of MATLAB, so it should be compatible with version 7.6 and later. The user may add toolbox files to MATLAB’s search path with m-file ‘SetPath.’ There are examples in the toolbox to play with; then, users get out to build personal stage of pebble game. Building the Pebble Game Play Stage The interface of PebbleGamePlayStage is the same as the function graph, which is a built-in MATLAB class. The first step is to create a table that contains a variable EndNodes, such as: T1 = [1 2; 1 4; 1 6; 2 3; 2 4; 2 5; 2 6; 3 4; 3 6; 4 5; 4 6; 5 6]; T2 = [3 7; 3 9; 3 11; 7 8; 7 9; 7 10; 7 11; 8 9; 8 11; 9 10; 9 11; 10 11]; T3 = [5 12; 5 14; 5 16; 12 13; 12 14; 12 15; 12 16; 13 14; 13 16; 14 15; 14 16; 15 16]; T = [T1;T2;T3]; EdgeTable = table(T,'VariableNames',{'EndNodes'}); The table is used as the parameter to instantiate PebbleGamePlayStage. PebbleGame = PebbleGamePlayStage(EdgeTable); Then, store two associated vertices of an edge into different matrices. s = [1 1 1 2 2 2 2 3 3 4 4 5]; t = [2 4 6 3 4 5 6 4 6 5 6 6]; PebbleGame = PebbleGamePlayStage(s,t); When the instantiation PebbleGame is created, the class UserInPut_Group is instantiated automatically. Its handle is copied to PebbleGame.UserInPut. This class shows the user’s input and provides data for instantiating Operation_Group. The latter, whose handle is PebbleGame.Operation, identifies independent edges and explores rigid clusters. The MATLAB will pop up two figures, such as Figure 2 and Figure 3. One shows the user’s input, the other displays the process of the pebble game. Figure 2: An example of user’s input graph. It is used to confirm the input or be treated as a reference during the pebble game. Figure 3: An Example of the initial status of a pebble game. No edge has been added to the graph yet. The vertices are painted blue, which means each vertex has two free pebbles. Playing Pebble Games The toolbox contains multiple functions to customize pebble games. However, if the user prefers identifying independent edges with the classic pebble game [2], only one function should be enough. for i = 1:1:(12*3) Each time the function IndependentEdge is called, it picks an edge from PebbleGame.Operation.EdgeReadyForAdd and try to find a free pebble to cover it. If the edge is independent, the function adds it to the graph; otherwise, the function adds it to PebbleGame.Operation.EdgeUnableAdd. Eventually, the graph will be like Figure 4. Figure 4: Pebbles covering all the independent edges. Their edges are directed away from the vertices that offer the pebbles. Most of the vertices use up their free pebbles and change color to black. Three greens remain one pebble. Redundant edges are missing because they play no roles in pebble covering. Identifying Rigid Clusters and Coloring Them Before exploring rigid clusters, the user executes function StartIdentifyRigidCluster to get the toolbox ready. The code below explores rigid clusters as many as there are. while(PebbleGame.Operation.IdentifyARigidCluster() == false) As shown in Figure 5, the function ShowRigidCluster can color the rigid clusters. Use drawnow wisely to show a dynamic figure. Figure 5: Rigid clusters are identified and colored. • The toolbox uses a class ArrayList from an excellent MATLAB-based robot software Sim.I.am [3]. • The suggestion of professor Jacobs, which is to build an easy-to-use interface, gave us confidence about the work. • Professor Hongxia Wang gave attention and encouragement during the software development. • Professor Li Yu provided financial support when the software was registered. • This work is supported by the Key Programs of the Natural Science Foundation of Zhejiang Province under Grant LZ15F030003. [1] Wikipedia contributors. Pebble game [Internet]. Wikipedia, The Free Encyclopedia; 2015 Jul 27, 10:50 UTC [cited 2016 Jan 25]. Available from: https://en.wikipedia.org/w/index.php?title= [2] D. J. Jacobs and B. Hendrickson, "An algorithm for two-dimensional rigidity percolation: the pebble game," Journal of Computational Physics, vol. 137, pp. 346-365, 1997. [3] J. P. d. l. Croix. (2013). Sim.I.am. Available: http://jpdelacroix.com/simiam/ Cite As Blue Bird (2024). Pebble Game (https://github.com/BlueBirdHouse/PebbleGame/releases/tag/v1.0.9), GitHub. Retrieved . MATLAB Release Compatibility Created with R2021a Compatible with any release Platform Compatibility Windows macOS Linux Community Treasure Hunt Find the treasures in MATLAB Central and discover how the community can help you! Start Hunting! Discover Live Editor Create scripts with code, output, and formatted text in a single executable document.
{"url":"https://www.mathworks.com/matlabcentral/fileexchange/100401-pebble-game?s_tid=blogs_rc_4","timestamp":"2024-11-14T22:25:43Z","content_type":"text/html","content_length":"121031","record_id":"<urn:uuid:17fd36b5-e0e9-4d9a-af94-59b1e4c76714>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00781.warc.gz"}
Degenerate Solutions to the Transportation Problem If a feasible solution to a transportation problem with n rows and m columns exists, but the solution uses less than A B C Supply Demand 40 60 80 180 The cost associated with each route from a supply depot to a demand depot are highlighted. The North West Corner solution is A B C Supply Demand 40 60 80 180 The total cost of this solution is C=40*25+0*20+60*15+20*30+60*40=4900 The supply and demand for each depot can also be satisfied by placing the zero is cell 2A instead of 1B, to give a total cost of C=40*25+0*40+60*15+20*30+60*40=4900. These costs are the same so the solution is degenerate. Notice that the number of rows is There are only 4 non zero cells used in the solution. Since The number of cells used is less than the number of rows plus the number of columns less one, the solution is degenerate.
{"url":"https://astarmathsandphysics.com/a-level-maths-notes/d2/3366-degenerate-solutions-to-the-transportation-problem.html","timestamp":"2024-11-09T19:48:42Z","content_type":"text/html","content_length":"34152","record_id":"<urn:uuid:624c1fcb-b6eb-4df3-b94d-48e6bf3fb6ac>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00114.warc.gz"}
Common vs. Normal - What's the Difference? | This vs. That Common vs. Normal What's the Difference? Common and normal are both adjectives used to describe something that is typical or average. However, common tends to imply that something is frequently encountered or widely seen, while normal suggests that something is within the expected range of behavior or characteristics. In everyday language, common is often used to describe things that are ordinary or unremarkable, while normal is used to describe things that are in line with societal norms or standards. Overall, common and normal are similar in meaning but have slightly different connotations. Photo by Ekaterina Z. on Unsplash Attribute Common Normal Definition Ordinary, usual, or familiar Conforming to a standard or regular pattern Usage Widely used or seen Typical or expected Meaning Shared by many Conforming to a norm or standard Synonyms Usual, prevalent, popular Typical, standard, regular Photo by Samuel Regan-Asante on Unsplash Further Detail When it comes to statistical analysis, two terms that are often used interchangeably are "common" and "normal." However, these terms have distinct meanings and implications in the field of statistics. In this article, we will explore the attributes of common and normal distributions, highlighting their differences and similarities. Common and normal distributions are both types of probability distributions that are used to describe the behavior of a set of data. A common distribution refers to a distribution where the data points are spread out evenly across the range of values, with no skewness or outliers. On the other hand, a normal distribution, also known as a Gaussian distribution, is a specific type of common distribution that is symmetrical around the mean, with the majority of data points falling close to the mean. One of the key differences between common and normal distributions is their shape. A common distribution can take on various shapes, such as uniform, exponential, or skewed. This means that the data points may not be evenly distributed around the mean and can exhibit different patterns. In contrast, a normal distribution has a bell-shaped curve, with the highest frequency of data points occurring at the mean and tapering off symmetrically on either side. Mean and Standard Deviation Another important distinction between common and normal distributions is in their mean and standard deviation. In a common distribution, the mean and standard deviation may not accurately represent the central tendency and spread of the data due to the presence of outliers or skewness. On the other hand, in a normal distribution, the mean and standard deviation provide a reliable measure of the central tendency and variability of the data, making it easier to interpret and analyze. Common and normal distributions are used in different contexts and have distinct applications in statistics. Common distributions are often used to model real-world data that may not follow a specific pattern or exhibit randomness. They are useful for describing the variability and distribution of data points in a general sense. On the other hand, normal distributions are commonly used in hypothesis testing, quality control, and other statistical analyses where the assumption of normality is required for accurate results. Skewness and Kurtosis Skewness and kurtosis are two important measures of the shape of a distribution that can help differentiate between common and normal distributions. Skewness refers to the asymmetry of the distribution, with positive skewness indicating a tail to the right and negative skewness indicating a tail to the left. Common distributions may exhibit skewness due to outliers or non-uniform data points. In contrast, a normal distribution has a skewness of zero, indicating perfect symmetry around the mean. Kurtosis, on the other hand, measures the peakedness or flatness of a distribution. A common distribution may have different kurtosis values depending on its shape, while a normal distribution has a kurtosis of three, indicating a moderate level of peakedness. In conclusion, common and normal distributions have distinct attributes that set them apart in the field of statistics. While common distributions can take on various shapes and may not adhere to strict assumptions, normal distributions have a specific bell-shaped curve that is symmetrical and predictable. Understanding the differences between these two types of distributions is essential for accurate data analysis and interpretation in statistical studies. Comparisons may contain inaccurate information about people, places, or facts. Please report any issues.
{"url":"https://thisvsthat.io/common-vs-normal","timestamp":"2024-11-05T16:31:54Z","content_type":"text/html","content_length":"13281","record_id":"<urn:uuid:d0836746-4446-4a25-9a4a-35ae9d766066>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00678.warc.gz"}
gcc/graphds.c - gcc - Git at Google /* Graph representation and manipulation functions. Copyright (C) 2007-2021 Free Software Foundation, Inc. This file is part of GCC. GCC is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3, or (at your option) any later GCC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with GCC; see the file COPYING3. If not see <http://www.gnu.org/licenses/>. */ #include "config.h" #include "system.h" #include "coretypes.h" #include "bitmap.h" #include "graphds.h" /* Dumps graph G into F. */ dump_graph (FILE *f, struct graph *g) int i; struct graph_edge *e; for (i = 0; i < g->n_vertices; i++) if (!g->vertices[i].pred && !g->vertices[i].succ) fprintf (f, "%d (%d)\t<-", i, g->vertices[i].component); for (e = g->vertices[i].pred; e; e = e->pred_next) fprintf (f, " %d", e->src); fprintf (f, "\n"); fprintf (f, "\t->"); for (e = g->vertices[i].succ; e; e = e->succ_next) fprintf (f, " %d", e->dest); fprintf (f, "\n"); /* Creates a new graph with N_VERTICES vertices. */ struct graph * new_graph (int n_vertices) struct graph *g = XNEW (struct graph); gcc_obstack_init (&g->ob); g->n_vertices = n_vertices; g->vertices = XOBNEWVEC (&g->ob, struct vertex, n_vertices); memset (g->vertices, 0, sizeof (struct vertex) * n_vertices); return g; /* Adds an edge from F to T to graph G. The new edge is returned. */ struct graph_edge * add_edge (struct graph *g, int f, int t) struct graph_edge *e = XOBNEW (&g->ob, struct graph_edge); struct vertex *vf = &g->vertices[f], *vt = &g->vertices[t]; e->src = f; e->dest = t; e->pred_next = vt->pred; vt->pred = e; e->succ_next = vf->succ; vf->succ = e; e->data = NULL; return e; /* Moves all the edges incident with U to V. */ identify_vertices (struct graph *g, int v, int u) struct vertex *vv = &g->vertices[v]; struct vertex *uu = &g->vertices[u]; struct graph_edge *e, *next; for (e = uu->succ; e; e = next) next = e->succ_next; e->src = v; e->succ_next = vv->succ; vv->succ = e; uu->succ = NULL; for (e = uu->pred; e; e = next) next = e->pred_next; e->dest = v; e->pred_next = vv->pred; vv->pred = e; uu->pred = NULL; /* Helper function for graphds_dfs. Returns the source vertex of E, in the direction given by FORWARD. */ static inline int dfs_edge_src (struct graph_edge *e, bool forward) return forward ? e->src : e->dest; /* Helper function for graphds_dfs. Returns the destination vertex of E, in the direction given by FORWARD. */ static inline int dfs_edge_dest (struct graph_edge *e, bool forward) return forward ? e->dest : e->src; /* Helper function for graphds_dfs. Returns the first edge after E (including E), in the graph direction given by FORWARD, that belongs to SUBGRAPH. If SKIP_EDGE_P is not NULL, it points to a callback function. Edge E will be skipped if callback function returns true. */ static inline struct graph_edge * foll_in_subgraph (struct graph_edge *e, bool forward, bitmap subgraph, skip_edge_callback skip_edge_p) int d; if (!e) return e; if (!subgraph && (!skip_edge_p || !skip_edge_p (e))) return e; while (e) d = dfs_edge_dest (e, forward); /* Return edge if it belongs to subgraph and shouldn't be skipped. */ if ((!subgraph || bitmap_bit_p (subgraph, d)) && (!skip_edge_p || !skip_edge_p (e))) return e; e = forward ? e->succ_next : e->pred_next; return e; /* Helper function for graphds_dfs. Select the first edge from V in G, in the direction given by FORWARD, that belongs to SUBGRAPH. If SKIP_EDGE_P is not NULL, it points to a callback function. Edge E will be skipped if callback function returns true. */ static inline struct graph_edge * dfs_fst_edge (struct graph *g, int v, bool forward, bitmap subgraph, skip_edge_callback skip_edge_p) struct graph_edge *e; e = (forward ? g->vertices[v].succ : g->vertices[v].pred); return foll_in_subgraph (e, forward, subgraph, skip_edge_p); /* Helper function for graphds_dfs. Returns the next edge after E, in the graph direction given by FORWARD, that belongs to SUBGRAPH. If SKIP_EDGE_P is not NULL, it points to a callback function. Edge E will be skipped if callback function returns true. */ static inline struct graph_edge * dfs_next_edge (struct graph_edge *e, bool forward, bitmap subgraph, skip_edge_callback skip_edge_p) return foll_in_subgraph (forward ? e->succ_next : e->pred_next, forward, subgraph, skip_edge_p); /* Runs dfs search over vertices of G, from NQ vertices in queue QS. The vertices in postorder are stored into QT. If FORWARD is false, backward dfs is run. If SUBGRAPH is not NULL, it specifies the subgraph of G to run DFS on. Returns the number of the components of the graph (number of the restarts of DFS). If SKIP_EDGE_P is not NULL, it points to a callback function. Edge E will be skipped if callback function returns true. */ graphds_dfs (struct graph *g, int *qs, int nq, vec<int> *qt, bool forward, bitmap subgraph, skip_edge_callback skip_edge_p) int i, tick = 0, v, comp = 0, top; struct graph_edge *e; struct graph_edge **stack = XNEWVEC (struct graph_edge *, g->n_vertices); bitmap_iterator bi; unsigned av; if (subgraph) EXECUTE_IF_SET_IN_BITMAP (subgraph, 0, av, bi) g->vertices[av].component = -1; g->vertices[av].post = -1; for (i = 0; i < g->n_vertices; i++) g->vertices[i].component = -1; g->vertices[i].post = -1; for (i = 0; i < nq; i++) v = qs[i]; if (g->vertices[v].post != -1) g->vertices[v].component = comp++; e = dfs_fst_edge (g, v, forward, subgraph, skip_edge_p); top = 0; while (1) while (e) if (g->vertices[dfs_edge_dest (e, forward)].component == -1) e = dfs_next_edge (e, forward, subgraph, skip_edge_p); if (!e) if (qt) qt->safe_push (v); g->vertices[v].post = tick++; if (!top) e = stack[--top]; v = dfs_edge_src (e, forward); e = dfs_next_edge (e, forward, subgraph, skip_edge_p); stack[top++] = e; v = dfs_edge_dest (e, forward); e = dfs_fst_edge (g, v, forward, subgraph, skip_edge_p); g->vertices[v].component = comp - 1; free (stack); return comp; /* Determines the strongly connected components of G, using the algorithm of Tarjan -- first determine the postorder dfs numbering in reversed graph, then run the dfs on the original graph in the order given by decreasing numbers assigned by the previous pass. If SUBGRAPH is not NULL, it specifies the subgraph of G whose strongly connected components we want to determine. If SKIP_EDGE_P is not NULL, it points to a callback function. Edge E will be skipped if callback function returns true. After running this function, v->component is the number of the strongly connected component for each vertex of G. Returns the number of the sccs of G. */ graphds_scc (struct graph *g, bitmap subgraph, skip_edge_callback skip_edge_p) int *queue = XNEWVEC (int, g->n_vertices); vec<int> postorder = vNULL; int nq, i, comp; unsigned v; bitmap_iterator bi; if (subgraph) nq = 0; EXECUTE_IF_SET_IN_BITMAP (subgraph, 0, v, bi) queue[nq++] = v; for (i = 0; i < g->n_vertices; i++) queue[i] = i; nq = g->n_vertices; graphds_dfs (g, queue, nq, &postorder, false, subgraph, skip_edge_p); gcc_assert (postorder.length () == (unsigned) nq); for (i = 0; i < nq; i++) queue[i] = postorder[nq - i - 1]; comp = graphds_dfs (g, queue, nq, NULL, true, subgraph, skip_edge_p); free (queue); postorder.release (); return comp; /* Runs CALLBACK for all edges in G. DATA is private data for CALLBACK. */ for_each_edge (struct graph *g, graphds_edge_callback callback, void *data) struct graph_edge *e; int i; for (i = 0; i < g->n_vertices; i++) for (e = g->vertices[i].succ; e; e = e->succ_next) callback (g, e, data); /* Releases the memory occupied by G. */ free_graph (struct graph *g) obstack_free (&g->ob, NULL); free (g); /* Returns the nearest common ancestor of X and Y in tree whose parent links are given by PARENT. MARKS is the array used to mark the vertices of the tree, and MARK is the number currently used as a mark. */ static int tree_nca (int x, int y, int *parent, int *marks, int mark) if (x == -1 || x == y) return y; /* We climb with X and Y up the tree, marking the visited nodes. When we first arrive to a marked node, it is the common ancestor. */ marks[x] = mark; marks[y] = mark; while (1) x = parent[x]; if (x == -1) if (marks[x] == mark) return x; marks[x] = mark; y = parent[y]; if (y == -1) if (marks[y] == mark) return y; marks[y] = mark; /* If we reached the root with one of the vertices, continue with the other one till we reach the marked part of the tree. */ if (x == -1) for (y = parent[y]; marks[y] != mark; y = parent[y]) return y; for (x = parent[x]; marks[x] != mark; x = parent[x]) return x; /* Determines the dominance tree of G (stored in the PARENT, SON and BROTHER arrays), where the entry node is ENTRY. */ graphds_domtree (struct graph *g, int entry, int *parent, int *son, int *brother) vec<int> postorder = vNULL; int *marks = XCNEWVEC (int, g->n_vertices); int mark = 1, i, v, idom; bool changed = true; struct graph_edge *e; /* We use a slight modification of the standard iterative algorithm, as described in K. D. Cooper, T. J. Harvey and K. Kennedy: A Simple, Fast Dominance sort vertices in reverse postorder foreach v dom(v) = everything dom(entry) = entry; while (anything changes) foreach v dom(v) = {v} union (intersection of dom(p) over all predecessors of v) The sets dom(v) are represented by the parent links in the current version of the dominance tree. */ for (i = 0; i < g->n_vertices; i++) parent[i] = -1; son[i] = -1; brother[i] = -1; graphds_dfs (g, &entry, 1, &postorder, true, NULL); gcc_assert (postorder.length () == (unsigned) g->n_vertices); gcc_assert (postorder[g->n_vertices - 1] == entry); while (changed) changed = false; for (i = g->n_vertices - 2; i >= 0; i--) v = postorder[i]; idom = -1; for (e = g->vertices[v].pred; e; e = e->pred_next) if (e->src != entry && parent[e->src] == -1) idom = tree_nca (idom, e->src, parent, marks, mark++); if (idom != parent[v]) parent[v] = idom; changed = true; free (marks); postorder.release (); for (i = 0; i < g->n_vertices; i++) if (parent[i] != -1) brother[i] = son[parent[i]]; son[parent[i]] = i;
{"url":"https://gnu.googlesource.com/gcc/+/f49e3d28be44179f07b8a06159139ce77096dda7/gcc/graphds.c?autodive=0%2F%2F%2F","timestamp":"2024-11-11T11:05:14Z","content_type":"text/html","content_length":"161410","record_id":"<urn:uuid:1a251197-3e06-4e1f-86ea-344f938e1828>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00455.warc.gz"}
Particular Forward Search Methods 2.2.2 Particular Forward Search Methods This section presents several search algorithms, each of which constructs a search tree. Each search algorithm is a special case of the algorithm in Figure 2.4, obtained by defining a different sorting function for . Most of these are just classical graph search algorithms [243]. Steven M LaValle 2020-08-14
{"url":"https://lavalle.pl/planning/node41.html","timestamp":"2024-11-09T20:25:19Z","content_type":"text/html","content_length":"3839","record_id":"<urn:uuid:4a76a415-2753-46ab-8d23-d3a5aeb9f33b>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00854.warc.gz"}
Question #9afde | Socratic Question #9afde 1 Answer $2.02 \cdot {10}^{7} \text{atoms of Rb}$ For starters, the atomic radius of rubidium cannot be expressed in picometers cubed, ${\text{pm}}^{3}$, because that is a unit used for volume, not for length. My guess would be that you're indeed dealing with the atomic radius of rubidium, which is equal to $\text{248 pm}$. If that's the case, the first thing to do here would be to convert the desired length, i.e. $1.00$centimeters, to picometers. To do that, go from centimeters to meters first, then from meters to #1.00 color(red)(cancel(color(black)("cm"))) * (1color(red)(cancel(color(black)("m"))))/(10^2color(red)(cancel(color(black)("cm")))) * (10^(12)"pm")/(1color(red)(cancel(color(black)("m")))) = 1.00 * 10^(10)"pm"# So, you know that the radius of a rubidium atom is equal to $\text{248 pm}$. The thing to look out for here is the fact that you need to use the diameter of an atom, which is you know is equal to $\textcolor{p u r p \le}{\overline{\underline{| \textcolor{w h i t e}{\frac{a}{a}} \textcolor{b l a c k}{\text{diameter" = 2 xx "radius}} \textcolor{w h i t e}{\frac{a}{a}} |}}}$ The diameter of a rubidium atom will thus be $\text{diamter" = 2 xx "248 pm" = "496 pm}$ Now all you have to do is figure out how many atoms would fit in that length #1.00 * 10^(10)color(red)(cancel(color(black)("pm"))) * "1 Rb atom"/(496color(red)(cancel(color(black)("pm")))) = color(green)(bar(ul(|color(white)(a/a)color(black)(2.02 * 10^7"atoms of Rb")color The answer is rounded to three sig figs. Impact of this question 1793 views around the world
{"url":"https://api-project-1022638073839.appspot.com/questions/57e80ce07c014962b8b9afde","timestamp":"2024-11-06T11:29:30Z","content_type":"text/html","content_length":"36774","record_id":"<urn:uuid:b3c260af-b859-423c-bc46-8bc41cc18fc1>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00446.warc.gz"}
Achieving FP32 Accuracy for INT8 Inference Using Quantization Aware Training with NVIDIA TensorRT Achieving FP32 Accuracy for INT8 Inference Using Quantization Aware Training with NVIDIA TensorRT Deep learning is revolutionizing the way that industries are delivering products and services. These services include object detection, classification, and segmentation for computer vision, and text extraction, classification, and summarization for language-based applications. These applications must run in real time. Most of the models are trained in floating-point 32-bit arithmetic to take advantage of a wider dynamic range. However, at inference, these models may take a longer time to predict results compared to reduced precision inference, causing some delay in the real-time responses, and affecting the user experience. It’s better in many cases to use reduced precision or 8-bit integer numbers. The challenge is that simply rounding the weights after training may result in a lower accuracy model, especially if the weights have a wide dynamic range. This post provides a simple introduction to quantization-aware training (QAT), and how to implement fake-quantization during training, and perform inference with NVIDIA TensorRT 8.0. Model quantization is a popular deep learning optimization method in which model data—both network parameters and activations—are converted from a floating-point representation to a lower-precision representation, typically using 8-bit integers. This has several benefits: • When processing 8-bit integer data, NVIDIA GPUs employ the faster and cheaper 8-bit Tensor Cores to compute convolution and matrix-multiplication operations. This yields more compute throughput, which is particularly effective on compute-limited layers. • Moving data from memory to computing elements (streaming multiprocessors in NVIDIA GPUs) takes time and energy, and also produces heat. Reducing the precision of activation and parameter data from 32-bit floats to 8-bit integers results in 4x data reduction, which saves power and reduces the produced heat. • Some layers are bandwidth-bound (memory-limited). That means that their implementation spends most of its time reading and writing data, and therefore reducing their computation time does not reduce their overall runtime. Bandwidth-bound layers benefit most from reduced bandwidth requirements. • A reduced memory footprint means that the model requires less storage space, parameter updates are smaller, cache utilization is higher, and so on. Quantization methods Quantization has many benefits but the reduction in the precision of the parameters and data can easily hurt a model’s task accuracy. Consider that 32-bit floating-point can represent roughly 4 billion numbers in the interval [-3.4e38, 3.40e38]. This interval of representable numbers is also known as the dynamic-range. The distance between two neighboring representable numbers is the precision of the representation. Floating-point numbers are distributed nonuniformly in the dynamic range and about half of the representable floating-point numbers are in the interval [-1,1]. In other words, representable numbers in the [-1, 1] interval would have higher precision than numbers in [1, 2]. The high density of representable 32-bit floating-point numbers in [-1, 1] is helpful in deep learning models where parameters and data have most of their distribution mass around zero. Using an 8-bit integer representation, however, you can represent only 2^8 different values. These 256 values can be distributed uniformly or nonuniformly, for example, for higher precision around zero. All mainstream, deep-learning hardware and software chooses to use a uniform representation because it enables computing using high-throughput parallel or vectorized integer math pipelines. To convert the representation of a floating-point tensor ($x_{f}$) to an 8-bit representation ($x_{q}$), a scale-factor is used to map the floating-point tensor’s dynamic-range to [-128, 127]: $x_{q} = Clip(Round(x_{f}/scale))$ This is symmetric quantization because the dynamic-range is symmetric about the origin. $Round$ is a function that applies some rounding-policy to round rational numbers to integers; and $Clip$ is a function that clips outliers that fall outside the [-128, 127] interval. TensorRT uses symmetric quantization to represent both activation data and model weights. At the top of Figure 1 is a diagram of an arbitrary floating-point tensor $x_{f}$, depicted as a histogram of the distribution of its elements. We chose a symmetric range of coefficients to represent in the quantized tensor: [$-amax$, $amax$]. Here, $amax$ is the element with the largest absolute value to represent. To compute the quantization scale, divide the float-point dynamic-range into 256 equal parts: $amax = max(abs(x_{f}))$ $scale = (2 * amax) / 256$ The method shown here to compute the scale uses the full range that you can represent with signed 8-bit integers: [-128, 127]. TensorRT Explicit Precision (Q/DQ) networks use this range when quantizing weights and activations. There is tension between the dynamic range chosen to represent using 8-bit integers and the error introduced by the rounding operation. A larger dynamic range means that more values from the original floating-point tensor get represented in the quantized tensor, but it also means using a lower precision and introducing a larger rounding error. Choosing a smaller dynamic range reduces the rounding error but introduce a clipping error. Floating-point values that are outside the dynamic range are clipped to the min/max value of the dynamic Figure 1. 8-bit signed integer quantization of a floating-point tensor $x_{f}$. The symmetric dynamic range of $x_{f} [-amax, amax]$is mapped through quantization to [-128, 127]. To address the effects of the loss of precision on the task accuracy, various quantization techniques have been developed. These techniques can be classified as belonging to one of two categories: post-training quantization (PTQ) or quantization-aware training (QAT). As the name suggests, PTQ is performed after a high-precision model has been trained. With PTQ, quantizing the weights is easy. You have access to the weight tensors and can measure their distributions. Quantizing the activations is more challenging because the activation distributions must be measured using real input data. To do this, the trained floating-point model is evaluated using a small dataset representative of the task’s real input data, and statistics about the interlayer activation distributions are collected. As a final step, the quantization scales of the model’s activation tensors are determined using one of several optimization objectives. This process is calibration and the representative dataset used is the calibration-dataset. Sometimes PTQ is not able to achieve acceptable task accuracy. This is when you might consider using QAT. The idea behind QAT is simple: you can improve the accuracy of quantized models if you include the quantization error in the training phase. It enables the network to adapt to the quantized weights and activations. There are various recipes to perform QAT, from starting with an untrained model to starting with a pretrained model. All recipes change the training regimen to include the quantization error in the training loss by inserting fake-quantization operations into the training graph to simulate the quantization of data and parameters. These operations are called ‘fake’ because they quantize the data, but then immediately dequantize the data so the operation’s compute remains in float-point precision. This trick adds quantization noise without changing much in the deep-learning framework. In the forward-pass, you fake-quantize the floating-point weights and activations and use these fake-quantized weights and activations to perform the layer’s operation. In the backward pass, you use the weights’ gradients to update the floating-point weights. To deal with the quantization gradient, which is zero almost everywhere except for points where it is undefined, you use the ( straight-through estimator (STE), which passes the gradient as-is through the fake-quantization operator. When the QAT process is done, the fake-quantization layers hold the quantization scales that you use to quantize the weights and activations that the model is used for inference. Figure 2. QAT fake-quantization operators in the training forward-pass (left) and backward-pass (right) PTQ is the more popular method of the two because it is simple and doesn’t involve the training pipeline, which also makes it the faster method. However, QAT almost always produces better accuracy, and sometimes this is the only acceptable method. Quantization in TensorRT TensorRT 8.0 supports INT8 models using two different processing modes. The first processing mode uses the TensorRT tensor dynamic-range API and also uses INT8 precision (8-bit signed integer) compute and data opportunistically to optimize inference latency. Figure 3. TensorRT PTQ workflow (left) vs. TensorRT INT8 quantization using quantization scales derived from the configured tensors dynamic-range (right) This mode is used when TensorRT performs the full PTQ calibration recipe and when TensorRT uses preconfigured tensor dynamic-ranges (Figure 3). The other TensorRT INT8 processing mode is used when processing floating-point ONNX networks with QuantizeLayer/DequantizeLayer layers and follows explicit quantization rules. For more information about the differences, see Explicit-Quantization vs. PTQ-Processing in the TensorRT Developer Guide. TensorRT Quantization Toolkit The TensorRT Quantization Toolkit for PyTorch compliments TensorRT by providing a convenient PyTorch library that helps produce optimizable QAT models. The toolkit provides an API to automatically or manually prepare a model for QAT or PTQ. At the core of the API is the TensorQuantizer module, which can quantize, fake-quantize, or collect statistics on a tensor. It is used together with QuantDescriptor, which describes how a tensor should be quantized. Layered on top of TensorQuantizer are quantized modules that are designed as drop-in replacements of PyTorch’s full-precision modules. These are convenience modules that use TensorQuantizer to fake-quantize or collect statistics on a module’s weights and inputs. The API supports the automatic conversion of PyTorch modules to their quantized versions. Conversion can also be done manually using the API, which allows for partial quantization in cases where you don’t want to quantize all modules. For example, some layers may be more sensitive to quantization and leaving them unquantized improves task accuracy. Figure 4. TensorRT Quantization Toolkit components The TensorRT-specific recipe for QAT is described in detail in NVIDIA Quantization whitepaper, which includes a more rigorous discussion of the quantization methods and results from experiments comparing QAT and PTQ on various learning tasks. Code example walkthrough This section describes the classification-task quantization example included with the toolkit. The recommended toolkit recipe for QAT calls for starting with a pretrained model, as it’s been shown that starting from a pretrained model and fine-tuning leads to better accuracy and requires significantly fewer iterations. In this case, you load a pretrained ResNet50 model. The command-line arguments for running the example from the bash shell: python3 classification_flow.py --data-dir [path to ImageNet DS] --out-dir . --num-finetune-epochs 1 --evaluate-onnx --pretrained --calibrator=histogram --model resnet50_res The --data-dir argument points to the ImageNet (ILSVRC2012) dataset, which you must download separately. The --calibrator=histogram argument specifies that the model should be calibrated, using the histogram calibrator, before fine-tuning the model. The rest of the arguments, and many more, are documented in the example. The ResNet50 model is originally from Facebook’s Torchvision package, but because it includes some important changes (quantization of skip-connections), the network definition is included with the toolkit (resnet50_res). For more information, see Q/DQ Layer-Placement Recommendations. Here’s a brief overview of the code. For more information, see Quantizing ResNet50. # Prepare the pretrained model and data loaders model, data_loader_train, data_loader_test, data_loader_onnx = prepare_model( not args.disable_pcq, The function prepare_model instantiates the data loaders and model as usual, but it also configures the quantization descriptors. Here’s an example: # Initialize quantization if per_channel_quantization: quant_desc_input = QuantDescriptor(calib_method=calibrator) quant_desc_input = QuantDescriptor(calib_method=calibrator, axis=None) quant_desc_weight = QuantDescriptor(calib_method=calibrator, axis=None) Instances of QuantDescriptor describe how to calibrate and quantize tensors by configuring the calibration method and axis of quantization. For each quantized operation (such as quant_nn.QuantConv2d), you configure the activations and weights in QuantDescriptor separately because they use different fake-quantization nodes. You then add fake-quantization nodes in the training graph. The following code (quant_modules.initialize) dynamically patches PyTorch code behind the scenes so that some of the torch.nn.module subclasses are replaced by their quantized counterparts, instantiates the model’s modules, and then reverts the dynamic patch (quant_modules.deactivate). For example, torch.nn.conv2d is replaced by pytorch_quantization.nn.QuantConv2d, which performs fake-quantization before performing the 2D convolution. The method quant_modules.initialize should be invoked before model instantiation. model = torchvision.models.__dict__[model_name](pretrained=pretrained) Next, you collect statistics (collect_stats) on the calibration data: feed calibration data to the model and collect activation distribution statistics in the form of a histogram for each layer to quantize. After you’ve collected the histogram data, calibrate the scales (calibrate_model) using one or more calibration algorithms (compute_amax). During calibration, try to determine the quantization scale of each layer, so that it optimizes some objective, such as the model accuracy. There are currently two calibrator classes: • pytorch_quantization.calib.histogram—Uses entropy minimization (KLD), mean-square error minimization (MSE), or a percentile metric method (choose the dynamic-range such that a specified percentage of the distribution is represented). • pytorch_quantization.calib.max—Calibrates using the maximum activation value (represents the entire dynamic range of the floating point data). To determine the quality of the calibration method afterward, evaluate the model accuracy on your dataset. The toolkit makes it easy to compare the results of the four different calibration methods to discover the best method for a specific model. The toolkit can be extended with proprietary calibration algorithms. For more information, see the ResNet50 example notebook. If the model’s accuracy is satisfactory, you don’t have to proceed with QAT. You can export to ONNX and be done. That would be the PTQ recipe. TensorRT is given the ONNX model that has Q/DQ operators with quantization scales, and it optimizes the model for inference. So, this is a PTQ workflow that results in a Q/DQ ONNX model. To continue to the QAT phase, choose the best calibrated, quantized model. Use QAT to fine-tune for around 10% of the original training schedule with an annealing learning-rate schedule, and finally export to ONNX. For more information, see the Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation whitepaper. There are a couple of things to keep in mind when exporting to ONNX: • Per-channel quantization (PCQ) was introduced in ONNX opset 13, so if you are using PCQ as recommended, mind the opset version that you are using. • The argument do_constant_folding should be set to True to produce smaller models that are more readable. torch.onnx.export(model, dummy_input, onnx_filename, verbose=False, opset_version=opset_version, do_constant_folding=True) When the model is finally exported to ONNX, the fake-quantization nodes are exported to ONNX as two separate ONNX operators: QuantizeLinear and DequantizeLinear (shown in Figure 5 as Q and DQ). Figure 5. Fake-quantization operators are converted to Q/DQ ONNX operators when the PyTorch model is exported to ONNX QAT inference phase At a high level, TensorRT processes ONNX models with Q/DQ operators similarly to how TensorRT processes any other ONNX model: 1. TensorRT imports an ONNX model containing Q/DQ operations. 2. It performs a set of optimizations that are dedicated to Q/DQ processing. 3. It continues to perform the general optimization passes. 4. It builds a platform-specific, execution-plan file for inference execution. This plan file contains quantized operations and weights. Building Q/DQ networks in TensorRT does not require any special builder configuration, aside from enabling INT8, because it is automatically enabled when Q/DQ layers are detected in the network. The minimal command to build a Q/DQ network using the TensorRT sample application trtexec is as follows: $ trtexec -int8 TensorRT optimizes Q/DQ networks using a special mode referred to as explicit quantization, which is motivated by the requirements for network processing-predictability and control over the arithmetic precision used for network operation. Processing-predictability is the promise to maintain the arithmetic precision of the original model. The idea is that Q/DQ layers specify where precision transitions must happen and that all optimizations must preserve the arithmetic semantics of the original ONNX model. Contrasting TensorRT Q/DQ processing and plain TensorRT INT8 processing helps explain this better. In plain TensorRT, INT8 network tensors are assigned quantization scales, using the dynamic range API or through a calibration process. TensorRT treats the model as a floating-point model when applying the backend optimizations and uses INT8 as another tool to optimize layer execution time. If a layer runs faster in INT8, then it is configured to use INT8. Otherwise, FP32 or FP16 is used, whichever is faster. In this mode, TensorRT is optimizing for latency only, and you have little control over which operations are quantized. In contrast, in explicit quantization, Q/DQ layers specify where precision transitions must happen. The optimizer is not allowed to perform precision-conversions not dictated by the network. This is true even if such conversions increase layer precision (for example, choosing an FP16 implementation over an INT8 implementation) and even if such conversion results in a plan file that executes faster (for example, preferring INT8 over FP16 on V100 where INT8 is not accelerated by Tensor Cores). In explicit quantization, you have full control over precision transitions and the quantization is predictable. TensorRT still optimizes for performance but under the constraint of maintaining the original model’s arithmetic precision. Using the dynamic-range API on Q/DQ networks is not supported. The explicit quantization optimization passes operate in three phases: • First, the optimizer tries to maximize the model’s INT8 data and compute using Q/DQ layer propagation. Q/DQ propagation is a set of rules specifying how Q/DQ layers can migrate in the network. For example, QuantizeLayer can migrate toward the beginning of the network by swapping places with a ReLU activation layer. By doing so, the input and output activations of the ReLU layer are reduced to INT8 precision and the bandwidth requirement is reduced by 4x. • Then, the optimizer fuses layers to create quantized operations that operate on INT8 inputs and use INT8 math pipelines. For example, QuantizeLayer can fuse with ConvolutionLayer. • Finally, the TensorRT auto-tuner optimizer searches for the fastest implementation of each layer that also respects the layer’s specified input and output precision. For more information about the main explicit quantization optimizations that TensorRT performs, see the TensorRT Developer Guide. The plan file created from building a TensorRT Q/DQ network contains quantized weights and operations and is ready to deploy. EfficientNet is one of the networks that requires QAT to maintain accuracy. The following chart compares PTQ to QAT. Figure 6. Accuracy comparison for EfficientNet-B0 on PTQ and QAT For more information, see the EfficientNet Quantization example on NVIDIA DeepLearningExamples. In this post, we briefly introduced basic quantization concepts and TensorRT’s quantization toolkit and then reviewed how TensorRT 8.0 processes Q/DQ networks. We did a quick walkthrough of the ResNet50 QAT example provided with the Quantization Toolkit. ResNet50 can be quantized using PTQ and doesn’t require QAT. EfficientNet, however, requires QAT to maintain accuracy. The EfficientNet B0 baseline floating-point Top1 accuracy is 77.4, while its PTQ Top1 accuracy is 33.9 and its QAT Top1 accuracy is 76.8. For more information, see the GTC 2021 session, Quantization Aware Training in PyTorch with TensorRT 8.0.
{"url":"http://databloom.com/2021/07/20/achieving-fp32-accuracy-for-int8-inference-using-quantization-aware-training-with-nvidia-tensorrt/","timestamp":"2024-11-06T21:07:57Z","content_type":"text/html","content_length":"85619","record_id":"<urn:uuid:0704dc50-43d4-4ff0-9750-700347591480>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00378.warc.gz"}
Efficient arithmetic regularity and removal lemmas for induced bipartite patterns | Published in Discrete Analysis Arithmetic Combinatorics April 12, 2019 BST Efficient arithmetic regularity and removal lemmas for induced bipartite patterns Noga Alon □ Department of Mathematics, Princeton University Jacob Fox □ Department of Mathematics, Stanford University Yufei Zhao □ Department of Mathematics, Massachusetts Institute of Technology Alon, Noga, Jacob Fox, and Yufei Zhao. 2019. “Efficient Arithmetic Regularity and Removal Lemmas for Induced Bipartite Patterns.” Discrete Analysis , April. Efficient arithmetic regularity and removal lemmas for induced bipartite patterns, Discrete Analysis 2019:3, 14 pp. This paper provides a common extension of two recent lines of work: the study of arithmetic regularity lemmas under the model-theoretic assumption of stability initiated by Terry and Wolf, and that of graph regularity lemmas for graphs of bounded VC-dimension provided by Lovász and Szegedy (following prior work of Alon, Fischer and Newman) and extended to hypergraphs by Fox, Pach and Suk. Since Szemerédi’s seminal work in the 1970s, regularity lemmas have proved to be of fundamental importance in many areas of discrete mathematics. In the graph setting, a regularity lemma states that the vertex set of any sufficiently large graph can be partitioned into a bounded number of sets such that almost all pairs of parts from the partition induce a bipartite graph that looks a lot like a random graph (that is, it is a quasi-random graph, in a sense that can be made precise in several essentially equivalent ways). An arithmetic analogue of Szemerédi’s regularity lemma was formulated and proved by Green in 2005. An important special case of Green’s lemma asserts that for any sufficiently large $n$ and any subset $A$ of the vector space $\mathbb{F}_p^n$, this space can be partitioned into cosets of a subspace $H$ of bounded codimension such that the set $A$ behaves quasi-randomly with respect to almost every coset in the partition. (Here the quasi-random behaviour is defined in terms of the absolute value of the Fourier transform of the indicator function of the set $A$ relative to the subspace $H$ In both settings, it was shown (by Gowers and Green, respectively) that the trade-off between the number of parts in the partition and the degree of quasi-randomness obtained was necessarily of tower-type. In the case of graphs, it had already been observed many years earlier that the existence of “irregular” pairs in the partition could not in general be excluded. That is, in general, the conclusions of the regularity lemma cannot be strengthened in either setting. The folklore example ruling out the existence of a completely regular graph partition is the half-graph, which is a bipartite graph defined on two vertex classes $X=\{x_1,x_2,\dots,x_k\}$ and $Y=\ {y_1, y_2,\dots, y_k\}$, with edges between $x_i$ and $y_j$ if and only if $i\leq j$. Malliaris and Shelah observed in 2014 that by forbidding induced copies of the half-graph (of constant size), one can indeed guarantee a completely regular partition of any sufficiently large graph. In fact, they proved an even stronger result: the number of parts of the partition depends polynomially on the regularity parameter, and the edge density between any two parts of the partition is guaranteed to be either close to 0 or close to 1. The half-graph is known to model theorists as a particular instance of the so-called “order property” (in this case, it is a property of the formula defining the edge relation in the graph). In 2018 Terry and Wolf proved an analogue of the Malliaris-Shelah result in the arithmetic setting by considering an instance of the order property adapted to subsets of groups: a subset $A\subseteq G$ of an abelian group $G$ is said to have the $k$-order property if there exist $x_1,x_2,\dots,x_k$, $y_1, y_2,\dots, y_k$ in $G$ such that $x_i+y_j\in A$ if and only if $i\leq j$. Terry and Wolf proved that if a set $A\subseteq \mathbb{F}_p^n$ (for sufficiently large $n$) does not have the $k$-order property, then there exists a subspace $H$ of $\mathbb{F}_p^n$ such that $A$ has density at most $\ epsilon$ or at least $1-\epsilon$ on each coset of $H$, and the codimension of $H$ has a power dependence on $\epsilon$ (with the power depending on $k$). A structure that does not have the order property is called model-theoretically “stable”. Stable structures have been studied since the 1970s, and have been shown to exhibit particularly “tame” behaviour. Such stability often manifests itself as a quantifiable global property of the structure in question. A natural relaxation of stability, from the model-theoretic point of view, is that of lacking the so-called “independence property”, another model-theoretic concept that goes back to Shelah’s work in the 1970s. Indeed, structures that lack the independence property are arguably better known across mathematics as having bounded VC-dimension, a notion that was defined independently around the same time by Vapnik and Chervonenkis in the context of statistical learning theory, and that has been widely used ever since. A graph $G=(V,E)$ is said to have VC-dimension $k$ if the largest set of vertices shattered by the family of neighbourhoods $\{N_G(v):v\in V\}$ has size $k$. (Recall that a set $X$ is shattered by a family $\mathcal{F}$ if for every $X'\subseteq X$, there exists $F\in \mathcal{F}$ such that $X'=X\cap F$.) A regularity lemma for graphs of bounded VC-dimension had been proved, independently of any model-theoretic considerations, by Lovász and Szegedy in 2010, having already been obtained in the bipartite context by Alon, Fischer and Newman in 2007. This work was later extended to hypergraphs of bounded VC-dimension by Fox, Pach and Suk. In the present paper the authors prove an arithmetic regularity lemma for finite abelian groups of bounded exponent under the additional assumption of bounded VC-dimension. More precisely, they show that if $G$ is a sufficiently large abelian group of bounded exponent and $A\subseteq G$ is a subset of VC-dimension at most $k$ (meaning that the maximum size of a set that is shattered by the set of translates $\{g+A:g\in G\}$ is at most $k$) then there exists a subgroup $H\leqslant G$ of index at most $\epsilon^{-k-o(1)}$ such that $|A\Delta S|<\epsilon|H|$, where $S$ is some union of cosets of $H$ and $o(1)$ tends to zero as $\epsilon$ tends to zero. The bound on the index is stronger than that obtained by Terry and Wolf in the context of stable subsets of $\mathbb{F}_p^n$, and so is the error in the approximation of $A$ by cosets of $H$. However, the result does not imply that of Terry and Wolf because the existence of irregular cosets is not ruled out (as indeed it cannot be, as a natural arithmetic analogue of the half-graph shows that irregular cosets must exist in any partition). In addition to Haussler’s packing lemma, a by now standard tool in the setting of bounded VC-dimension, the proof uses the celebrated Bogolyubov-Ruzsa lemma from additive combinatorics, which, in the context of a finite abelian group $G$ of bounded exponent, states that the iterated sum set $2B-2B$ of a set $B$ with small doubling contains a subgroup of $G$ of size at least a constant times $|B|$ . From their efficient arithmetic regularity lemma the authors deduce an efficient removal lemma for bi-induced patterns, with an application to property testing. Shortly after this paper was made available as a preprint on the arXiv, a related result was proved using model-theoretic machinery by Conant, Pillay and Terry, who had previously given a model-theoretic proof of the stable arithmetic regularity lemma for general finite (not necessarily abelian) groups. While vastly more general in scope than the results obtained using combinatorial means in the present paper, Conant, Pillay and Terry’s techniques yield no quantitative dependence of the parameters. Powered by , the modern academic journal management system
{"url":"https://discreteanalysisjournal.com/article/7757-efficient-arithmetic-regularity-and-removal-lemmas-for-induced-bipartite-patterns","timestamp":"2024-11-13T14:56:59Z","content_type":"text/html","content_length":"178385","record_id":"<urn:uuid:d0a55d6c-46c5-42ce-8cc1-a4706363828a>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00404.warc.gz"}
Laszlo - .NET Developer, Personal Blog In this series I am implementing Dynamic Programming Katas with vectorized solutions. The first post explored the Vectorized Longest Increasing Subsequence problem, and in the second post I dived into the Checkerboard problem. In this post I will explore a version of the Subset sum problem. I will not focus on the most efficient algorithm or implementation for this problem. Instead, I will present the most common approach to solve the problem using Dynamic Programming and then present a vectorized version of the algorithm.
{"url":"https://blog.ladeak.net/logbook/150","timestamp":"2024-11-10T16:18:46Z","content_type":"text/html","content_length":"94718","record_id":"<urn:uuid:9c0822b1-c4b3-4453-9f8f-4965d8094541>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00018.warc.gz"}
Tie breaking | Method of Equal Shares How to break ties? Summary: In rare cases, two or more projects may be tied (for example because they have exactly the same cost and received exactly the same number of votes). This occurs with probability less than 1%. We describe how to minimize the likelihood of ties when implementating the Method of Equal Shares. However, if a tie still occurs, it may need to be broken by election officials by using Recommended tie-breaking procedure An easy way of definitely avoiding ties is to ensure that no two projects have the same cost (even if costs only differ by €1). Then a tie will never occur if the tie-breaking procedure below is In the execution of the Method of Equal Shares, a tie can occur when two or more projects have the same effective vote count. In isolation, ties are unlikely to occur in large elections (such as those on the city scale). But the Method of Equal Shares is usually run several times in a row to determine the best amount of money that the voters get assigned at the start of the method. (See Completions for details.) This makes it more likely that a tie is encountered, and the tie will need to be broken. We recommend that when there is a tie between two or more projects, because they have the same effective voting count at some step of the method, the tie should be broken by the following procedure: • The project with the lowest cost is selected. • If two or more of the tied projects have the same lowest cost, then the (lowest-cost) project with the highest initial vote count is selected (that is, the raw vote count of the project before the vote count got reduced in the course of running the Method of Equal Shares). • If two or more projects have the lowest cost and the same initial vote count, then the tie is broken by lot, i.e. uniformly at random. As a procedural matter, there are two ways to handle the potential use of randomness: 1. Before computing the election outcome, election officials check whether there are any collections of projects that have the same cost and the same initial vote count. For each such collection of potentially tied projects, they determine a tie-breaking order, for example by drawing names from a hat. 2. Alternatively, compute the election outcome using an implementation of the Method of Equal Shares that ends with an error message if a tie is encountered. Only in this (unlikely) event, proceed like in Option 1. Technical info In the unlikely event that randomness must be used, we recommend that election officials proceed like in Option 2 described above: randomly draw a tie-breaking order, and then run the Method of Equal Shares (including the completion) using that tie-breaking order. While using the completion, it is not a good idea to break ties differently for different values of the initial budget. Example 1 As an example, suppose that the Method of Equal Shares ends up in the following situation, where three projects remain available for selection: Project Cost Initial vote count Effective vote count Project 1 €600 500 200 Project 2 €700 300 200 Project 3 €100 150 100 Projects 1 and 2 have the highest effective vote count, so one of them should be selected. Since they have the same effective vote count, we need to break the tie. Thus, the method selects Project 1, because its cost of €600 is lower than the cost of Project 2, which is €700. Example 2 Here is a similar example. Project Cost Initial vote count Effective vote count Project 1 €400 500 200 Project 2 €400 300 200 Project 3 €400 150 200 This time, all three projects have the same effective vote count, so again we need to break the tie. All three projects have the same cost, so we need to break the tie by looking at the initial vote count. Thus, the method selects Project 1, because its initial vote count of 500 is higher than the initial vote count of both Project 2 and of Project 3. Example 3 Here is a final example. Project Cost Initial vote count Effective vote count Project 1 €400 500 200 Project 2 €400 500 200 Projects 1 and 2 have the same effective vote count, so we need to break the tie. But they both have the same cost and the same initial vote count. Thus, the tie needs to be broken by lot, for example by having the election official flip a coin. Likelihood of ties Using simulations based on election data in the Pabulib library, it is possible to estimate the likelihood of ties occurring. These simulations suggest that it is exceptionally rare (<1%) to need to use randomness to break a tie. This is because in almost all cases, any tie can already be broken based on project costs and initial vote counts. In our simulation, we find that in only 0.6% of cases, there exist two projects with the exact same cost and the exact same vote count. Further, even in the 0.6% of cases with such similar projects, it is often not necessary to break this tie when computing the Method of Equal Shares (for example because the tied projects do not have a large-enough vote count to have a chance of winning). Our simulation suggests that in only 0.15% of cases, there occurs a tie when computing the Method of Equal Shares that needs to be broken by lot. Most of the ties found in our simulation occur in elections in the city of Wrocław, which involve a fairly small number of voters. If we exclude the Wrocław elections from our simulation, then the likelihood of ties is even lower: two projects with the same cost and the same initial vote count occur in only 0.26% of cases, and a tie during the computation of the Method of Equal Shares occurs in only 0.09% of cases. Details of the simulation The simulation was performed in January 2023 using election data from Pabulib based on PB elections that used approval voting. There are 480 such datasets, coming from the Polish cities of Gdansk, Warsaw, Wrocław, and Zabrze. When excluding the Wrocław elections, the simulation involves 383 datasets. In order to get a more precise estimate of the likelihood of a tie occurring, for each dataset, we ran a simulation 100 times. In each iteration, we randomly sampled between 30% and 80% of the voters from the dataset (thereby obtaining a random smaller election), and then checked whether the smaller election contained a pair of projects with the same cost and the same initial vote count. For those instances where such a pair of projects was found, we then checked whether the Method of Equal Shares (using the standard completion method) would encounter a tie when computing the election If we only look at the actual, original elections (without randomly sampling a subset of voters), then a pair of projects with the same cost and the same initial vote count occurs in 1.25% of elections, with Equal Shares encountering a tie in 0.21% of elections. Excluding the Wrocław elections, the numbers are 0.5% and 0.0% respectively. The simulation code is available on GitHub.
{"url":"https://equalshares.net/implementation/tie-breaking/","timestamp":"2024-11-04T11:01:23Z","content_type":"text/html","content_length":"28349","record_id":"<urn:uuid:68fbd50e-f765-4187-b560-6d6ef351ab45>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00050.warc.gz"}
Prediction error sum of squares. 3K Downloads Updated 3 Sep 2013 This m-file returns a useful residual scaling, the prediction error sum of squares (PRESS). To calculate PRESS, select an observation i. Fit the regression model to the remaining n-1 observations and use this equation to predict the withheld observation y_i. Denoting this predicted value by ye_(i), we may find the prediction error for point i as e_(i)=y_i - ye_(i). The prediction error is often called the ith PRESS residual. This procedure is repeated for each observation i = 1,2,...,n, producing a set of n PRESS residuals e_(1),e_(2),...,e_(n). Then the PRESS statistic is defined as the sum of squares of the n PRESS residuals as in, PRESS = i_Sum_n e_(i)^2 = i_Sum_n [y_i - ye_(i)]^2 Thus PRESS uses such possible subset of n-1 observations as an estimation data set, and every observation in turn is used to form a prediction data set. In the construction of this m-file, we use this statistical approach. As we have seen that calculating PRESS requires fitting n different regressions, also it is possible to calculate it from the results of a single least squares fit to all n observations. It turns out that the ith PRESS residual is, e_(i) = e_i/(1 - h_ii) Thus, because PRESS is just the sum of the squares of the PRESS residuals, a simple computing formula is PRESS = i_Sum_n [e_i/(1 - h_ii)]^2 It is easy to see that the PRESS residual is just the ordinary residual weighted according to the diagonal elements of the hat matrix h_ii. Also, for all the interested people, here we just indicate, in an inactive form, this statistical approaching. Data points for which h_ii are large will have large PRESS residuals. These observations will generally be high influence points. Generally, a large difference between the ordinary residual and the PRESS residual will indicate a point where the model fits the data well, but a model built without that point predicts poorly. This is also known as leave-one-out cross-validation (LOOCV) in linear models as a measure of the accuracy. [an anon's suggestion] In order to improve the matrix script for avoiding the squares condition number in the regression parameter estimation are by using a pivoted QR factorization of X. Syntax: function x = press(D) D - matrix data (=[X Y]) (last column must be the Y-dependent variable). (X-independent variables). x - prediction error sum of squares (PRESS). Cite As Antonio Trujillo-Ortiz (2024). press (https://www.mathworks.com/matlabcentral/fileexchange/14564-press), MATLAB Central File Exchange. Retrieved . MATLAB Release Compatibility Created with R14 Compatible with any release Platform Compatibility Windows macOS Linux Community Treasure Hunt Find the treasures in MATLAB Central and discover how the community can help you! Start Hunting! Discover Live Editor Create scripts with code, output, and formatted text in a single executable document. Version Published Release Notes 1.5.0.0 Text was improved. [The anon suggestion was taken into account.] Download 1.3.0.0 Text was improved. Download 1.1.0.0 Code was improved. Taking into account the mathematical suggestions done by Bart and John D'Errico (08-26-2013). Download 1.0.0.0 Text was improved. Download
{"url":"https://se.mathworks.com/matlabcentral/fileexchange/14564-press?s_tid=prof_contriblnk","timestamp":"2024-11-05T23:25:39Z","content_type":"text/html","content_length":"99715","record_id":"<urn:uuid:65e2439a-cc61-4267-b8ae-c6aa949b19d1>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00496.warc.gz"}
when a person subtracts the total dollar amount she owes from the total 1 point dollar amount of what she owns, she will know her ( $$$ | Question AI When a person subtracts the total dollar amount she owes from the total 1 point dollar amount of what she owns, she will know her ( owned - owed = ?) Liquidity Net worth Portfolio value Liabilities Not your question?Search it Sharron VelezMaster · Tutor for 5 years The question is asking about the concept of net worth in finance. Net worth is calculated by subtracting the total amount of liabilities (what you owe) from the total amount of assets (what you own). In this case, the person is subtracting the total dollar amount she owes (her liabilities) from the total dollar amount of what she owns (her assets). This calculation will give her her net worth. Therefore, the correct answer is "Net worth". Step-by-step video When a person subtracts the total dollar amount she owes from the total 1 point dollar amount of what she owns, she will know her ( owned - owed = ?) Liquidity Net worth Portfolio value Liabilities All Subjects Homework Helper
{"url":"https://www.questionai.com/questions-tsGthC3Rks/person-subtracts-total-dollar-amount-owes-total-1-point","timestamp":"2024-11-09T13:24:37Z","content_type":"text/html","content_length":"75213","record_id":"<urn:uuid:0831166b-901a-4f8b-9010-52195765d5ff>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00613.warc.gz"}
Rounding Calculator About Rounding Calculator What is rounding? To estimate a certain number in a context, rounding is a process. If the following digit is less than 5, round the number down; if it is 5 or more, round it up. To round a number, glance at the next digit in the correct position. An accurate rounding of decimal figures is referred to as "rounding decimals." Decimals can be rounded to the closest whole, tenth, or hundredth. To estimate a result fast and easily, rounding decimals is helpful. Finding out the approximate average grade for a class of students is also helpful. Why to round numbers? When we round a number, we make it easier to understand while maintaining its value. Why is rounding a number ever necessary? There are several causes. First off, adding, subtracting, multiplying, and dividing whole numbers is significantly simpler. Rounding is a fantastic method to make these calculations simpler if you only need an approximation of the result and do not need it to be precise. Consider the scenario where you need to figure out the basic square footage of a room in your house that you are renting out for your internet advertisement. The room is 10.43 feet wide and 12.65 feet long, according to your measurement. What method would you use to calculate the area? You might calculate this by multiplying these two figures to obtain the precise area of the room. However, in this case, you really don't need to be that precise, so you can just round the room's dimensions to 13 by 10 feet. The space is then 130 square feet or thereabouts. That was much simpler to compute and is precise enough to give prospective tenants an estimate of the room's size. Additionally, there are situations when rounding the number makes more sense because you simply don't need the extra digits. For instance, since there is no such thing as a fraction of a penny, it would be absurd to have more than two decimal places when computing a monetary sum. You would then need to round the number to two decimal points. What are rules for rounding numbers? How can we choose between various approximations of a number which one is more appropriate? Should we go with the smaller number or choose one that is higher than the one provided? There are a few fundamental guidelines that must be adhered to when rounding numbers. • We must first determine our rounding digit. The affected digit will be this one in the end. • Then, in order to determine whether the rounding digit will be retained, we must examine the digit immediately to its right. • We do not alter the rounding digit if the digit immediately to the right is less than 5. To the right of the rounding digit, all the digits are transformed to 0, though. • The rounding digit is increased by one if the digit to the right is five or greater, and all the digits to the right are turned to zero. What is round up and round down? Although the term "rounding" is a general one, we typically use the terms "round up" or "round down" to indicate whether the number has gone up or down after being rounded. The supplied number is said to be rounded up when the rounded number is increased, and it is said to be rounded down when the rounded number is dropped. When we round numbers to the closest ten, we must look at the one place, which is the digit immediately to the right of the tens place. For instance, the result of rounding the number 7486 to the closest ten is 7490. When we round numbers to the closest hundred, we must look at the tens place, which is the digit immediately following the hundreds place. For instance, 7456 becomes 7500 when it is rounded to the closest hundred. What are uses and examples of rounded numbers? In order to convert 1/3 to decimal, all the threes to the right of the decimal point are shortened. Most of the time, you will use the rounded figures 0.33 or 0.333 when you need to work with 1/3 in decimal. In fact, when there isn't a precise decimal equivalent for the fraction, you typically just need two or three digits to the right of the decimal point. How do you represent 1/6 in decimal form? Don't forget to round up! How much is the result of multiplying 25 by 75? I give you 1875. Add 0.25 to 0.75 now. You receive 0.1875. To the right of the decimal point, you had 2 digits to begin with, and you finished up with 4. Frequently, you will simply round the result up to 0.19. Do you pay a state sales tax when you shop? How much sales tax do you pay if something costs $56.30 and the sales tax is 3%? $56.30 times 0.03 equals $1.689; tax is $1.69. Assume that you and your pals are planning to eat at a fast food joint and that you want to make sure you have enough cash on hand to cover the cost. You can quickly and easily add the sums after rounding the price of each meal to the nearest dollar. When possible, it's preferable to round integers rather than decimal values. In most cases, it is important to round numbers to the nearest multiple of 10, 100, 1,000, or million. For instance, the census bureau estimated that 8,214,426 people lived in New York City in 2006. It's difficult to recall that number, so a decent approximation would be to say that there are 8 million people living in New York City. It doesn't really matter what the precise figure is, though. Finally, since exact numbers aren't always necessary, it's frequently just simpler to work with rounded figures. It is simply more convenient to budget 30 minutes for travel if the trip takes roughly 25 minutes. It's simpler to work with an estimate of 8 million rather than an actual number of 8,214,426 when discussing the population of New York City.
{"url":"https://calculatorsee.com/rounding-calculator/","timestamp":"2024-11-13T02:48:37Z","content_type":"text/html","content_length":"39607","record_id":"<urn:uuid:fe96e497-4d78-4c05-8a81-d974941febd9>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00175.warc.gz"}
BEGIN:VCALENDAR PRODID:-//Grails iCalendar plugin//NONSGML Grails iCalendar plugin//EN VERSION:2.0 CALSCALE:GREGORIAN METHOD:PUBLISH BEGIN:VEVENT DTSTAMP:20241114T214204Z DTSTART;TZID=Europe/ London:20240520T234500 DTEND;TZID=Europe/London:20240520T234500 SUMMARY:(Synthese topical collection) Conceptual spaces: A mathematical framework for concept engineering UID:20250103T135435Z-iCalPlugin-Grails@philevents-web-7887cbbdfb-6g2pf TZID:Europe/London DESCRIPTION: Conceptual spaces: A mathematical framework for concept engineering Guest Editors:&nbsp\;Nasim Mahoozi\,&nbsp\;Paula Quinon Topical Collection Description: The theory of conceptual spaces\, introduced by Peter G&auml\;rdenfors in &ldquo\;Conceptual spaces: the geometry of thought&rdquo\; (2000)\, has become one of the key mathematical frameworks for modeling human cognition that has implications for a wide range of disciplines. This approach posits that concepts can be represented within geometrically or topologically structured similarity spaces. Within conceptual spaces\, points represent objects and the distance between the objects represents their similarity- the closer the points\, the greater their similarity. Concepts are defined as convex regions in a conceptual space. Over the past two decades\, the conceptual spaces framework has evolved\, expanding its explanatory and predictive capabilities in response to new theoretical insights and practical applications. & nbsp\;We are pleased to announce a special topical collection of Synthese focusing on recent advances in conceptual spaces. Contributors from a range of disciplines are invited to submit original research papers that offer theoretical inquiries or case studies that explore both the potential and limitations of conceptual spaces. Appropriate Topics for Submission include\, among others: 1. Conceptual Spaces and Language 2. Conceptual Spaces and Cognition 3. Conceptual Spaces and Knowledge Representation &nbsp\;4. Conceptual Spaces and Artificial Intelligence 5. Conceptual Spaces and Robotics 6. Conceptual Spaces and Cognitive Science 7. Conceptual Spaces and the Nature of Concepts 8. Conceptual Spaces and Epistemology and Reasoning 9. Conceptual Spaces and Feminism&nbsp\; 10. Conceptual Spaces and Mathematical Tools for Conceptual Engineering 11. Conceptual Spaces and Philosophy 12. Conceptual Spaces and Ethics and Aesthetics The&nbsp\;deadline&nbsp\;for submissions is&nbsp\;May 20th\, 2024. Submissions to this topical collection will be reviewed in the same process as applied to regular Synthese submissions. &nbsp\;Papers should be submitted via Synthese&rsquo\;s editorial manager at: When the system asks you to &ldquo\;Choose Article Type&rdquo\;\, please scroll down in the pull-down menu to choose this special issue: &nbsp\;&ldquo\;T.C.:&nbsp\;Conceptual spaces: A mathematical framework for concept engineering&rdquo\;. When preparing your paper\, please read the journal&rsquo\;s &lsquo\;Instructions for authors&rsquo\; at: &nbsp\; For further information\, please contact the guest editors: &nbsp\;nasim.cswp@gmail.com.
{"url":"https://philevents.org/event/iCal/118049","timestamp":"2024-11-14T21:42:04Z","content_type":"text/calendar","content_length":"4932","record_id":"<urn:uuid:e954ad59-5fbc-42a3-ad08-3c8b3368f3fe>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00877.warc.gz"}
The Stacks project Lemma 10.135.6. Let $A \to B \to C$ be surjective local ring homomorphisms. Assume $A$ and $B$ are regular local rings. The following are equivalent 1. $\mathop{\mathrm{Ker}}(A \to C)$ is generated by a regular sequence, 2. $\mathop{\mathrm{Ker}}(A \to C)$ is generated by $\dim (A) - \dim (C)$ elements, 3. $\mathop{\mathrm{Ker}}(B \to C)$ is generated by a regular sequence, and 4. $\mathop{\mathrm{Ker}}(B \to C)$ is generated by $\dim (B) - \dim (C)$ elements. Comments (3) Comment #2968 by Dario Weißmann on Aren't all these conditions equivalent to $C$ is a regular local ring? If we added this condition we could skip the induction (lemmas 10.105.4 and 10.105.3 give the equivalence already). It seems to me that having three rings in the lemma is kind of redundant. Comment #3093 by Johan on Nope. Counter example is $C = k[x]/(x^2)$, $B = k[[x]]$, $A = k[[x, y]]$. Comment #3100 by Dario Weißmann on Yes, thank you! There are also: • 2 comment(s) on Section 10.135: Local complete intersections Post a comment Your email address will not be published. Required fields are marked. In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar). All contributions are licensed under the GNU Free Documentation License. In order to prevent bots from posting comments, we would like you to prove that you are human. You can do this by filling in the name of the current tag in the following input field. As a reminder, this is tag 00SE. Beware of the difference between the letter 'O' and the digit '0'. The tag you filled in for the captcha is wrong. You need to write 00SE, in case you are confused.
{"url":"https://stacks.math.columbia.edu/tag/00SE","timestamp":"2024-11-12T12:29:43Z","content_type":"text/html","content_length":"17558","record_id":"<urn:uuid:79078bb2-2e2d-44f8-821c-c5140f51719c>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00193.warc.gz"}
Measured the tunneling characteristics of a series of Al-oxide-(Ag//1// minus //xEr//x)Pb proximity effect sandwiches for normal metal thicknesses ranging from 500 A to 3000 A and parallel magnetic fields of 0-1000 gauss. Second derivative characteristics (d**2V/dI**2) reveal the Pb phonon structure through electron interference effects and also reveal structure at the Ag phonon frequencies. Application of a magnetic field less than the critical field of the sandwich reduces the Ag phonon structure. Introduction of impurities reduces the interferences effects with little effect on the Ag phonon structure. Thus the two contributions to the tunneling characteristics can be separated. Comparison with the theory of McMillan and Rowell then allows determination of V//F. Comparison of the theoretical and experimental curves for the interference term shows a frequency dependent mean free path from phonon emission and allows an independent measure of the electron-phonon coupling in the normal metal. Original language English (US) Pages (from-to) 521-525 Number of pages 5 Journal J Phys (Paris) Volume 39 Issue number 5 State Published - 1978 ASJC Scopus subject areas Dive into the research topics of 'FERMI VELOCITY OF SILVER FROM PROXIMITY EFFECT TUNNELING.'. Together they form a unique fingerprint.
{"url":"https://nyuscholars.nyu.edu/en/publications/fermi-velocity-of-silver-from-proximity-effect-tunneling","timestamp":"2024-11-12T13:00:40Z","content_type":"text/html","content_length":"49080","record_id":"<urn:uuid:f778265f-7096-4686-b9f2-c3ec71b166e0>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00161.warc.gz"}
NPTEL Artificial Intelligence : Search Methods For Problem Solving Week 2 Assignment Answers 2024 » DBC ItanagarNPTEL Artificial Intelligence : Search Methods For Problem Solving Week 2 Assignment Answers 2024 NPTEL Artificial Intelligence : Search Methods For Problem Solving Week 2 Assignment Answers 2024 NPTEL Artificial Intelligence : Search Methods For Problem Solving Week 2 Assignment Answers 2024 GA – State Space Example 1. What information is available in Week 2 notes (bunch of PDFs) published in the course page? • About node order and tie breaker • How to model a problem as a state space search problem • Some properties of state spaces • DFS, BFS, DBDFS algorithms and ancillary functions • DFID-N, DFID-C algorithms and ancillary functions • Examples of list and tuple operations • Solved lecture examples • Solutions to practice assignment problems • Some problems to demonstrate node order Answer :- Click Here 2. Which of the following states satisfy the capacity constraints of the 12-9-7 water jug puzzle? • (3, 2, 7) • (3, 4, 5) • (1, 9, 2) • (0, 4, 8) Answer :- Click Here 3. From the state (7, 0, 5), which of the following states can be reached in one or more moves? • (3, 2, 7) • (3, 4, 5) • (1, 9, 2) • (0, 4, 8) Answer :- 4. From the state (3,2,7) the least number of moves required to reach (2,3,7) is ______________ . Enter an integer. Answer Format: 17 Answer :- 5. From the state (2,3,7) the least number of moves required to reach (3,2,7) is _________ . Enter an integer. Answer Format: 17 Answer :- 6. Say true or false. In the 12-9-7 water jug puzzle, it is possible to measure all the integral volumes 1L, 2L, …, up to 12L, where each volume measured is present in a single jug. • True • False • Cannot be determined Answer :- 7. The state space shown above ________________ . • is the full state space for the 12-9-7 water-jug puzzle • is the subspace reachable from the state (12,0,0) • satisfies the “reversible state space” property • does not satisfy the “reversible state space” property Answer :-Click Here GA – CLOSED LIST 1. Which of the following are possible valid orders in which Depth First Search visits the next 5 nodes starting from K? • K E J P O L • K J I N P Q • K E B A D J • K P Q R O S • K F L J I N Answer :- Click Here 2. Which of the following are possible valid orders in which Breadth First Search visits the next 5 nodes starting from K? • K E J P O L • K J I N P Q • K E L O P J • K P Q R O S • K F L J I N Answer :- 3. Which of the following are possible valid orders in which Depth First Search visits the next 5 nodes starting from G? • G H G L F C • G H M L F C • G L Q P K L • G L Q W X Z • G L Q W X W Answer :- 4. Which of the following are possible valid orders in which Breadth First Search visits the next 5 nodes starting from G? • G H G L F C • G H M F L C • G L Q P K L • G F L M H Q • G F L M H C Answer :- GA – DFS, BFS and DBDFS 1. The BFS search tree is _________ . • Tree 1 • Tree 2 • Tree 3 • Tree 4 Answer :- Click Here 2. When BFS finds the goal node, i.e., when the GoalTest returns true, the NODEs present in OPEN list are ____________________ . The OPEN list contains pairs (NODE,PARENT), ignore the PARENTs and list only the NODEs in the order it occurs in the OPEN list (head to tail). Enter a comma separated list of nodes, or enter NIL if the list is empty. Answer Format: X,Y,Z Answer :- 3. List the nodes inspected by BFS. Enter the nodes in the order they were inspected. Enter a comma separated list of nodes, or enter NIL if the list is empty. Answer Format: X,Y,Z Answer :- 4. What is the path found by BFS? Enter the path as a comma separated list. Answer Format: S,X,Y,Z,G Answer :- 5. The DFS search tree is ___________________ . • Tree 1 • Tree 2 • Tree 3 • Tree 4 Answer :- 6. When DFS finds the goal node, i.e., when the GoalTest returns true, the NODEs present in OPEN list are ________________ . The OPEN list contains pairs (NODE,PARENT), ignore the PARENTs and list only the NODEs in the order it occurs in the OPEN list (head to tail). Enter a comma separated list of nodes, or enter NIL if the list is empty. Answer Format: X,Y,Z Answer :- 7. List the nodes inspected by DFS. Enter the nodes in the order they were inspected. Enter a comma separated list of nodes, or enter NIL if the list is empty. Answer Format: X,Y,Z Answer :- 8. What is the path found by DFS? Enter the path as a comma separated list. Answer Format: S,X,Y,Z,G Answer :- 9. When DBDFS (depthBound=2) finds the goal node, i.e., when the GoalTest returns true, the NODEs present in OPEN list are ____________ . The OPEN list contains pairs (NODE,PARENT), ignore the PARENTs and list only the NODEs in the order it occurs in the OPEN list (head to tail). Enter a comma separated list of nodes, or enter NIL if the list is empty. Answer Format: X,Y,Z Answer :- 10. List the nodes inspected by DBDFS (depthBound=2). Enter the nodes in the order they were inspected. Enter a comma separated list of nodes, or enter NIL if the list is empty. Answer Format: X,Y,Z Answer :- Click Here 11. What is the path found by DBDFS (depthBound=2)? Enter the path as a comma separated list, or enter NIL if no path was found. Answer Format: S,X,Y,Z,G Answer :- 12. Which of the following search algorithm(s) always find(s) a path to the goal (if there exists one) in an infinite search space. • DFS • BFS • DFID-C • None of the above Answer :- 13. Mark the algorithm(s) for which space requirements for OPEN grow linearly with depth. • DFS • BFS • DFID-C • None of the above Answer :- 14. Mark the algorithm(s) that always find(s) the shortest path from start state to goal state in any search space. • DFS • BFS • DFID-C • None of the above Answer :- Click Here Facebook Twitter Whatsapp Whatsapp Copy Link Leave a comment Leave a comment Latest News
{"url":"https://dbcitanagar.com/nptel-artificial-intelligence-search-methods-for-problem-solving-week-2-assignment-answers/","timestamp":"2024-11-03T07:11:57Z","content_type":"text/html","content_length":"185112","record_id":"<urn:uuid:c286e9a3-bf4c-490b-99b7-dbca7da17a38>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00770.warc.gz"}
Math Worksheets - Homeschooling In Louisiana Math Worksheets Resources for free math worksheets and printable math worksheets. You'll find worksheets by subject and grade levels. Math Worksheets and Printables All Math Worksheets When it comes to math, practice makes perfect. Even after completing a homework assignment or working on all of the problems in a textbook, some children still need more practice to build their skills. Worksheets give children a way to improve their math skills by introducing them to a wide variety of problems. Math Game Time’s free math worksheets provide children with plenty of opportunities to practice applying their math skills. Some worksheets include standard equations, while others bring in word problems and real-life scenarios. Children will also find a selection of logic and puzzle based worksheets, including Sudoku and other fun games ranging from Pre-Kindergarten through 7th grade. Free Math Facts Worksheets Want to generate completely customized math tests? You can create math worksheets and tests that target basic math facts. Old-fashioned timed math tests are a critical component to mathematics Math Worksheets Center A large collection of printable K-12 math worksheets. Includes complete math explanations and lessons and instructions on all core K-12 math skills. Printable Worksheets, Activities, and Tests for the Differentiated Classroom Resource creation software for the differentiated classroom from Schoolhouse Technologies saves you the time to do what you do best: teach! Quickly and easily create printable worksheets, activities, and tests for a wide range of student levels and abilities. Deliver an effective individualized educational experience and have more time for teaching. Math Resource Studio is the teaching tool you need to create virtually unlimited, professional-quality, printable math worksheets and reviews. Generate math worksheet activities to provide practice in: number operations, number concepts, algebra, geometry, fractions, numeration, time, money, math puzzles, ratio and percent, measurement, and more. Make Your Own Math Worksheets The Teacher's Corner has developed several math worksheet makers that will make thousands of worksheets very quickly to meet your needs. From basic math to number sense, to algebra, they have all kinds of worksheets for you to choose from. Math Worksheets for Kids These are math worksheets for preschoolers and above. there are over 100 printable kids worksheets designed to help them learn everything from early math skills like numbers and patterns to theri basic addition, subtraction, multiptheir basic addition, subtraction, multiplication and division skills. These math worksheets are perfect for any teacher, parent or homeschooler and make a great compliment to any math lesson plan. Homeschool Math Blog Math teaching ideas, links, worksheets, reviews, articles, news, Math Mammoth, and more--anything that helps you to teach math. Math Worksheet Creator Free elementary math worksheets covering number sense and place value, addition, multiplication, money, subtraction, division, fractions, and more. Math Is Fun Worksheets Test your math skills! Ace that test! See how far you can get! View these worksheets on-screen, and then print them, with or without answers. Every worksheet has thousands of variations, so you never run out of practice material. Subjects include: addition, subtraction, multiplication, division, kindergarten math, decimals, decimal fractions, fractions, percents, and telling time worksheets. Worksheet Works Math Worksheets Get hundreds of worksheets covering all levels of math from numbers to algebra and geometry. Also covers calculator skills, word problems, exponents and scientific notation, time, money, and Homeschool Math Free Math Worksheets Here you can generate printable math worksheets for a multitude of topics: all the basic operations, clock, money, measuring, fractions, decimals, percent, proportions, ratios, factoring, equations, expressions, geometry, square roots, and more. There are also pages that list worksheets by grade levels. Math Printables Math drill sheets, math paper, number lines, graph paper, fraction manipulatives, money and coin worksheets, and math fact cards. Math Worksheets 4 Kids Free Printable Math Worksheets and Online Tests These free math worksheets range in difficulty from kindergarten to 12th grade. They are sorted by both topics and grades for easy navigation. Answer keys are included at the end of each worksheet. The Math Worksheet Site With The Math Worksheet Site you can create an endless supply of printable math worksheets. The intuitive interface gives you the ability to easily customize each worksheet to target your student's specific needs. Every worksheet is created when you request it, so they are different every time. This way you can add the practice that your student needs to a curriculum you already like, or you can be freed from the constraints of a workbook or textbook that gives either too much or too little practice if you would rather direct the studies yourself. Math Printables A great collection of printable worksheets, multiplication tables and charts, math charts, flash cards, and number lines. Preschool Math Printables These printables are free and help provide extra support for teaching preschool math. You'll find calendars, number tracing cards, number mats, patterns, and more. Math Worksheets Support learning at home with these helpful printable worksheets. Browse by grade level or subject to find the perfect worksheet challenge for your child. SuperKids Math Worksheet Creator Have you ever wondered where to find math drill worksheets? Make your own at SuperKids for free! Simply select the type of problem, the maximum and minimum numbers to be used in the problems, then click on the button! A worksheet will be created to your specifications, ready to be printed for use. Math Worksheet Wizard The Math Worksheet Wizard is a free resource for teachers and homeschooling moms and dads. You can make an unlimited number of printable math worksheets for children for the classroom or for homework, simply by clicking a button. What makes this site unique is that every time you create a worksheet, you get different, randomly generated problems. Homeschool Math HomeschoolMath.net is a comprehensive math resource site for homeschooling parents and teachers: find free worksheets, math ebooks for elementary grades, an extensive link list of games, a homeschool math curriculum guide, interactive tutorials & quizzes, and teaching tips articles. The resources emphasize understanding of concepts instead of just mechanical memorization of rules. Math Lessons and Worksheets Math worksheets covering topics including: early math, arithmetic, simple patterns, sorting, number bingo, number and math dominoes, square root wheel, printable graph paper, logic puzzles, and Math Goodies Worksheets Create your own math worksheets with the Worksheet Generator. Each worksheet is interactive, with a timer and instant scoring. Generate an unlimited number of custom math worksheets instantly. Preschool Math Worksheets and Printables Preschool math worksheets help your young learner develop essential early math skills. From number recognition to counting to addition, preschoolers learn a lot about numbers in one year. Our collection of preschool math worksheets is designed for kids who want to build skills that will prepare them for kindergarten and beyond. These preschool math worksheets help present new number topics to kids in a way that's fun and easy to understand. Help foster a culture of learning by doing these preschool math worksheets with your child. Free math Worksheet For practicing some math skills, there is just nothing more effective than a pencil and paper. These are printable free math worksheets for grades K-6. Featured Resources As an Amazon Associate, we earn from qualifying purchases. We get commissions for purchases made through links on this site. Sing, Spell, Read & Write Sing, Spell, Read & Write is a phonics-based program that uses a 36-step program of carefully sequenced steps to teach reading, writing, and spelling. More product information can be found The Work-at-Home Sourcebook This indispensable directory contains information not found in any other book on the subject. The Work-at-Home Sourcebook is the only book available which gives specific information for finding, applying for, and getting home work with AT&T, J. C. Penney, and more than 1,000 other companies that routinely hire qualified home workers. Contact information, job descriptions and requirements, and details on pay and benefits are included. Other chapters cover handicrafts, franchises, telecommuting, l... Conquering Chronic Disorganization The real-life stories of chronically disorganized people and the ground breaking, easy-to-learn organizing methods used to end their chronic disorganization in the area of residential clutter, office organizing, paper management, storage, and time management. Conquerings pages includes an extensive index, user-friendly summaries, quick tips, helpful photographs, and a resource section of products and organizations. The Way They Learn The learning-styles expert, Cynthia Ulrich Tobias, gives parents a better understanding of the types of learning approaches that will help their children do better in school and at home. She offers practical advice for teaching in response to your child's strengths, even if his or her learning style is different from yours.
{"url":"https://www.homeschoolinginlouisiana.com/subjects/math/math-worksheets","timestamp":"2024-11-06T04:48:00Z","content_type":"text/html","content_length":"47673","record_id":"<urn:uuid:f7146afe-f1fc-4564-908a-faf3046acd9d>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00457.warc.gz"}
Subtracting decimals workbook page subtracting decimals workbook page Related topics: rules adding and subtracting integers math test generator 2-step Algebra Equations formula for changing decimals to fractions the rational numbers pre algebra percentage quadratic formulas Author Message BoalDoggee Posted: Thursday 10th of Aug 08:46 I just discovered that I am already late in solving some issues with subtracting decimals workbook page. I had kept it pending hoping that I will stumble upon good quality learning material or somebody who can be of assistance with graphing inequalities, linear algebra and factoring. May be, I did not search hard enough. But at the moment I am running late . It is simply taxing to locate an excellent tutor to quickly train me. It is beyond what I can give. And I need to find a fast way out. Can someone who has had the same problem let me know as to what I can do? It will be a great help. Registered: 19.08.2004 From: In my chair Back to top espinxh Posted: Friday 11th of Aug 11:53 Don’t fear, try Algebrator! I was in a same situation sometime back, when my friend advised that I should try Algebrator. And I didn’t just pass my test; I went on to score really well in it . Algebrator has a really simple to use GUI but it can help you solve the most difficult of the problems that you might face in math at school. Give it a go and I’m sure you’ll do well in your test. Registered: 17.03.2002 From: Norway Back to top Mov Posted: Friday 11th of Aug 17:40 Hey, Algebrator is one awesome thing! I started using it when I was in my college . It’s been years since then, but I still use it occasionally. Take my word for it, it will really help you. Registered: 15.05.2002 Back to top Xaskil~ Posted: Saturday 12th of Aug 16:33 I'm so glad I got these answers so fast, I can't wait to check out Algebrator. Can you tell me one more thing, where could I find this software ? I'm not so good at searching for things like this, so it would be good if you could give me a URL. Thanks a lot! Registered: 10.04.2002 From: Houston, Tx Never too far to Travel and see Otep Back to top Dnexiam Posted: Sunday 13th of Aug 11:39 You should be able to find it here : https://softmath.com/reviews-of-algebra-help.html. Give it a go because Algebrator has a unconditional money back offer , Hope it helps you too! Registered: 25.01.2003 From: City 17 Back to top
{"url":"https://www.softmath.com/algebra-software-4/subtracting-decimals-workbook.html","timestamp":"2024-11-11T16:37:37Z","content_type":"text/html","content_length":"40661","record_id":"<urn:uuid:24a7a331-81f9-479f-94b1-62a81dbaf586>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00252.warc.gz"}
SBIC: Structural Bayesian Information Criterion (SBIC) for model selection in candidate models R-package implementation for the method presented in the paper “Information Enhanced Model Selection for Gaussian Graphical Model with Application to Metabolomic Data” by Zhou et al. 2020. In light of the low signal-to-noise nature of many large biological data sets, we propose a novel method to learn the structure of association networks using a Gaussian graphical model combined with prior knowledge. Our strategy includes two parts. In the rst part, we propose a model selection criterion called structural Bayesian information criterion (SBIC), in which the prior structure is modeled and incorporated into the Bayesian information criterion (BIC). It is shown that the popular extended BIC (EBIC) is a special case of SBIC. In second part, we propose a two-step algorithm to construct the candidate model pool. The algorithm is data-driven and the prior structure is embedded into the candidate model automatically. Theoretical investigation shows that under some mild conditions SBIC is a consistent model selection criterion for the high-dimensional Gaussian graphical model. Simulation studies validate the superiority of the proposed algorithm over the existing ones and show the robustness to the model misspecification. Application to relative concentration data from infant feces collected from subjects enrolled in a large molecular epidemiological cohort study validates that prior knowledge on metabolic pathway involvement is a statistically signicant factor for the conditional dependence among metabolites. Furthermore, new relationships among metabolites are identied through the proposed algorithm which have not been discovered by conventional pathway analysis. Some of them have been widely recognized in the literature. The development version from GitHub with: This is a basic example which shows you how to solve a common problem: #> Attaching package: 'SBICgraph' #> The following object is masked from 'package:stats': #> simulate library(network) # for visualization #> network: Classes for Relational Data #> Version 1.16.1 created on 2020-10-06. #> copyright (c) 2005, Carter T. Butts, University of California-Irvine #> Mark S. Handcock, University of California -- Los Angeles #> David R. Hunter, Penn State University #> Martina Morris, University of Washington #> Skye Bender-deMoll, University of Washington #> For citation information, type citation("network"). #> Type help("network-package") to get started. # simulate data using the incorporated simulate function p <- 200 m1 <- 100 m2 <- 30 d <- simulate(n=100, p=p, m1=m1, m2=m2) data<- d$data real<- d$realnetwork priori<- d$priornetwork Compare between We can visualize the networks prior_net <- network(priori) real_net <- network(real) par(mfrow = c(1,2)) plot(prior_net, main = "Prior network") plot(real_net, main = "Real network") Then we can fit SBIC using one function lambda<- exp(seq(-10,10, length=30)) # calculating the error rate from the number of edges in the true graph and the number of discordant pairs r1 <- m2/m1 r2 <-m2/(p*(p-1)/2-m1) r <- (r1+r2)/2 model<- sggm(data = data, lambda = lambda, M=priori, prob = r) Comparing the estimated network to the true and prior network. Our comparison function above calcualtes the Positive selection rate (PSR) and the False positive rate (FDR) print("Comparing estimated model with the real network") #> [1] "Comparing estimated model with the real network" comparison(real = real, estimate = model$networkhat) #> $PSR #> [1] 0.36 #> $FDR #> [1] 0.4285714 print("Comparing the prior network with the real network") #> [1] "Comparing the prior network with the real network" comparison(real = real, estimate = priori) #> $PSR #> [1] 0.7 #> $FDR #> [1] 0.3
{"url":"http://cran.rediris.es/web/packages/SBICgraph/readme/README.html","timestamp":"2024-11-02T07:58:40Z","content_type":"application/xhtml+xml","content_length":"16551","record_id":"<urn:uuid:47f4b01e-2549-4f57-bfe5-0e87d91f50f6>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00845.warc.gz"}
Philosophy of mathematics - Logicism, Intuitionism, Formalism | Britannica While every effort has been made to follow citation style rules, there may be some discrepancies. Please refer to the appropriate style manual or other sources if you have any questions. Select Citation Style Thank you for your feedback Our editors will review what you’ve submitted and determine whether to revise the article. During the first half of the 20th century, the philosophy of mathematics was dominated by three views: logicism, intuitionism, and formalism. Given this, it might seem odd that none of these views has been mentioned yet. The reason is that (with the exception of certain varieties of formalism) these views are not views of the kind discussed above. The views discussed above concern what the sentences of mathematics are really saying and what they are really about. But logicism and intuitionism are not views of this kind at all, and insofar as certain versions of formalism are views of this kind, they are versions of the views described above. How then should logicism, intuitionism, and formalism be characterized? In order to understand these views, it is important to understand the intellectual climate in which they were developed. During the late 19th and early 20th centuries, mathematicians and philosophers of mathematics became preoccupied with the idea of securing a firm foundation of mathematics. That is, they wanted to show that mathematics, as ordinarily practiced, was reliable or trustworthy or certain. It was in connection with this project that logicism, intuitionism, and formalism were developed. The desire to secure a foundation for mathematics was brought on in large part by the British philosopher Bertrand Russell’s discovery in 1901 that naive set theory contained a contradiction. It had been naively thought that for every concept, there exists a set of things that fall under that concept; for instance, corresponding to the concept “egg” is the set of all the eggs in the world. Even concepts such as “mermaid” are associated with a set—namely, the empty set. Russell noticed, however, that there is no set corresponding to the concept “not a member of itself.” For suppose that there were such a set—i.e., a set of all the sets that are not members of themselves. Call this set S. Is S a member of itself? If it is, then it is not (because all the sets in S are not members of themselves); and if S is not a member of itself, then it is (because all the sets not in S are members of themselves). Either way, a contradiction follows. Thus, there is no such set as S. Logicism is the view that mathematical truths are ultimately logical truths. This idea was introduced by Frege. He endorsed logicism in conjunction with Platonism, but logicism is consistent with various anti-Platonist views as well. Logicism was also endorsed at about the same time by Russell and his associate, British philosopher Alfred North Whitehead. Few people still endorse this view, although there is a neologicist school, the main proponents of which are the British philosophers Crispin Wright and Robert Hale. Intuitionism is the view that certain kinds of mathematical proofs (namely, nonconstructive arguments) are unacceptable. More fundamentally, intuitionism is best seen as a theory about mathematical assertion and denial. Intuitionists embrace the nonstandard view that mathematical sentences of the form “The object O has the property P” really mean that there is a proof that the object O has the property P, and they also embrace the view that mathematical sentences of the form “not-P” mean that a contradiction can be proven from P. Because intuitionists accept both of these views, they reject the traditionally accepted claim that for any mathematical sentence P, either P or not-P is true; and because of this, they reject nonconstructive proofs. Intuitionism was introduced by L.E.J. Brouwer, and it was developed by Brouwer’s student Arend Heyting and somewhat later by the British philosopher Michael Dummett. Brouwer and Heyting endorsed intuitionism in conjunction with psychologism, but Dummett did not, and the view is consistent with various nonpsychologistic views—e.g., Platonism and nominalism. There are a few different versions of formalism. Perhaps the simplest and most straightforward is metamathematical formalism, which holds that ordinary mathematical sentences that seem to be about things such as numbers are really about mathematical sentences and theories. In this view, “4 is even” should not be literally taken to mean that the number 4 is even but that the sentence “4 is even” follows from arithmetic axioms. Formalism can be held simultaneously with Platonism or various versions of anti-Platonism, but it is usually conjoined with nominalism. Metamathematical formalism was developed by Haskell Curry, who endorsed it in conjunction with a sort of nominalism. Mathematical Platonism: for and against Philosophers have come up with numerous arguments for and against Platonism, but one of the arguments for Platonism stands out above the rest, and one of the arguments against Platonism also stands out as the best. These arguments have roots in the writings of Plato, but the pro-Platonist argument was first clearly formulated by Frege, and the locus classicus of the anti-Platonist argument is a 1973 paper by the American philosopher Paul Benacerraf.
{"url":"https://www.britannica.com/science/philosophy-of-mathematics/Logicism-intuitionism-and-formalism","timestamp":"2024-11-10T19:48:25Z","content_type":"text/html","content_length":"92141","record_id":"<urn:uuid:ac0ab8c2-adb1-453f-8ce2-d1a63e5391ed>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00231.warc.gz"}
Aerodynamic Performance Evaluation of a Wind Turbine Blade by Computational and Experimental Method Volume 03, Issue 06 (June 2014) Aerodynamic Performance Evaluation of a Wind Turbine Blade by Computational and Experimental Method DOI : 10.17577/IJERTV3IS060103 Download Full-Text PDF Cite this Publication Irshadhussain Master, Azim Aijaz Mohammad, Ratnesh Parmar, 2014, Aerodynamic Performance Evaluation of a Wind Turbine Blade by Computational and Experimental Method, INTERNATIONAL JOURNAL OF ENGINEERING RESEARCH & TECHNOLOGY (IJERT) Volume 03, Issue 06 (June 2014), • Open Access • Total Downloads : 1291 • Authors : Irshadhussain Master, Azim Aijaz Mohammad, Ratnesh Parmar • Paper ID : IJERTV3IS060103 • Volume & Issue : Volume 03, Issue 06 (June 2014) • Published (First Online): 07-06-2014 • ISSN (Online) : 2278-0181 • Publisher Name : IJERT • License: This work is licensed under a Creative Commons Attribution 4.0 International License Text Only Version Aerodynamic Performance Evaluation of a Wind Turbine Blade by Computational and Experimental Method Irshadhussain I. Master Azim Aijaz Mohammad Ratnesh T. Parmar M.Tech. Student, Asst. Professor, Asst. Professor, Department of Department of Department of, Mechanical Engineering, Mechanical Engineering, Mechanical Engineering, Patel College of Science Patel College of Science Babaria Inst. Of Tech, and Technology, and Technology, Vadodara,Gujarat, Rajiv Gandhi Proudyogiki Rajiv Gandhi Proudyogiki India. Vishwavidyalaya, Vishwavidyalaya, Ratibad, Bhopal 462036 Ratibad, Bhopal 462036 India. India. Abstract–Lift and Drag forces along with the angle of attack are the important parameters in a wind turbine system. These parameters decide the efficiency of the wind turbine. In this paper an attempt is made to study the Lift and Drag forces in a wind turbine blade at various sections and the effect of angle of attack on these forces. In this paper NACA 4420 airfoil profile is considered for analysis of wind turbine blade. The wind turbine blade is modelled and several sections are created from root to tip. The Lift and Drag forces are calculated at different sections for angle of attack from 0° to 20° for low Reynolds number. The analysis showed that angle of attack of 6° has high Lift/Drag ratio. The CFD analysis is also carried out at various sections of blade at different angle of attack. The pressure and velocity distributions are also plotted. The airfoil NACA 4420 is analyzed based on computational fluid dynamics to identify its suitability for its application on wind turbine blades and good agreement is made between results. Wind energy is an abundant resource in comparison with other renewable resources. Moreover, unlike the solar energy, the utilization could not be affected by the climate and weather. Wind turbine invented by engineers in order to extract the energy from the wind. Because the energy in the wind is converted to electric energy, the machine also called wind generator. A wind turbine consists of several main parts, i.e., the rotor, generator, driven chain, control system and so on. The rotor is driven by the wind and rotates at pre- defined speed in terms of the wind speed, so that the generator can produce electric energy output under the regulation of the control system. In order to extract the maximum kinetic energy from wind, researcher put much effort on the design of effective blade geometry. In the early stage, the airfoil of helicopters were used in 2 wind turbine blade design, but now, many specialized airfoils have been invented and used for wind turbine blade design. Moreover, a rotor blade may have different airfoil in different sections in order to improve the efficiency, so that modern blade is more complicated and efficient comparing to early wind turbine blades. In my early stage, the research on wind turbine blade design was limited on theoretical study. Field testing and wind tunnel testing which need a lot of effort and resources. Due to the development of computer aided design codes, they provide another way to design and analyze the wind turbine blades. Aerodynamic performance of wind turbine blades can be analyzed using computational fluid dynamics (CFD), which is one of the branches of fluid dynamics that uses numerical methods and algorithm to solve to solve and analyze problems of fluid flow. Meanwhile, finite element method (FEM) can use for the blade structure analysis. Comparing to traditional theoretical and experimental methods, numerical method saves money and time for the performance analysis of optimal design of wind turbine blades. H.V.Mahawadiwar et al [1] carried out Computational Fluid Dynamics (CFD) analysis of wind turbine blade with complete drawing and details of sub-system. The blade material is Cedar wood, strong and light weight. CAD model of the blade profile using Pro-E software is created and the flow analysis of the wind turbine blade mesh is created in the GAMBIT software. CFD analysis of the wind turbine blade is carried out in the FLUENT software. Form this study they conclude as following: 1. Value of numerical power increases as angle of attack increases from 0° to 7°, after 7° the value of numerical power reduced. Hence critical angle of attack for this blade is 7°. 2. The maximum value of coefficient of performance (Cpmax = 0.271) was saw at angle of attack 7° and at velocity of air 8 m/s. 3. This blade can generate maximum power of 620 W at maximum Cp, angle of attack 7° and velocity of air 8 m/s. Chris Kaminsky, Austin Filush, Paul Kasprzak and Wael Mokhtar [2] carried out the work of a VAWT using the NAC A0012-34 airfoil. The system was modelled in Solid Works. They are use of the STAR CCM software to CFD analyzes the air flow around a vertical axis wind turbine to perform. Analysis has been done in three ways as show: 1. To determine CFD analysis analyzed the 2D flow over the chosen airfoil. 2. Determine the analysis looked at the flow over a 3D representation of the airfoil. 3. Finally, a full VAWT assembly was created and analyzed at various wind directions at the The airfoil then the 2D and 3D simulations used different angles of attack (0 to 15 degrees) and speeds (15 & 30 mph) to determine. The full assembly included 3 airfoils that were attached into a 5ft high, 3 ft diameter structure. The results of this research on the NACA 001234 airfoil showed it could be a very viable choice for a residential VAWT. The 2D analysis gave a stall angle of about 8°, however, the 3D analysis, it being more accurate, did not provide us with a stall angle. The results for the 3D full assembly analysis of vertical axis wind turbine were incomplete. C. Rajendran, G. Madhu, P. S. Tide et al [3] had carried out the potential of an incompressible NavierStokes CFD method for the analysis of horizontal axis wind turbines. The CFD results are validated against experimental data of the NREL power performance testing activities. Comparisons are shown for the surface pressure distributions at several conditions are show as under taken: 1. Wind Velocity is 12.5m/s. 2. Yaw Angle is 0°. 3. Rotational Speed is 25 rpm. 4. Turbulence Model is – SST. David Hartwanger et al [4] have aims to develop a practical engineering methodology for the CFD-based assessment of multiple turbine installations. They are constructs the 2D experimental model of wind turbine which is of NREL S809 aerofoil series and compared. Their results with 3D CFD model in XFoil 6.3 codes and two ANSYS CFX 11.0 versions. It creates the cylindrical domain whose radius 2L and length 5L where L = turbine radius. For grid generation uses ICEM-CFD (ANSYS) software. In analysis it use – turbulence model. There are two main aims for doing analysis is as under show: 1. The primary aim is to predict the lift and drag for 2D experimental wind turbine. 2. Its secondary aim is to compare the results of Lower CFD idelity to Higher CFD Fidelity model. These two aims fulfil with one boundary condition which is use pressure as an inlet condition. The validation of CFD against 2D blade sections showed that the CFD and XFOIL panel code over-predict peak lift and tend to underestimate stalled flow. The 3D results compared well with experiment over four operating conditions. Results from the 3D corresponding calculated torque output showed good agreement with the 3D CFD model and experimental data. However, for high wind cases the actuator model tended to diverge from the CFD results and experiment. Hansen and Butterfield [5] discussing recent research conducted regarding the aerodynamics of wind turbines. HAWT blades are made up of varying airfoil cross sections. Depending on the distance from the turbine hub, the airfoil's thickness may change. When close to the hub, rotational velocity becomes less significant and the blade cross section uses a high thickness for structural stability. Close to the edge of the rotor, a much thinner airfoil is used to provide a high lift-drag ratio in the larger rotational velocity area. In many turbines designed and operated during the 1970s and 1980s, aviation airfoils were used, due to their high lift coefficients. However, continued operation of these airfoils highlighted potential drawbacks when applied to wind turbines. Because of the failures of stall-controlled aviation airfoils to adapt to varying wind conditions, airfoil selection and design became a critical focus of wind turbine research. Gómez-Iradi, et al [6] A new CFD technique for the analysis of HAWTs was developed and validated. The initial premise for this study was to examine the flow compressibility near the tip of wind turbine blades. Due to this flow compressibility, wind turbines often have changed performance and operate closer to stall conditions. In this study, the geometry was designed using the National Renewable Energy Laboratory (NREL) S809 wind turbine airfoil from 25% of blade span to the blade tip. The solver developed was based upon a second order implicit numerical method with a sliding mesh to account for the relative rotation of the rotor and stationary sections of the turbine. When compared with experimental results from a wind tunnel test, all of the major flow physics, including root and tip vortices, simulated within the project showed qualitative agreement. R.S. Amano, et. al. [7] carried out most blades available for commercial grade wind turbine incorporated a straight span wise profile and airfoil shaped cross-section. This paper explores the possibility of increasing the efficiency of blades at higher wind speeds while maintaining efficiency at lower wind speed. The blades will be made more efficient at higher wind speed by implementing a swept blade profile. This paper explores two methods for optimizing blade for operation where wind speeds average7 m/s. The two methods of optimization that are investigated are first, the straight edge blade optimizes the angle of attack and chord length for a given airfoil cross-section at different position along the blade and second implements a swept blade profile. The following conclusion obtained from the research paper. It was observed that swept edge geometry maintains the maximum efficiency at lower oncoming wind speed and delays the stall point resulting in an increase in power at higher oncoming wind speed. Kentaro hayashi, et. al. [8] carried out Noise reduction of wind turbines has recently become more important due to increasing Largescale turbine developments, stringent noise regulations, and installation of wind farms near residential areas. Wind turbine noise is mainly caused by broadband noise from the blades and can be reduced using noise prediction technologies. Mitsubishi Heavy Industries, Ltd. (MHI) has developed a new method to predict blade noise based on a computational fluid dynamics (CFD) approach and an empirical formula. This method can be used during the preliminary blade design and has been validated using an actual model. In this report, we present a less noisy blade that was developed by applying this approach at the design stage. Horia DUMITRESCU, et. al. [9] carried out In this paper two methods for determining the angle of attack on rotating blade from velocity and pressure measurement are presented. To derive the lift and drag coefficients an angle of attack is required in combination with the normal and tangential force coefficients. A proper inflow angle of attack is not directly available and two simple methods have been proposed to compute correctly the angle of attack for wind turbines. The first method using the measured/ computed velocities requires a iterative calculation, while the second technique using measured/computed pressures no iteration is required and the monitor points can be chosen to be closer to the blade 18 surface. On the other hand, the difficulty of using the pressure-method is to find the separation point where the local circulation changes sign and the distribution of skin friction should be determined from CFD solutions. Therefore, how to determine the effective angle of attack is a key factor to understand the stall flow. S. RAJAKUMAR, et. al. [10] carried out In this paper an attempt is made to study the Lift and Drag forces in a wind turbine blade at various sections and the effect of angle of attack on these forces. In this paper NACA 4420 airfoil profile is considered for analysis of wind turbine blade. The Lift and Drag forces are calculated at different sections for angle of attack from 0 to 12 for low Reynolds number. The analysis showed that angle of attack of 5 has high Lift/Drag ratio. Horia DUMITRESCU, et. al. [11] carried out the short separation bubbles which form near the leading edge of the inboard sections of the blade prior to the onset of leading edge stall have been analyzed in detail, including some effects of viscous inviscd interaction. The transition point is assumed to correspond to the minimum skin friction. The momentum integral technique for the wind turbine blade boundary layer has been extended to include the separated and reattaching shear layer in a leading- edge bubble of a wind turbine blade For cases where separated areas exist, a classical boundary layer approach is in principle no more valid (normal pressure gradient, formation of vortices, etc.). However, provided the flow is not widely separated, good description of the viscous effects is obtained, using inviscid flow calculation input. Based on the described boundary layer method, the physical processes which influence the inboard stall-delay phenomenon have been explained, including the onset of the three-dimensional effects and the increase of the lift coefficients. Aravind Lovelin [12] carried out In this paper a Horizontal axis wind turbine blade profile NACA 63-415 is analyzed for various angles of attack. The coefficient of Lift and drag is calculated for this NACA 63-415 for various angles of attack from 0° to 16° and the maximum ratio is achieved at 2° of angle of attack. The coefficient of Lift increases with increase in angle of attack up to 8°. After 8°, the coefficient of lift decreases and stall begins to occur. The drag force begin of dominate beyond this angle of attack. The rate of increase in lift is more for angle of from 0° to 8° and then it starts to decrease. The drag increase gradually until 5° angle of attack and then rapidly increases. The CFD analysis is carried out using STAR- CCM+ software. These results are compared with the wind tunnel experimental values for validation. Ryoichi Samuel et. al. [13] carried out this paper explores the possibility increasing the number of profitable sites by optimizing wind turbine blade design for low wind speed areas. Wind turbine blade profiles are often constructed using the Blade Element Momentum theory (BEM). This theory will produce the angle of twist and chord length for a given airfoil cross section and rotation speed at a finite number of positions along the span of the blade. From these two dimensional sections a three dimensional shape can be extruded. The BEM theory accomplishes this by treating a given cross section as an independent airfoil which processes wind with a speed and direction that is a vector sum of the eoncoming wind and the wind generated by rotation. Since the direction and magnitude of the 20 wind generated by rotation changes as a function of span wise position, so too must the airfoil cross section. The BEM theory is not entirely accurate if the data for the airfoil cross sections that are used have not been corrected for rotational motion. It is for this reason that CFD analysis is necessary for new blade designs. 1. LIFT AND DRAG Lift on a body is defined as the force on the body in a direction normal to the flow direction. Lift will only be present if the fluid incorporates a circulatory flow about the body such as that which exists about a spinning cylinder. The velocity above the body is increased and so the static pressure is reduced. The velocity beneath is slowed down, giving an increase in static pressure. So, there is a normal force upwards called the lift force. The drag on a body in an oncoming flow is defined as the force on the body in a direction parallel to the flow direction. For a windmill to operate efficiently the lift force should be high and drag force should be low. For small angles of attack, lift force is high and drag force is low. If the angles of attack () increases beyond a certain value, the lift force decreases and the drag force increases. So, the angle of attack plays a vital role. 2. METHOD OF ANALYSIS The aerofoil NACA 4420 is chosen for blade modeling as shown in Fig.1. Fig.1. NACA 4420 Airfoil NACA 4420 profiles are obtained from Design Foil Workshop for various chords. The modeling is done with Solid Works. The blade is modelled for the specification given in Table1 Profile NACA 4420 Root chord length 1651 mm Tip chord length 650 mm Length of blade 10700 mm Hub diameter 337.5 mm Hub length 1465 mm Hub to blade(neck) 1475 mm Table1. Blade specification The velocity triangle of airfoil profile is used to calculate lift and drag forces shown in Fig.2. Fig.2 Blade velocity triangle The value of is found by the following formula. The wind velocity is taken as 8m/s and the speed is taken as 45 r.p.m. = tan -1 (8/ (2r (45/60))) The angle of attack (AOA) is found by the following formula = – The angle of attack value is given as input in the Design Foil Workshop software and the values of CL and CD are found out. The lift and drag forces are calculated by the following formula and the lift to drag ratio (L/D ratio) is also found out. Lift = (1/2)** CL*c*L*Vrel² Drag = (1/2)** CD*c*L*Vrel² Where density of air – 1.225 kg/m³ c Chord length in meter 1m L Length of the blade element – 1m Vrel relative velocity of air in m/s = ((V) ² + (r) ²)0.5 The values of CL and CD were found out for various angles of attack. Lift = (0.5**c*L* CL*Vrel²) Drag = (0.5**c*L* CD*Vrel²) The Lift and Drag forces are calculated for the angle of attack from 0° to 20°. The Lift/Drag ratio is calculated for different angle of attack ranges from 0° to 20° for the velocity ranges from 5 to 20 m/sec. L/D RATIO Angle of attack Vo=5 Vo= 7 Vo= 10 Vo= 12 Vo= 15 Vo=1 Vo= 20 m/s m/s m/s m/s m/s 7 m/s m/s 0 50.7 53.6 55.6 56.2 57.9 59.1 60.3 1 59.7 62.4 64.7 68.5 69.9 71.3 72.8 2 67.2 70.8 73.5 73.0 75.8 78.0 80.4 3 70.0 73.0 76.3 80.6 82.2 84.7 86.5 4 75.4 78.7 78.8 83.9 86.3 88.0 88.1 5 74.3 77.8 81.1 82.6 84.9 85.7 88.0 6 72.3 75.5 78.4 83.6 85.0 83.2 85.3 7 69.2 72.5 75.1 79.7 81.5 83.5 85.0 8 65.8 68.7 71.4 75.5 77.1 78.8 80.1 9 64.4 64.5 66.8 70.7 72.1 74.0 75.1 10 59.6 62.2 61.6 65.2 66.7 68.0 69.3 11 54.6 56.7 58.8 61.9 63.4 64.8 63.6 12 49.7 51.6 53.3 56.2 57.5 58.7 59.7 13 52.5 58.4 67.1 73.8 82.5 86.5 97.7 14 49.6 55.7 64.8 72.0 80.9 84.3 96.8 15 46.8 53.1 62.5 69.3 79.4 82.4 95.9 Table.2. Lift/Drag Ratio Fig.3.Correlation between L/D ratio and angle of attack Fig.4. Correlation between CL and CD Fig.5. Increase in lift for various Angle of Attack Fig.6 Increase in Drag for various Angle of Attack 5.4 Comparison of the Analysis Methods: At the velocity 5 m/s Fig.7 Coefficient of Drag (CD) versus Angle of attack Fig.8 Coefficient of Lift (CL) versus Angle of attack At the velocity 10 m/s Fig.9 Coefficient of Drag (CD) versus Angle of attack Fig.10 Coefficient of Lift (CL) versus Angle of attack At the velocity 15 m/s Fig.11 Coefficient of Drag (CD) veres Angle of attack Fig.12 Coefficient of Lift (CL) versus Angle of attack 3. NUMERICAL METHOD The numerical method utilized for the simulation had a density based solver with implicit formulation, 2-D domain geometry, absolute velocity formulation, and superficial velocity for porous formulation. For this test, a simple solver and an external compressible flow model for the laminar was utilized. The green-gauss cell based was used for the gradient option. There are different equations used for flow, laminar, species, and energy. A simple method was used for the pressure-velocity coupling. For the discretization, a standard pressure was used, and density, momentum and turbulent inetic energy were set to second order upwind. 1. Flow Analysis The computational flow analysis is also performed for NACA 4420 profile. The four sections are considered for flow analysis at the blade from root to tip show in Table.3. Section Distance from hub (m) Chord length (m) 1 2.95 1.651 2 5.275 1.348 3 8.375 0.9469 4 10.7 0.65 Table 3. Sections from hub The maximum L/D ratio is achieved at 6° of angle of attack, for the average velocity of 20 m/sec. Hence the 2-D airfoil sections are created for analysis in ANSYS FLUENT. The aerofoil profile with specific boundary is created in Creo and the computational flow analysis is performed in ANSYS FLUENT. A smart fine mesh is created for the flow area. 2. Geometry and Boundary conditions Inlet velocity for the experiments and simulations is 10 m/sec. A fully laminar flow solution was used in ANSYS FLUENT, where linear laminar equations were used. A simple solver was utilized and the operating pressure was set to zero. Calculations were done for the linear region, i.e. for angles of attack 5 degrees, due to greater reliability of both experimental and computed values in this region. The airfoil profile and boundary conditions are all created. Fig.13 Meshing Fig.14 Geometry with boundary condition Fig.15 Velocity Plot 0° Angle of attack Fig.16 Pressure Plot 0° Angle of attack Fig.17 Velocity Plot 5° Angle of attack Fig.18 Pressure Plot 5° Angle of attack Fig.19 Velocity Plot 10° Angle of attack Fig.20 Pressure Plot 10° Angle of attack Fig.21 Velocity Plot 15° Angle of attack Fig.22 Pressure Plot 15° Angle of attack 4. RESULTS AND DISCUSSION In this paper a Horizontal axis wind turbine blade with NACA 4420 is designed and analysed for different angle of attack and at various sections. The blade with constant angle of attack throughout the length is analyzed to find the maximum L/D ratio. This is done at angle of attack ranging from 0° to 15° for the velocity varies from 5 -20 m/ sec. The maximum L/D ratio is achieved at 6° of angle of attack, for the average velocity of 20 m/sec. It is found that blade with 6° angle of attack has the maximum L/D ratio. The coefficient of Lift and drag is calculated for this NACA 4420 series for the angle of attack 0° to 20°. The coefficient of Lift increases with increase in Angle of attack up to 15°. After 15°, the coefficient of lift decreases and stall occurs at this angle of attack. The lift force at various lengths from hub to tip is analyzed and it is cleared that lift force increases from hub to tip for all range of angle of attack. The lift force increases with increase in angle of attack up to 14° and it starts to decrease after 14°. The drag force begin of dominate beyond this angle of attack. The rate of increase in lift is more for angle of attack from 0° to 10° and between 10° to 15° the rise in lift force is less. But the drag force increases with increase in angle of attack from hub to tip. The rate of increase in drag increase gradually unlike the rate of increase in lift from 0° to 16° of angle of attack and between 10° to 15° the rise in lift force is less. The CFD analysis also carried out using ANSYS FLUENT software. The velocity and pressure distribution at various angles of attack of the blade is shown in Fig. 15-22. These results are coinciding with the wind tunnel experimental values. Hence the results are validated with the experimental work is shown in Fig.7-12. The results demonstrate the pressure distribution over the airfoil. The pressure on the lower surface of the airfoil is greater than that of the incoming flow stream and as a result of that it effectively pushes the airfoil upward, normal to the incoming flow stream. On the other hand, the components of the pressure distribution parallel to the incoming flow stream tend to slow the velocity of the incoming flow relative to the airfoil, as do the viscous stresses. It could be observed that the upper surface on the airfoil experiences a higher velocity compared to the lower surface. By increasing the velocity at higher Mach numbers there would be a shock wave on the upper surface that could cause discontinuity. The authors wish to thank Maharaja Sayajirao University of Baroda, Kalabhavan, Baroda, Gujarat, India for granting permission to carry out this work. The valuable suggestions of Prof. Arvind Mohite, is gratefully acknowledged. 1. H. V. Mahawadiwar, V.D. Dhopte, P.S.Thakare, Dr. R. D. Askhedkar, CFD Analysis of Wind Turbine Blade, International Journal of Engineering Research and Applications, May-Jun 2012, PP- 3188-3194. 2. Chris Kaminsky, Austin Filush, Paul Kasprzak and Wael Mokhtar, A CFD Study of Wind Turbine Aerodynamics, Proceedings of the 2012 ASEE North Central Section Conference 3. C. Rajendran, G. Madhu, P.S. Tide, K. Kanthavel, Aerodynamic Performance Analysis of HAWT Using CFD Technique, European Journal of Scientific Research, ISSN 1450-216X Vol. 65, No. 1 (2011), PP 4. David Hartwanger and Dr. Andrej Howat, 3D Modelling of a Wind Turbine Using CFD, NAFEMS Conference, 2008. 5. Hansen, A. C., and Butterfield, C. P., 1993, "Aerodynamics of Horizontal-Axis Wind Turbines," Annual Review of Fluid Mechanics, 25, pp. 115 – 149. 6. Gómez-Iradi, S., Steijl, R., and Barakos, G. N., "Development and Validation of a CFD Technique for the Aerodynamic Analysis of HAWT," Journal of Solar Energy Engineering, 131, (3). 7. R.S. Amano, R.J. Malloy CFD Analysis on Aerodynamic Design Optimization of Wind Turbine Rotor Blade 2009, m pp. 71-75. 8. Kentaro hayashi, Hiroshi nishino, hiroyuki hosoya, koji fukami, Tooru matsuo, takao kuroiwa low-noise design for wind turbine blades March-2012, pp-74-77 9. Horia dumitrescu, Vladimir cardos florin frunzulica, Alexandru Dumitrache determination of angle of attack for rotating blades 2012 10. S. Rajakumar, Dr.D.Ravindran computational fluid dynamics of wind turbine blade at various angles of attack and low Reynolds number 2010 11. Horia dumitrescu, Vladimir cardos the turbulent boundary Layer on wind turbine blades 2010 pp-125-136 12. Dr. S. P. Vendan, S. Aravind Lovelin, M. Manibharathi and C. RajkumarAnalysis of a Wind Turbine Blade Profile for Tapping Wind Power at the Regions of Low Wind Speed. 13. Ryoichi Samuel Amano and Ryan Malloy Horizontal Axis Wind Turbine Blade Design 2009 A – Swept area of rotor – Angle of attack CD – Drag coefficient CL – Lift coefficient D – Drag force L – Lift force N – RPM of the rotor P – Power developed by rotor r – Radius of rotor R – Resultant force acting on aerofoil V – Free stream velocity VT – Tangential velocity VR -Resultant Velocity – Angular Velocity D – Drag L – Lift rel – relative You must be logged in to post a comment.
{"url":"https://www.ijert.org/aerodynamic-performance-evaluation-of-a-wind-turbine-blade-by-computational-and-experimental-method","timestamp":"2024-11-04T10:41:06Z","content_type":"text/html","content_length":"96348","record_id":"<urn:uuid:439e1276-1acf-427e-9e64-7607c6414dce>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00884.warc.gz"}
$ewenvironment {prompt}{}{} ewcommand {\ungraded }[0]{} ewcommand {\todo }[0]{} ewcommand {\oiint }[0]{{\large \bigcirc }\kern -1.56em\iint } ewcommand {\mooculus }[0]{\textsf {\textbf {MOOC}\ textnormal {\textsf {ULUS}}}} ewcommand {pnoround }[0]{prounddigits {-1}} ewcommand {pnoroundexp }[0]{proundexpdigits {-1}} ewcommand {punitcommand }[1]{\ensuremath {\mathrm {#1}}} ewcommand {\RR } [0]{\mathbb R} ewcommand {\R }[0]{\mathbb R} ewcommand {\N }[0]{\mathbb N} ewcommand {\Z }[0]{\mathbb Z} ewcommand {\sagemath }[0]{\textsf {SageMath}} ewcommand {\d }[0]{\,d} ewcommand {\l }[0]{\ell } ewcommand {\ddx }[0]{\frac {d}{\d x}} ewcommand {\zeroOverZero }[0]{\ensuremath {\boldsymbol {\tfrac {0}{0}}}} ewcommand {\inftyOverInfty }[0]{\ensuremath {\boldsymbol {\tfrac {\infty }{\infty }}}} ewcommand {\zeroOverInfty }[0]{\ensuremath {\boldsymbol {\tfrac {0}{\infty }}}} ewcommand {\zeroTimesInfty }[0]{\ensuremath {\small \boldsymbol {0\cdot \infty }}} ewcommand {\inftyMinusInfty }[0]{\ ensuremath {\small \boldsymbol {\infty -\infty }}} ewcommand {\oneToInfty }[0]{\ensuremath {\boldsymbol {1^\infty }}} ewcommand {\zeroToZero }[0]{\ensuremath {\boldsymbol {0^0}}} ewcommand {\ inftyToZero }[0]{\ensuremath {\boldsymbol {\infty ^0}}} ewcommand {umOverZero }[0]{\ensuremath {\boldsymbol {\tfrac {\#}{0}}}} ewcommand {\dfn }[0]{\textbf } ewcommand {\unit }[0]{\mathop {}\!\mathrm } ewcommand {\eval }[1]{\bigg [ #1 \bigg ]} ewcommand {\seq }[1]{\left ( #1 \right )} ewcommand {\epsilon }[0]{\varepsilon } ewcommand {\phi }[0]{\varphi } ewcommand {\iff }[0]{\Leftrightarrow } \ DeclareMathOperator {\arccot }{arccot} \DeclareMathOperator {\arcsec }{arcsec} \DeclareMathOperator {\arccsc }{arccsc} \DeclareMathOperator {\si }{Si} \DeclareMathOperator {\scal }{scal} \ DeclareMathOperator {\sign }{sign} ewcommand {\arrowvec }[1]{{\overset {\rightharpoonup }{#1}}} ewcommand {\vec }[1]{{\overset {\boldsymbol {\rightharpoonup }}{\mathbf {#1}}}\hspace {0in}} ewcommand {\point }[1]{\left (#1\right )} ewcommand {\pt }[1]{\mathbf {#1}} ewcommand {\Lim }[2]{\lim _{\point {#1} \to \point {#2}}} \DeclareMathOperator {\proj }{\mathbf {proj}} ewcommand {\veci }[0]{{\ boldsymbol {\hat {\imath }}}} ewcommand {\vecj }[0]{{\boldsymbol {\hat {\jmath }}}} ewcommand {\veck }[0]{{\boldsymbol {\hat {k}}}} ewcommand {\vecl }[0]{\vec {\boldsymbol {\l }}} ewcommand {\uvec } [1]{\mathbf {\hat {#1}}} ewcommand {\utan }[0]{\mathbf {\hat {t}}} ewcommand {\unormal }[0]{\mathbf {\hat {n}}} ewcommand {\ubinormal }[0]{\mathbf {\hat {b}}} ewcommand {\dotp }[0]{\bullet } ewcommand {\cross }[0]{\boldsymbol \times } ewcommand {\grad }[0]{\boldsymbol abla } ewcommand {\divergence }[0]{\grad \dotp } ewcommand {\curl }[0]{\grad \cross } ewcommand {\lto }[0]{\mathop {\ longrightarrow \,}\limits } ewcommand {\bar }[0]{\overline } ewcommand {\surfaceColor }[0]{violet} ewcommand {\surfaceColorTwo }[0]{redyellow} ewcommand {\sliceColor }[0]{greenyellow} ewcommand {\ vector }[1]{\left \langle #1\right \rangle } ewcommand {\sectionOutcomes }[0]{} ewcommand {\HyperFirstAtBeginDocument }[0]{\AtBeginDocument }$ We introduce the idea of a vector at every point in space. Types of functions When we started on our journey exploring calculus, we investigated functions $f:\R \to \R$. Typically, we interpret these functions as being curves in the $(x,y)$-plane: We’ve also studied vector-valued functions $\vec {f}:\R \to \R ^n$ . We can interpret these functions as parametric curves in space: We’ve also studied functions of several variables $F:\R ^n \to \R$ . We can interpret these functions as surfaces in $\R ^{n+1}$ . For example if , then $F:\R ^2\to \R$ plots a surface in $\R ^3$ Now we are ready for a new type of function. Vector fields Now we will study vector-valued functions of several variables: We interpret these functions as vector fields, meaning for each point in the $(x,y)$-plane we have a vector. To some extent functions like this have been around us for a while, for if then $\grad G$ is a vector-field. Let’s be explicit and write a definition. vector field $\R ^n$ is a function where for every point in the domain, we assign a vector to the range. Consider the following table describing a vector field $\vec {F}$ What is $\vec {F}(1,7)$ Consider the following picture: Which vector field is illustrated by this picture? $\vec {F}(x,y)=\vector {x,y/2}$ $\vec {F}(x,y)=1/2$ $\vec {F}(x,y)=x+y/2$ $\vec {F}(x,y)=\vector {1,1/2}$ Note that with the first choice, the lengths of the vectors is changing, and that does not appear to be the case with our vector field. The second choice is not a vector field. The third choice is not a vector field. The fourth choice is a constant vector field, and is the correct answer. Properties of vector fields As we will see in the chapters to come, there are two important qualities of vector fields that we are usually on the look-out for. The first is rotation and the second is expansion. In the sections to come, we will make precise what we mean by rotation and expansion. In this section we simply seek to make you aware that these are the fundamental properties of vector fields. Radial fields Very loosely speaking a radial field is one where the vectors are all pointing toward a spot, or away from a spot. Let’s see some examples of radial vector fields. Here we see $\vec {F}(x,y) = \vector {\frac {x}{\sqrt {x^2+y^2}},\frac {y}{\sqrt {x^2+y^2}}}$ . Those vectors are all pointing away from the central point! Here we see $\vec {G}(x,y) = \vector {\frac {-x}{x^2+y^2},\frac {-y}{x^2+y^2}}$ . Those vectors are all pointing toward the central point. Here we see $\vec {H}(x,y,z) = \vector {x,y,z}$ . This is a three-dimensional vector field where all the vectors are pointing away from the central point. Each of the vector fields above is a radial vector field. Let’s give an explicit definition. radial vector field is a field of the form $\vec {F}:\R ^n\to \R ^n$ where where is a real number. Fun fact: Newton’s law of gravitation defines a radial vector field. $\vec {F}(x,y,z) = \vector {x,y,z}$ a radial vector field? yes no Absolutely! This vector field can be rewritten as: where $p=\answer {0}$. Some fields look like they are expanding and are. Other fields look like the are expanding but they aren’t. In the sections to come, we’re going to use calculus to precisely define what we mean by a field “expanding.” This property will be called divergence. Rotational fields Vector fields can easily exhibit what looks like “rotation” to the human eye. Let’s show you a few examples. Here we see $\vec {F}(x,y) = \vector {-y,x}$ . This vector field looks like it has counterclockwise rotation. At this point, we’re going to give some “spoilers.” It turns out that from a local perspective, meaning looking at points very very close to each other, only the first example exhibits “rotation.” While the second example looks like it is “rotating,” as we will see, it does not exhibit “local rotation.” Moreover, in future sections we will see that rotation (even local rotation) in three-dimensional space must always happen around some “axis” like this: In the sections to come, we will use calculus to precisely explain what we mean by “local rotation.” This property will be called Gradient fields In this final section, we will talk about fields that arise as the gradient of some differentiable function. As we will see in future sections, these are some of the nicest vector fields to work with Consider any differentiable function $F:\R ^n\to \R$ . A gradient field is a vector field $\vec {G}:\R ^n\to \R ^n$ where Note, since we are assuming is differentiable, we are also assuming that $\vec {G}$ is defined for all points in $\R ^n$ Let’s take a look at a gradient field. $F(x,y) = \frac {\sin (3x)+\sin (2y)}{1+x^2+y^2}$ . A plot of this function looks like this: The gradient field of looks like this: Note we can see the vector pointing in the initial direction of greatest increase. Let’s see a plot of both together: Remind me, what direction do gradient vectors point? Gradient vectors point to the maximum. Gradient vectors point up. Gradient vectors point in the initial direction of greatest increase. Now consider $F(x,y) = \frac {1}{\sqrt {x^2+y^2}}$ . A plot of this function looks like this: The gradient field of looks like this: Note we can see the vector pointing in the initial direction of greatest increase. Let’s see a plot of both together: The shape of things to come Now we present the beginning of a big idea. By the end of this course, we hope to give you a glimpse of “what’s out there.” For this we’re going to need some notation. Think of $A$ and $B$ as sets of numbers, like $A=\R$ or $A=\R ^n$ or $B=\R$ or $B=\R ^n$. • $C(A,B)$ is the set of continuous functions from $A$ to $B$. • $C^1(A,B)$ is the set of differentiable functions from $A$ to $B$ whose first-derivative is continuous. • $C^2(A,B)$ is the set of differentiable functions from $A$ to $B$ whose first and second derivatives are continuous. • $C^n(A,B)$ is the set of differentiable function from $A$ to $B$ where the first $n$th derivatives are continuous. • $C^\infty (A,B)$ is the set of differentiable functions from $A$ to $B$ where all of the derivatives are continuous. Here is a deep idea: The gradient turns functions of several variables into vector fields. We can write this with our new notation as: The Clairaut gradient test Now we give a method to determine if a field is a gradient field. A vector field $\vec {G}(x,y) = \vector {M(x,y),N(x,y)}$ , where have continuous partial derivatives, is a gradient field if and only if for all Let’s take a second and think about the gradient as a function on functions. Let $C^\infty (A,B)$ be the set of all function from th-derivatives are continuous for all values of . The gradient takes functions of several variables and maps them to vector fields: $\vec {G}(x,y) = \vector {M(x,y),N(x,y)}$ if and only if there is some function $F:\R ^2\to \R$ where but if all the partial derivatives are continuous, then: This is true if and only if $\pp [N]{x} -\pp [M]{y} = 0$ $\vec {G} = \vector {2x+3y,2+3x+2y}$ a gradient field? If so find a potential function. To start, let’s do Clairaut’s gradient test: And so And so we see $\pp [N]{x} -\pp [M]{y} = 0$ , and thus $\vec {G}$ is a gradient field. Now let’s try to find a potential function. To do this, we’ll antidifferentiate—in essence we want to “undo” the gradient. Write with me: where is a function of . In a similar way: We need to make this happen, we set $c(y) = 2y+y^2$ $k(x) = x^2$ . From this we find our potential function is $F(x,y) = x^2+3xy+ y^2 + 2y$ Now try your hand at these questions: $\vec {G} = \vector {x^3,-y^4}$ a gradient field? If so find a potential function. yes no $\vec {G} = \vector {y\cos (x),\sin (x)}$ a gradient field? If so find a potential function. yes no
{"url":"https://ximera.osu.edu/mooculus/calculus3/vectorFields/digInVectorFields","timestamp":"2024-11-03T04:26:10Z","content_type":"text/html","content_length":"96765","record_id":"<urn:uuid:0001244f-f69e-4b6d-a12b-ed7f609ebd91>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00622.warc.gz"}
Projectors, generalized inverses and the BLUE's Radhakrishna Rao, C. (1974) Projectors, generalized inverses and the BLUE's Journal of the Royal Statistical Society - Series B: Statistical Methodology, 36 (3). pp. 442-448. ISSN 1369-7412 Full text not available from this repository. Official URL: http://www.jstor.org/stable/2984930 It is well known that in the Gauss-Markov model (Y, Xβ, σ^2V) with |V| ≠ 0, the BLUE (best linear unbiased estimator) of Xβ is Y[1], the orthogonal projection of Y on M(X), the space spanned by the columns of X, with inner product defined as (x, y)=x'V^−1y. A quadratic function of Y[2], the projection of Y on the orthogonal complement of M(X), provides an estimate of σ^2. It may be seen that Y= Y[1]+Y[2]. When V is singular, the inner product definition as in non-singular case is not possible. In this paper a suitable theory of projection operators is developed for the case |V|=0, and a decomposition Y=Y[1]+Y[2] is obtained such that Y[1] is the BLUE of Xβ and a quadratic function of Y[2] is the MINQUE (Minimum Norm Quadratic Unbiased Estimator) of σ^2 in the sense of Rao (1972). Item Type: Article Source: Copyright of this article belongs to John Wiley and Sons. Keywords: Gauss-Markov Model; Blue; Minque; Projection Operator; G-inverse; Singular Dispersion Matrix ID Code: 54754 Deposited On: 12 Aug 2011 13:19 Last Modified: 12 Aug 2011 13:19 Repository Staff Only: item control page
{"url":"https://repository.ias.ac.in/54754/","timestamp":"2024-11-15T03:07:45Z","content_type":"application/xhtml+xml","content_length":"17424","record_id":"<urn:uuid:f5a175bc-a412-4395-a142-9d9635d9da25>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00361.warc.gz"}
Daily Medium Light Up Puzzle for Sunday 26th September 2010 This is an example of a completed LightUp puzzle. Your aim is to place light bulbs in to the grid so that, 1. All white cells in the grid are lit. 2. One light bulb is not lighting another light bulb. 3. Some of the black cells in the grid will have number clues. This clue tells you how many light bulbs are surrounding that cell. A light bulb will light all the cells horizontally and vertically away from it until it either hits the grid's edge, or reaches a black cell.
{"url":"https://puzzlemadness.co.uk/lightup/medium/2010/09/26","timestamp":"2024-11-11T00:59:32Z","content_type":"text/html","content_length":"54095","record_id":"<urn:uuid:e2d71f74-96bd-49c6-a970-16c9e98bdf96>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00076.warc.gz"}
Exploring the Insertion Sort Algorithm | CodingDrills Exploring the Insertion Sort Algorithm Exploring the Insertion Sort Algorithm Sorting algorithms are an essential part of a programmer's toolkit. They enable us to arrange elements in a collection in a specific order, making it easier to search and retrieve data efficiently. One such sorting algorithm that we will explore in this tutorial is the Insertion Sort algorithm. What is Insertion Sort? Insertion Sort is a simple comparison-based sorting algorithm that builds the final sorted array one item at a time. It is particularly efficient for sorting small datasets or when the array is already partially sorted. The algorithm repeatedly takes the next element and inserts it into its correct position within the already sorted portion of the array. The Algorithm in Action Let's dive into the implementation details of the Insertion Sort algorithm. To begin, we assume that the first element in the array is already sorted. We then iterate through the remaining elements, using each one to extend the sorted portion of the array. Consider the following example array: arr = [5, 2, 4, 6, 1, 3] We start with the first unsorted element, which is 2. We compare it with the previous element (5) and since 5 is greater than 2, we swap their positions. The array now becomes: arr = [2, 5, 4, 6, 1, 3] Moving to the next unsorted element (4), we compare it with the previous element (5) and find that 5 is greater. Therefore, we leave it in its position. The array remains unchanged: arr = [2, 5, 4, 6, 1, 3] Following the same steps for the remaining elements, we eventually obtain the sorted array: arr = [1, 2, 3, 4, 5, 6] Pseudocode for Insertion Sort Now that we understand the concept, let's look at the pseudocode for the Insertion Sort algorithm: for i in range(1, length(arr)): key = arr[i] j = i-1 while j >= 0 and arr[j] > key: arr[j + 1] = arr[j] j = j - 1 arr[j + 1] = key Time Complexity Analysis When analyzing the efficiency of an algorithm, it is crucial to consider its time complexity. For Insertion Sort, the best-case time complexity is O(n) when the array is already sorted. However, in the worst case, when the array is in reverse order, the time complexity is O(n^2). Although Insertion Sort might not be the most efficient sorting algorithm for large datasets, its simplicity and effectiveness for small collections make it a valuable tool in certain scenarios. In this tutorial, we explored the Insertion Sort algorithm, which works by repeatedly inserting the next element into its proper place within the already sorted portion of the array. We discussed the algorithm's implementation, included pseudocode, and analyzed its time complexity. Remember, Insertion Sort is an excellent choice when dealing with small datasets or partially sorted arrays. By understanding the inner workings and trade-offs of different sorting algorithms, you can make informed decisions to optimize the efficiency of your applications. Now that you have a firm grasp of Insertion Sort, try implementing it in your own code and experiment with different scenarios to deepen your understanding. Happy coding! Please note that the provided blog post is written in Markdown format and can be converted to HTML later. Feel free to adjust the formatting according to your needs. Ada AI Hi, I'm Ada, your personal AI tutor. I can help you with any coding tutorial. Go ahead and ask me anything. I have a question about this topic
{"url":"https://www.codingdrills.com/tutorial/introduction-to-sorting-algorithms/insertion-sort-steps","timestamp":"2024-11-09T22:33:26Z","content_type":"text/html","content_length":"310583","record_id":"<urn:uuid:3c9ce67c-0b8d-4962-807c-645361b1d0b7>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00315.warc.gz"}
Sacred geometry behind time numbers 2 Kathara grids joined together form the Cubistic matrix Diagram was removed. Time numbers are based on the numbers 12, 24 and 60 but overall they can be turned into base 12 forming: • 12:12=Kathara grid:Kathara grid As we know the Kathara grid has 28 paths so 12:12 encodes • Kathara grid:Kathara grid=28:28 Whats interesting about this geometry is that 28:28=56 and 56 forms the basic structure of the 64 tetrahedron grid also the Kathara grid is encoded in the star tetrahedron and 12:12=Star tetrahedron:Star tetrahedron and from this we can place a star tetrahedron inside the other and form the fruit of life which encodes Metatrons cube/64 tetrahedron grid and this forms a fractaling star tetrahedron which forms the Sri Yantra which corresponds to E8 and the I Ching which are both forms of the 64 tetrahedron grid. The star tetrahedron contained within a star tetrahedron structure also encodes vibrational dimensions as we know and this structure corresponds to time numbers which already correspond to vibrational dimensions because time numbers are basically 2 multidimensional polygons joined together which both correspond to tetractyses(Which correspond to tetrahedron grids and dimensions) and they both form a 3D multidimensional polygon which corresponds to the cosmic tree of life which is formed out of the powers of 3 so it is base 3 and it is the vibrational dimensions. The Hebrew alphabet is base 3 and when its in base 3 it forms the 231 gates and it also forms a 3 by 3 cube so it forms the cubistic matrix and cosmic tree of life.
{"url":"https://www.64tge8st.com/post/2020/01/08/sacred-geometry-behind-time-numbers","timestamp":"2024-11-12T20:07:24Z","content_type":"text/html","content_length":"1050840","record_id":"<urn:uuid:813968fa-eb3d-4c90-9725-0fa2fac31010>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00118.warc.gz"}
Herbst and Greenwood in <em>The Conversation</em>: The Tell-Tale Clue to How Meteorites Were Made Herbst and Greenwood in The Conversation: The Tell-Tale Clue to How Meteorites Were Made Wesleyan faculty frequently publish articles based on their scholarship in The Conversation US, a nonprofit news organization with the tagline, “Academic rigor, journalistic flair.” In a new article, John Monroe Van Vleck Professor of Astronomy Bill Herbst and Assistant Professor of Earth and Environmental Sciences James Greenwood write about the model they’ve proposed for how the most common kind of meteorites form—a mystery that has dogged scientists for decades. The tell-tale clue to how meteorites were made, at the birth of the solar system April 26, 1803 was an unusual day in the small town of L’Aigle in Normandy, France – it rained rocks. Over 3,000 of them fell out of the sky. Fortunately, no one was injured. The French Academy of Sciences investigated and proclaimed, based on many eyewitness stories and the unusual look of the rocks, that they had come from space. The Earth is pummeled with rocks incessantly as it orbits the Sun, adding around 50 tons to our planet’s mass every day. Meteorites, as these rocks are called, are easy to find in deserts and on the ice plains of Antarctica, where they stick out like a sore thumb. They can even land in backyards, treasures hidden among ordinary terrestrial rocks. Amateurs and professionals collect meteorites, and the more interesting ones make it to museums and laboratories around the world for display and study. They are also bought and sold on eBay. Despite decades of intense study by thousands of scientists, there is no general consensus on how most meteorites formed. As an astronomer and a geologist, we have recently developed a new theory of what happened during the formation of the solar system to create these valuable relics of our past. Since planets form out of collisions of these first rocks, this is an important part of the history of the Earth. This meteor crater in Arizona was created 50,000 years ago when an iron meteorite struck the Earth. It is about one mile across. (Photo by Bill Herbst, CC BY-SA) The mysterious chondrules Drew Barringer (left), owner of Arizona meteor crater, his wife, Clare Schneider, and author William Herbst in the Van Vleck Observatory Library of Wesleyan University, where an iron meteorite from the crater is on display. (Photo courtesy of Bill Herbst) About 10% of meteorites are pure iron. These form through a multi-step process in which a large molten asteroid has enough gravity to cause iron to sink to its center. This builds an iron core just like the Earth’s. After this asteroid solidifies, it can be shattered into meteorites by collisions with other objects. Iron meteorites are as old as the solar system itself, proving that large asteroids formed quickly and fully molten ones were once abundant. The other 90% of meteorites are called “chondrites” because they are full of mysterious, tiny spheres of rock known as “chondrules.” No terrestrial rock has anything like a chondrule inside it. It is clear that chondrules formed in space during a brief period of intense heating when temperatures reached the melting point of rock, around 3,000 degrees Fahrenheit, for less than an hour. What could possibly account for that? A closeup of the Semarkona meteorite showing dozens of chondrules. (Image by Kenichi Abe) Researchers have come up with many hypotheses through the last 40 years. But no consensus has been reached on how this brief flash of heating happened. The chondrule problem is so famously difficult and contentious that when we announced to colleagues a few years ago that we were working on it, their reaction was to smile, shake their heads and offer their condolences. Now that we have proposed a solution, we are preparing for a more critical response, which is fine, because that’s the way science advances. The flyby model Our idea is quite simple. Radioactive dating of hundreds of chondrules shows that they formed between 1.8 and 4 million years after the beginning of the solar system – some 4.6 billion years ago. During this time, fully molten asteroids, the parent bodies of the iron meteorites, were abundant. Volcanic eruptions on these asteroids released tremendous amounts of heat into the space around them. Any smaller objects passing by during an eruption would experience a short, intense blast of heat. To test our hypothesis, we split up the challenge. The astronomer, Herbst, crunched the numbers to determine how much heating was necessary and for how long to create chondrules. Then the geologist, Greenwood, used a furnace in our lab at Wesleyan to re-create the predicted conditions and see if we could make our own chondrules. Laboratory technician Jim Zaresky (top) loads a programmable furnace as co-author Jim Greenwood looks on, in his laboratory at Wesleyan. This is where the synthetic chondrules are made. (Photo by Bill Herbst) The experiments turned out to be quite successful. We put some fine dust from Earth rocks with compositions resembling space dust into a small capsule, placed it in our furnace and cycled the temperature through the predicted range. Out came a nice-looking synthetic chondrule. Case closed? Not so fast. Two problems emerged with our model. In the first place, we had ignored the bigger issue of how chondrules came to be part of the whole meteorite. What is their relationship to the stuff between chondrules – called matrix? In addition, our model seemed a bit too chancy to us. Only a small fraction of primitive matter will be heated in the way we proposed. Would it be enough to account for all those chondrule-packed meteorites hitting the Earth? A comparison of a synthetic chondrule (left) made in the Wesleyan lab with a heating curve from the flyby model, with an actual chondrule (right) from the Semarkona meteorite. The crystal structure is quite similar, as shown in the enlargements (bottom row). (Photo by Jim Greenwood) Making whole meteorites To address these issues, we extended our initial model to consider flyby heating of a larger object, up to a few miles across. As this material approaches a hot asteroid, parts of it will vaporize like a comet, resulting in an atmosphere rich in oxygen and other volatile elements. This turns out to be just the kind of atmosphere in which chondrules form, based on previous detailed chemical We also expect the heat and gas pressure to harden the flyby object into a whole meteorite through a process known as hot isostatic pressing, which is used commercially to make metal alloys. As the chondrules melt into little spheres, they will release gas to the matrix, which traps those elements as the meteorite hardens. If chondrules and chondrites form together in this manner, we expect the matrix to be enhanced in exactly the same elements that the chondrules are depleted. This phenomenon, known as complementarity, has, in fact, been observed for decades, and our model provides a plausible explanation for it. The authors’ model for forming chondrules. A small piece of rock (right) — a few miles across or less — swings close to a large hot asteroid erupting lava at its surface. Infrared radiation from the hot lava briefly raises the temperature on the small piece of rock high enough to form chondrules and harden part of that object into a meteorite. (Model by Bill Herbst/Icarus) Perhaps the most novel feature of our model is that it links chondrule formation directly to the hardening of meteorites. Since only well-hardened objects from space can make it through the Earth’s atmosphere, we would expect the meteorites in our museums to be full of chondrules, as they are. But hardened meteorites full of chondrules would be the exception, not the rule, in space, since they form by a relatively chancy process – the hot flyby. We should know soon enough if this idea holds water, since it predicts that chondrules will be rare on asteroids. Both Japan and the United States have ongoing missions to nearby asteroids that will return samples over the next few years. If those asteroids are full of chondrules, like the hardened meteorites that make it to the Earth’s surface, then our model can be discarded and the search for a solution to the famous chondrule problem can go on. If, on the other hand, chondrules are rare on asteroids, then the flyby model will have passed an important test. This article is republished from The Conversation under a Creative Commons license. Read the original article.
{"url":"https://newsletter.blogs.wesleyan.edu/2019/06/07/herbst-and-greenwood-in-the-conversation-the-tell-tale-clue-to-how-meteorites-were-made/","timestamp":"2024-11-03T12:06:58Z","content_type":"text/html","content_length":"170813","record_id":"<urn:uuid:d13de9bf-8ac6-423f-aa39-bf4e923272e8>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00483.warc.gz"}
Lesson 12 Larger Populations 12.1: First Name versus Last Name (5 minutes) The purpose of this warm-up is for students to begin to see the need for samples of data when the population is too large. In this activity, students are asked to think about a question involving all the students at their school and compare the question to an earlier lesson in which the population was small and it was easy to obtain data for the entire population. Give students 2 minutes of quiet work time followed by a whole-class discussion. Student Facing Consider the question: In general, do the students at this school have more letters in their first name or last name? How many more letters? 1. What are some ways you might get some data to answer the question? 2. The other day, we compared the heights of people on different teams and the lengths of songs on different albums. What makes this question about first and last names harder to answer than those Activity Synthesis The purpose of the discussion is to highlight the methods of getting data for the school more than it is the method of computing the answer. Select some students to share their responses. Students who have elected to sum all the letters in the first names in the school and all the letters in the last names in the school may note that it is a simple comparison to tell whether there are more in first or last names, since you get one single large number for each group. (Comparing data sets.) Students who have elected to calculate the mean for each group and use MAD as a method of comparison may note that while the calculations may take more time, they give you more precise information, such as knowing about how long first names and last names are, as well as a way to compare the two sets. (Using the general rule from the previous lesson.) Students who suggest surveying a small group of students may point out that it would be easier to do the calculation with a smaller group. The information would not be as accurate, but it would take a lot less time and might give a good general idea. It would depend on how accurate you needed your answer to be. (Introduction to sampling.) 12.2: John Jacobjingleheimerschmidt (10 minutes) In this activity, students are asked to compare two groups (length of preferred names and last names) by collecting data from the class. They are asked if the data from the class gives enough information to draw a conclusion about a larger group (MP3). In the following activities, students will be introduced to the idea of sampling. This activity gives students the first chance to experience why sampling might be needed. Compute the mean and MAD for the number of letters in each student’s preferred name (if students do not go by their first name, you may use their nickname, middle name, etc.). Do the same for their last names. Give students 1 minute of quiet work time for the first 2 questions followed by a quick display of information then 5 more minutes of quiet work time and a whole-class discussion. If a digital solution is available, input the data for the class to find the mean and mean absolute deviation for each data set. If a digital solution is not available, this information should be calculated based on the class roster prior to this activity. After students have had a minute to work on answering the first two questions, provide students with the mean and MAD for the names in the Tell students that if they have a preferred name other than their official first name (nickname, middle name, etc.) they may use this in place of the first name. Student Facing Continue to consider the question from the warm-up: In general, do the students at this school have more letters in their first name or last name? How many more letters? 1. How many letters are in your first name? In your last name? 2. Do the number of letters in your own first and last names give you enough information to make conclusions about students' names in your entire school? Explain your reasoning. 3. Your teacher will provide you with data from the class. Record the mean number of letters as well as the mean absolute deviation for each data set. 1. The first names of the students in your class. 2. The last names of the students in your class. 4. Which mean is larger? By how much? What does this difference tell you about the situation? 5. Do the mean numbers of letters in the first and last names for everyone in your class give you enough information to make conclusions about students’ names in your entire school? Explain your Activity Synthesis The purpose of the discussion is for students to see how the data they have might relate to a larger group. In particular, that a sample might give some estimate of a larger population, but the estimate should not be assumed to be exact. Consider asking these questions for discussion: • “Do you expect the mean length of first names for the school to be exactly the same as the mean length for the class?” (Probably not exactly the same. It may be close, though.) • “Do you expect the mean length of first names for the school to be much larger or smaller or about the same as the mean length for the class? Explain your reasoning.” (Unless there are a few outliers in the class, it should be fairly close to the mean from the class.) Speaking, Listening, Conversing: MLR8 Discussion Supports. Use this routine to support whole-class discussion. Display the sentence frames: “The mean length of first names for the school will not be exactly the same as the mean length for the class because ________ .” and “The mean length of first names for the school should be larger/smaller/about the same as the mean length for the class because ________ .” As students share their responses, press for details by asking, “Can you use an example from your name and our class data?” and “Is your answer the same for other classes and schools?” This will support rich and inclusive discussion about how the data from the sample might relate to a larger group. Design Principle(s): Support sense-making; Cultivate conversation 12.3: Siblings and Pets (10 minutes) In this activity, students think a little more deeply about the data we would like to know and how that compares to the data we can collect easily and quickly (MP1). They are presented with a statistical question that does not have an obvious answer. Students are then asked to consider ways they might begin gathering data to answer the question, but are asked to realize that the data they could reasonably collect is not everyone addressed by the question. Following the activity, the discussion defines the terms population and sample. Arrange students in groups of 2. Set up the context by asking students, “Do people who are the only child have more pets?” then to provide a possible explanation for their answer. For example, maybe only children do have more pets because the family can better afford to take care of an animal with only 1 child. Maybe they do not because smaller families may live in smaller places and not have room for a lot of pets. Give students 5 minutes of partner work time followed by 5 minutes of whole-class discussion. Student Facing Consider the question: Do people who are the only child have more pets? 1. Earlier, we used information about the people in your class to answer a question about the entire school. Would surveying only the people in your class give you enough information to answer this new question? Explain your reasoning. 2. If you had to have an answer to this question by the end of class today, how would you gather data to answer the question? 3. If you could come back tomorrow with your answer to this question, how would you gather data to answer the question? 4. If someone else in the class came back tomorrow with an answer that was different than yours, what would that mean? How would you determine which answer was better? Activity Synthesis The purpose of the discussion is to show the difference between the data we would like to have to answer the question and the data we have available. Some questions for discussion: • “If we had all the time and money in the world and wanted to answer this question, who would we need to collect data from?” (Everyone in the world.) • “What would you do with the data collected from everyone to answer the questions?” (Find the mean and MAD of the data from the two sets and compare them like we did in previous lessons.) • “Why is it unreasonable to actually collect all the necessary data to answer the question?” (There are too many people to collect data from. There is not enough time to get to everyone in the world, and I cannot travel everywhere.) • “Since it may be difficult to guess an answer without doing any research, but we cannot get all of the data we want, what data could you get that would help estimate an answer?” (It would be good to ask a few people in different parts of the world and try to get different groups represented.) Define population and sample. A population is the entire pool from which data is taken. Examples include (depending on the question) “all humans in the world,” “all 7th graders at our school,” or “oak trees in North America.” In this usage, it does not have to refer only to groups of people or animals. A sample is the part of the population from which data is actually collected. Examples (related to the population examples) include “5 people from each country,” “the first 30 seventh graders to arrive at our school,” or “8 oak trees from the forest near our school.” Ask students, "What is the population for the question about only children and their pets?" (Everyone in the world.) Note that we would need data from everyone, including those who don't have pets or do have siblings. Ask students, "What might be a sample we could use to answer the question?" (The students in our class, my neighbors, a few people from different countries.) After getting several responses, ask, "What might be the benefits and drawbacks of each of these samples?" (Some may be more convenient, but would not represent the population as well or vice-versa.) Explain: While it is best to have data for the entire population, there are many reasons to use a sample. • More manageable. With very large populations, the amount of data can be hard to collect and work with, so a smaller subset may still be informative and easier to work with. Example: Find the average size of a grain of sand. • Necessary. Sometimes it is impossible to reach the entire population, so a sample is all that is available. Example: Find the average lifespan of tuna. • Speed. Sometimes a rough estimate is all that is needed and a sample of data is enough to estimate the population characteristic. Example: Out of curiosity, what is the median number of apps on • Cost. Sometimes it is very costly to obtain the data needed, so a sample can reduce the cost. Example: Find the average amount of hydrogen in moon rocks. Representation: Develop Language and Symbols. Create a display of important terms and vocabulary. Invite students to suggest language or diagrams to include that will support their understanding of: populations and sample. Supports accessibility for: Conceptual processing; Language Representing, Speaking: MLR2 Collect and Display. To help students make sense of the terms “sample” and “population”, draw a diagram of a few circles inside a larger circle on a visual display. Label the large outer circle “population” and the small inner circles “sample.” As students respond to the question “What is the population for the question about only children and their pets?”, write the population on the visual display. As students respond to the question “What might be a sample we could use to answer the question?”, write the samples in different inner circles on the visual display. Listen for and amplify words and phrases that define these terms, such as “part of” or “entire.” This will help students visualize a sample as part of a population and understand that there are multiple samples inside a population. Design Principle(s): Support sense-making (for representation); Maximize meta-awareness 12.4: Sampling the Population (10 minutes) This activity gives students the opportunity to practice the new vocabulary of population and sample by identifying the population from a set of questions and describing a possible sample that could be used to get some information to begin answering the question. Since these words have a very specific meaning in the context of statistics that is different from the colloquial use of the words, it is important for students to work with the vocabulary in specific situations to understand their meaning (MP6). Arrange students in groups of 2. Allow students 3 minutes of quiet work time followed by 3 minutes of partner discussion then a whole-class discussion. While in partner discussion, suggest students compare their answers and discuss any advantages or disadvantages for the samples they proposed. Student Facing For each question, identify the population and a possible sample. 1. What is the mean number of pages for novels that were on the best seller list in the 1990s? 2. What fraction of new cars sold between August 2010 and October 2016 were built in the United States? 3. What is the median income for teachers in North America? 4. What is the average lifespan of Tasmanian devils? Student Facing Are you ready for more? Political parties often use samples to poll people about important issues. One common method is to call people and ask their opinions. In most places, though, they are not allowed to call cell phones. Explain how this restriction might lead to inaccurate samples of the population. Activity Synthesis The purpose of the discussion is to further solidify the meaning of the terms population and sample for students. Consider asking these questions for discussion: • “For each question, could there be another population than the one you gave?” (No. The population refers to all of the individuals that pertain to the question.) • “For each question, could there be another sample than the one you gave?” (Yes. A sample refers to a few of the individuals from whom data will be collected and does not specify the number or how the individuals are selected.) • “What are some of the advantages and disadvantages you determined for the samples you chose?” (Some are easy to work with, but might miss large sections of the population.) • “What is a question you could ask for which the population would be all of the books in your house?” (For example, “What is the average number of pages in books in my house?”) • “What is a question you could ask for which the sample could be all of the books in your house?” (For example, “What is the average number of pages in all the books ever written?”) Explain that a well-phrased question should only have 1 population (a question that is not well-phrased should be reconsidered so that the purpose of the question is clear), but there are usually many ways to find samples within that population. In future lessons, we will explore some important aspects to consider while selecting a sample. Lesson Synthesis Consider asking these questions to reinforce the ideas from this lesson: • “When the groups become too large, how can we obtain some data to begin answering a question about the group?” • “What are some drawbacks of using samples instead of the entire population?” (The value for the measure of center will not be exact and some variability may be lost. Some groups may not have been included in the sample, so their input is lost.) • “What are some reasons samples are necessary?” (More manageable, impossible to reach the entire population, speed, cost.) • “Someone wants to know what breed of dog is most popular as a pet in the state. What is a sample that could be used?” (A few dog owners from each of the major cities in the state and a few dog owners from the rural areas.) • “The principal of a school has access to the grades for students at the school. If we use these grades as a sample, what is a population that the data could be applied to?” (The entire school district, the state, the United States, or all students around the world.) 12.5: Cool-down - How Many Games? (5 minutes) Student Facing A population is a set of people or things that we want to study. Here are some examples of populations: • All people in the world • All seventh graders at a school • All apples grown in the U.S. A sample is a subset of a population. Here are some examples of samples from the listed populations: • The leaders of each country • The seventh graders who are in band • The apples in the school cafeteria When we want to know more about a population but it is not feasible to collect data from everyone in the population, we often collect data from a sample. In the lessons that follow, we will learn more about how to pick a sample that can help answer questions about the entire population.
{"url":"https://curriculum.illustrativemathematics.org/MS/teachers/2/8/12/index.html","timestamp":"2024-11-04T11:17:47Z","content_type":"text/html","content_length":"98770","record_id":"<urn:uuid:0b5dce33-bf14-4ca1-988f-a0452110a23d>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00354.warc.gz"}
Daniel's Blog Welcome to my blog. Here, I write about various subjects, including (but not limited to) functional programming, compiler development, programming language theory, and occasionally video games. I hope you find something useful here! Recent posts: • Implementing and Verifying "Static Program Analysis" in Agda, Part 5: Our Programming Language 3794 words, about 18 minutes to read. In the previous several posts, I’ve formalized the notion of lattices, which are an essential ingredient to formalizing the analyses in Anders Møller’s lecture notes. ... • Implementing and Verifying "Static Program Analysis" in Agda, Part 4: The Fixed-Point Algorithm 2308 words, about 11 minutes to read. In the preivous post we looked at lattices of finite height, which are a crucial ingredient to our static analyses. In this post, I will describe the specific algorithm that makes use of these lattices; this algorithm will be at the core of this series. ... • Implementing and Verifying "Static Program Analysis" in Agda, Part 3: Lattices of Finite Height 7169 words, about 34 minutes to read. In the previous post, I introduced the class of finite-height lattices: lattices where chains made from elements and the less-than operator < can only be so long. ... • Implementing and Verifying "Static Program Analysis" in Agda, Part 2: Combining Lattices 5430 words, about 26 minutes to read. In the previous post, I wrote about how lattices arise when tracking, comparing and combining static information about programs. I then showed two simple lattices: the natural numbers, and the (parameterized) “above-below” lattice, which modified an arbitrary set with “bottom” and “top” elements ($\bot$ and $\top$ respectively). ... • Implementing and Verifying "Static Program Analysis" in Agda, Part 1: Lattices 4965 words, about 24 minutes to read. This is the first post in a series on static program analysis in Agda. See the introduction for a little bit more context. ... • Implementing and Verifying "Static Program Analysis" in Agda, Part 0: Intro 862 words, about 5 minutes to read. Some years ago, when the Programming Languages research group at Oregon State University was discussing what to read, the Static Program Analysis lecture notes came up. ... • Microfeatures I Love in Blogs and Personal Websites 3126 words, about 15 minutes to read. Some time ago, Hillel Wayne published an article titled Microfeatures I’d like to see in more languages . In this article, he described three kinds of features in programming languages: fundamental features, deeply engrained features, and nice-to-have convenience features. ... • Integrating Agda's HTML Output with Hugo 4001 words, about 19 minutes to read. One of my favorite things about Agda are its clickable HTML pages. If you don’t know what they are, that’s pages like Data. ... • The "Deeply Embedded Expression" Trick in Agda 2470 words, about 12 minutes to read. I’ve been working on a relatively large Agda project for a few months now, and I’d like to think that I’ve become quite proficient. ... • Bergamot: Exploring Programming Language Inference Rules 2421 words, about 12 minutes to read. Inference Rules and the Study of Programming Languages In this post, I will talk about inference rules, particularly in the field of programming language theory. ...
{"url":"https://danilafe.com/","timestamp":"2024-11-13T08:49:37Z","content_type":"text/html","content_length":"9303","record_id":"<urn:uuid:ac97f214-b5f9-4a91-8df6-8534fd75c97b>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00616.warc.gz"}
Writing Feynman slash notation in LaTeX • LaTeX • Thread starter Wrichik Basu • Start date In summary, Feynman slash notation is a method of representing Dirac matrices and vectors in quantum field theory. It is commonly used in LaTeX documents for its compact and visually appealing format. To write Feynman slash notation in LaTeX, the slash package can be used with the \slashed command, which allows for customization of the slash direction, length, and color. Other packages such as feynmf, feynmp, and feyn also provide advanced features for drawing Feynman diagrams and using Feynman slash notation. Common mistakes to watch out for when writing Feynman slash notation in LaTeX include forgetting to enclose the variable or symbol in curly braces and ensuring proper alignment in the equation. I started reading QFT recently. With respect to the Dirac spinors, I was introduced to Feynman's slash notation: There are which say how this can be inserted in LaTeX documents. I checked out the LaTeX guide of PF, but found that it is not listed there. Writing ##\text{##\slashed{p}##} ## gives an error. How is this notation implemented here? Science Advisor Homework Helper BvU said: Thanks. I believe this should be added to the LaTeX guide. Science Advisor Homework Helper FAQ: Writing Feynman slash notation in LaTeX 1. What is Feynman slash notation in LaTeX? Feynman slash notation is a mathematical notation used to represent the Dirac matrices in quantum field theory. It is commonly used to write down equations involving spinors, such as in particle 2. How do I write Feynman slash notation in LaTeX? To write Feynman slash notation in LaTeX, you can use the command \slashed followed by the variable or symbol you want to put a slash through. For example, \slashed{p} will produce a slashed p. You can also use the package "slashed" to make the notation look more professional. 3. Can I customize the appearance of Feynman slash notation in LaTeX? Yes, you can customize the appearance of Feynman slash notation in LaTeX by using options such as color, size, and font. You can also use the \slashed command within other mathematical environments like equations or fractions. 4. Is Feynman slash notation necessary in all LaTeX documents? No, Feynman slash notation is only necessary if you are writing equations involving spinors, such as in quantum field theory or particle physics. If you are not working in these fields, you do not need to use Feynman slash notation in your LaTeX documents. 5. Are there any alternative notations for Feynman slash in LaTeX? Yes, there are some alternative packages that can be used to represent Feynman slash notation in LaTeX, such as "feyn" and "feynmf." However, the most commonly used and recommended way is to use the \slashed command and the "slashed" package.
{"url":"https://www.physicsforums.com/threads/writing-feynman-slash-notation-in-latex.973541/","timestamp":"2024-11-09T09:24:46Z","content_type":"text/html","content_length":"98451","record_id":"<urn:uuid:81753e91-f7fb-413b-afdb-af193699a1d4>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00509.warc.gz"}
template<typename T > std::vector< T > possibly_extend_from_1_to_N (const std::vector< T > &values, const unsigned int N, const std::string &id_text) DEAL_II_DEPRECATED parse_map_to_double_array (const std::string &key_value_map, const std::vector< std::string > &list_of_keys, const bool expects_background_field, const std::string & std::vector< double > property_name, const bool allow_multiple_values_per_key=false, const std::unique_ptr< std::vector< unsigned int >> &n_values_per_key=nullptr, const bool allow_missing_keys= template<typename T > Table< 2, T > parse_input_table (const std::string &input_string, const unsigned int n_rows, const unsigned int n_columns, const std::string &property_name) template<int dim> std::string > expand_dimensional_variable_names (const std::vector< std::string > &var_declarations) template<int dim> IndexSet extract_locally_active_dofs_with_component (const DoFHandler< dim > &dof_handler, const ComponentMask &component_mask) template<int dim> std::vector< Point< dim > > get_unit_support_points (const SimulatorAccess< dim > &simulator_access) template<int dim> bool point_is_in_triangulation (const Mapping< dim > &mapping, const parallel::distributed::Triangulation< dim > &triangulation, const Point< dim > &point, const MPI_Comm template<int dim> bool polygon_contains_point (const std::vector< Point< 2 >> &point_list, const ::Point< 2 > &point) template<int dim> double signed_distance_to_polygon (const std::vector< Point< 2 >> &point_list, const ::Point< 2 > &point) double distance_to_line (const std::array<::Point< 2 >, 2 > &point_list, const ::Point< 2 > &point) template<int dim> std::array< Tensor< 1, dim >, dim-1 > orthogonal_vectors (const Tensor< 1, dim > &v) Tensor< 2, 3 > rotation_matrix_from_axis (const Tensor< 1, 3 > &rotation_axis, const double rotation_angle) Tensor< 2, 3 > compute_rotation_matrix_for_slice (const Tensor< 1, 3 > &point_one, const Tensor< 1, 3 > &point_two) std::pair< double, double > real_spherical_harmonic (unsigned int l, unsigned int m, double theta, double phi) bool fexists (const std::string &filename) bool fexists (const std::string &filename, const MPI_Comm comm) bool filename_is_url (const std::string &filename) std::string read_and_distribute_file_content (const std::string &filename, const MPI_Comm comm) void collect_and_write_file_content (const std::string &filename, const std::string &file_content, const MPI_Comm comm) int mkdirp (std::string pathname, const mode_t mode=0755) void create_directory (const std::string &pathname, const MPI_Comm comm, const bool silent) void extract_composition_values_at_q_point (const std::vector< std::vector< double >> &composition_values, const unsigned int q, std::vector< double > & std::string expand_ASPECT_SOURCE_DIR (const std::string &location) std::string parenthesize_if_nonempty (const std::string &s) bool string_to_bool (const std::string &s) std::vector< bool > string_to_bool (const std::vector< std::string > &s) unsigned int string_to_unsigned_int (const std::string &s) std::vector< unsigned int > string_to_unsigned_int (const std::vector< std::string > &s) bool has_unique_entries (const std::vector< std::string > &strings) double weighted_p_norm_average (const std::vector< double > &weights, const std::vector< double > &values, const double p) template<typename T > T derivative_of_weighted_p_norm_average (const double averaged_parameter, const std::vector< double > &weights, const std::vector< double > &values, const std::vector< T > & derivatives, const double p) template<int dim> double compute_spd_factor (const double eta, const SymmetricTensor< 2, dim > &strain_rate, const SymmetricTensor< 2, dim > &dviscosities_dstrain_rate, const double SPD_safety_factor) template<int dim> Point< dim > convert_array_to_point (const std::array< double, dim > &array) template<int dim> std::array< double, dim > convert_point_to_array (const Point< dim > &point) std::vector< Operator > create_model_operator_list (const std::vector< std::string > &operator_names) const std::string get_model_operator_options () template<int dim> SymmetricTensor< 2, dim > nth_basis_for_symmetric_tensors (const unsigned int k) template<int dim, typename VectorType > void project_cellwise (const Mapping< dim > &mapping, const DoFHandler< dim > &dof_handler, const unsigned int component_index, const Quadrature< dim > &quadrature, const std::function< void(const typename DoFHandler< dim >::active_cell_iterator &, const std::vector< Point< dim >> &, std::vector< double > &)> &function, VectorType &vec_result) void throw_linear_solver_failure_exception (const std::string &solver_name, const std::string &function_name, const std::vector< SolverControl > &solver_controls, const std::exception &exc, const MPI_Comm mpi_communicator, const std::string &output_filename="") template<typename T > std::size_t > compute_sorting_permutation (const std::vector< T > &vector) template<typename T > std::vector< T > apply_permutation (const std::vector< T > &vector, const std::vector< std::size_t > &permutation_vector) std::vector< Tensor< 2, rotation_matrices_random_draw_volume_weighting (const std::vector< double > &volume_fractions, const std::vector< Tensor< 2, 3 >> &rotation_matrices, const unsigned int 3 > > n_output_matrices, std::mt19937 &random_number_generator) double wrap_angle (const double angle) std::array< double, 3 > zxz_euler_angles_from_rotation_matrix (const Tensor< 2, 3 > &rotation_matrix) Tensor< 2, 3 > zxz_euler_angles_to_rotation_matrix (const double phi1, const double theta, const double phi2) A namespace for utility functions that might be used in many different places to prevent code duplication. std::pair<double,double> aspect::Utilities::real_spherical_harmonic ( unsigned int l, unsigned int m, double theta, double phi A function for evaluating real spherical harmonics. It takes the degree (l) and the order (m) of the spherical harmonic, where \(l \geq 0\) and \(0 \leq m \leq l\). It also takes the colatitude (theta) and longitude (phi), which are in radians. There are an unfortunate number of normalization conventions in existence for spherical harmonics. Here we use fully normalized spherical harmonics including the Condon-Shortley phase. This corresponds to the definitions given in equations B.72 and B.99-B.102 in Dahlen and Tromp (1998, ISBN: 9780691001241). The functional form of the real spherical harmonic is given by \[ Y_{lm}(\theta, \phi) = \sqrt{2} X_{l \left| m \right| }(\theta) \cos m \phi \qquad \mathrm{if} \qquad -l \le m < 0 \] \[ Y_{lm}(\theta, \phi) = X_{l 0 }(\theta) \qquad \mathrm{if} \qquad m = 0 \] \[ Y_{lm}(\theta, \phi) = \sqrt{2} X_{lm}(\theta) \sin m \phi \qquad \mathrm{if} \qquad 0< m \le m \] where \(X_{lm}( \theta )\) is an associated Legendre function. In practice it is often convenient to compute the sine ( \(-l \le m < 0\)) and cosine ( \(0 < m \le l\)) variants of the real spherical harmonic at the same time. That is the approach taken here, where we return a pair of numbers, the first corresponding the cosine part and the second corresponding to the sine part. Given this, it is no longer necessary to distinguish between positive and negative \(m\), so this function only accepts \( m \ge 0\). For \(m = 0\), there is only one part, which is stored in the first entry of the pair. This function uses the Boost spherical harmonics implementation internally, which is not designed for very high order (> 100) spherical harmonics computation. If you use spherical harmonics of a high order be sure to confirm the accuracy first. For more information, see: http://www.boost.org/doc/libs/1_49_0/libs/math/doc/sf_and_dist/html/math_toolkit/special/sf_poly/sph_harm.html Referenced by aspect::Utilities::MapParsing::Options::Options(). template<int dim> double aspect::Utilities::compute_spd_factor ( const double eta, const SymmetricTensor< 2, dim > & strain_rate, const SymmetricTensor< 2, dim > & dviscosities_dstrain_rate, const double SPD_safety_factor This function computes a factor which can be used to make sure that the Jacobian remains positive definite. The goal of this function is to find a factor \(\alpha\) so that \(2\eta(\varepsilon(\mathbf u)) I \otimes I + \alpha\left[a \otimes b + b \otimes a\right]\) remains a positive definite rank-4 tensor (i.e., a positive definite operator mapping rank-2 tensors to rank-2 tensors). By definition, the whole operator is symmetric. In the definition above, \(a=\varepsilon(\mathbf u)\) is the strain_rate and \(b=\frac{\partial\eta(\varepsilon(\mathbf u),p)}{\partial \varepsilon}\) is the derivative of the viscosity with respect to the strain rate and is given by dviscosities_dstrain_rate. Since the viscosity \(\eta\) must be positive, there is always a value of \(\alpha\) (possibly small) so that the result is a positive definite operator. In the best case, we want to choose \(\alpha=1\) because that corresponds to the full Newton step, and so the function never returns anything larger than one. One can do some algebra to determine what the optimal factor is. We did this in the Newton paper (Fraters et al., Geophysical Journal International, 2019) where we derived a factor of \(\frac{2\eta(\ varepsilon(\mathbf u))}{\left[1-\frac{b:a}{\|a\| \|b\|} \right]^2\|a\|\|b\|}\), which we reset to a maximum of one, and if it is smaller then one, a safety_factor scales the value to make sure that 1-alpha won't get to close to zero. However, as later pointed out by Yimin Jin, the computation is wrong, see https://github.com/geodynamics/aspect/issues/5555. Instead, the function now computes the factor as \((2 \eta) / (a:b + b:a)\), again capped at a maximal value of 1, and using a safety factor from below. In practice, \(a\) and \(b\) are almost always parallel to each other, and \(a:b + b:a = 2a:b\), in which case one can drop the factor of \(2\) everywhere in the computations.
{"url":"https://aspect.geodynamics.org/doc/doxygen/namespaceaspect_1_1Utilities.html","timestamp":"2024-11-08T10:37:51Z","content_type":"application/xhtml+xml","content_length":"108313","record_id":"<urn:uuid:a2489192-943c-4d9a-8074-eda50e2c8a9a>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00466.warc.gz"}
Submitted by Atanu Chaudhuri on Sun, 28/08/2016 - 14:07 Difficult circle geometry problems with answers Solve difficult circle geometry problems in SSC CGL 39 within 15 minutes. Verify your solutions from answers. Learn how to solve quickly from solutions. Answers and link to solutions are at the end. Solve 10 difficult circle geometry and other geometry problems in SSC CGL Set 39 - answering time 15 mins Problem 1. In a rhombus ABCD, AB is produced to F and BA is produced to E such that AB=AE=BF. Then, a. $ED \bot CF$ b. $ED \gt CF$ c. $ED^2 + CF^2=EF^2$ d. $ED || CF$ Problem 2. In a quadrilateral ABCD with unequal sides if the diagonals AC and BD intersect at right angles then, a. $AB^2 + BC^2 = 2(CD^2 + DA^2)$ b. $AB^2 + BC^2 = CD^2 + DA^2$ c. $AB^2 + CD^2 = BC^2 + DA^2$ d. $AB^2 + AD^2 = BC^2 + CD^2$ Problem 3. Two chords AC and BD of a circle with centre at O intersect at right angles at E. If $\angle OAB = 25^0$, then $\angle EBC$ is, a. $15^0$ b. $20^0$ c. $25^0$ d. $30^0$ Problem 4. In a circle of radius 21 cm, an arc subtend an angle of $72^0$ at the centre. The length of the arc is, a. 26.4 cm b. 19.8 cm c. 21.6 cm d. 13.2 cm Problem 5. A circle with centre at O touches two intersecting lines AX and BY. The two points of contact A and B subtend and angle of $65^0$ at any point C on the major arc of the circle. If P is the point of intersection of the two lines, then the measure of $\angle APO$ is, a. $65^0$ b. $25^0$ c. $90^0$ d. $40^0$ Problem 6. Chords AB and CD of a circle intersect at E and are perpendicular to each other. Segments AE, EB and ED are of lengths 2 cm, 6 cm and 3 cm respectively. Then the length of the diameter is, a. $\sqrt{65}$ cm b. $65$ cm c. $\displaystyle\frac{65}{2}$ cm d. $\frac{1}{2}\sqrt{65}$ cm Problem 7. I and O are the incentre and circumcentre of $\triangle ABC$ respectively. The line AI produced intersects the circumcircle at point D. If $\angle ABC=x^0$, $\angle BID = y^0$ and $\angle BOD = z^0$, then $\displaystyle\frac{z+x}{y}$ is, a. 1 b. 2 c. 3 d. 4 Problem 8. The radius of two concentric circles are 17 cm and 10 cm. A straight line ABCD intersects the larger circle at A and D and the smaller circle B and C. If BC = 12 cm, then the length of AD is, a. 24 cm b. 34 cm c. 20 cm d. 30 cm Problem 9. Two circles intersect at A and B. P is a point on produced BA. PT and PQ are tangents to the circles. The relation between PT and PQ is, a. $PT \gt PQ$ b. $PT=PQ$ c. $PT \lt PQ$ d. $PT=2PQ$ Problem 10. O is the circumcentre of triangle ABC. If $\angle BAC = 85^0$ and $\angle BCA = 55^0$, then $\angle OAC$ is, a. $40^0$ b. $60^0$ c. $50^0$ d. $80^0$ Answers to the difficult circle geometry and other geometry problems in SSC CGL Set 39 Problem 1. Option a: $ED \bot CF$. Problem 2. Option c: $AB^2 + CD^2 = BC^2 + DA^2$. Problem 3. Option c: $25^0$ Problem 4. Option a: 26.4 cm. Problem 5. Option b: $25^0$. Problem 6. Option a: $\sqrt{65}$ cm. Problem 7. Option b: 2. Problem 8. Option d: 30 cm. Problem 9. Option b: $PT=PQ$. Problem 10. Option c: $50^0$. To know how you can solve these problems quickly, go through the solutions at, SSC CGL level Solution Set 39, Geometry 7. Guided help on Geometry in Suresolv To get the best results out of the extensive range of articles of tutorials, questions and solutions on Geometry in Suresolv, follow the guide, Suresolv Geometry Reading and Practice Guide for SSC CHSL, SSC CGL, SSC CGL Tier II and Other Competitive exams. The guide list of articles includes ALL articles on Geometry and relevant topics in Suresolv and is up-to-date.
{"url":"https://suresolv.com/ssc-cgl/ssc-cgl-level-question-set-39-geometry-7","timestamp":"2024-11-04T05:00:37Z","content_type":"text/html","content_length":"33084","record_id":"<urn:uuid:73d98ef3-d28e-4d92-8333-dd02c3516bfc>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00003.warc.gz"}
iCloud documentation is crap I’ve been trying to make use of iCloud in an iOS app over the last couple of weeks, and it’s been absolutely infuriating. There’s a large volume of documentation, but it’s not very good, and it’s incomplete. So let’s run down the laundry list: Getting the entitlements to work is, in general, an incredible pain in the arse. There’s very little in the way of debugging tools, and the error messages you get from various libraries, tools and websites are obtuse to the point of being worse than useless. I was relatively lucky, in that I had only minor issues regarding randomly bogus provisioning profiles and so forth – nothing that couldn’t be fixed by the iOS SDK equivalent of “reinstall Windows and reboot”. But it’s just depressing that this kind of stuff still doesn’t work properly, after all this time and with tens of thousands – hundreds of thousands? – of active developers all reminding Apple how shit it is. It makes me miss working at Apple, where I could just disable code signing entirely on my development devices. I suppose I could still do so via some jailbreaking goodness, but that ultimately only defers the pain until it comes time to submit to the App Store. The Simulator iCloud doesn’t work in the simulator. Flat out doesn’t work. But it doesn’t fail with a meaningful error log or exception. Oh no. Nor is this complete lack of functionality documented anywhere that I saw. I’ve seen it mentioned since – almost exclusively on 3rd party walk-throughs that try to guide you around the landmines, such as this. Le sigh. Core Data vs iCloud Core Data supports iCloud intrinsically. Only not always. The docs tell you to specify NSPersistentStoreUbiquitousContentNameKey in the persistent store options, because “This option is required for ubiquitous content to function”. Only that’s a half-truth. It’s required for SQLite based stores to function. If you specify it for a binary store, Core Data will simply explode after you create your document, with the exception: -[NSBinaryObjectStore connection]: unrecognized selector sent to instance 0xfoo Nothing meaningful, no explanatory error message, no “oh, hey silly, you can’t use this data store with that attribute”. No, it just tries blindly and explodes. So I guess you’re supposed to just handle the iCloud stuff completely manually, as for a generic UIDocument. I really do just guess – there’s no documentation at all dealing with binary stores explicitly, and as noted what deals with generic Core Data + iCloud is just wrong. So thanks for nothing, docs. It’d also be nice if there were an example of how you’re supposed to do that – enable ubiquity for the store manually – since you’re creating a new UIManagedDocument at an explicit location, and then you’re, what, going to move it out from under itself into the ubiquitous documents folder? Or do you close the document right after creation, move it, and reopen it? How awkward, if so. It seems to me that this was not planned very well at all. It’s also frustrating that the sync semantics – transactional or whole-database – are coupled with the store format to begin with. I need whole-database granularity – it wouldn’t make any sense at all to merge random sets of transactions – but I could make use of the SQLite store for various things. It also evidently is the only really supported path, at least insofar as the documentation is concerned, so it would have saved me a lot of pain. For example, it’s a royal pain in the arse to even figure out how to use the binary store to begin with: Binary Stores You’d think it would be a pretty trivial exercise to change the persistent store type from the default, SQLite, to the binary format. Obviously you’d specify the format when creating the persistent store. Wait, no, apparently not. Hmm… to the interwebs! Ah, so it’s specified as part of the file type info in your project’s settings. Hmm.. wait, nope, not there anymore. Heck, let’s just add the NSPersistentStoreTypeKey anyway, see what happens. Nothing. Awesome. As it turns out, you have to subclass UIManagedDocument and override persistentStoreTypeForFileType:. I really don’t like any kind of object framework that relies on subclassing. Cocoa has generally been pretty good at avoiding this. It boggles my mind why UIManagedDocument‘s so poorly designed. And why it’s basically undocumented. Le god damn sigh. NSMetadataQuery is infamously flakey. I knew that before even using it, from the sheer volume of complaints I’ve seen about it over the years. So I was cautious – I only copy-pasted verbatim from the examples, not daring to change a thing, not even method names. And it doesn’t work out of the box, even from the code examples. #@%! Long story short, NSMetadataQueryDidUpdateNotification does not do what you might expect. Certainly not what I expected. I read the documentation as saying that it would be used to deliver results, and NSMetadataQueryDidFinishGatheringNotification would be posted when the full first pass had completed. That’s a very typical pattern used elsewhere in Apple’s APIs (including Spotlight, so one would assume NSMetadataQuery would work the same). But it doesn’t. Instead, all of the first full run’s results are buffered up and provided by NSMetadataQueryDidFinishGatheringNotification. NSMetadataQueryDidUpdateNotification is purely for subsequent, real-time changes. That’s also a valid design, albeit unusual, suboptimal and not bloody documented as such. It almost defeats the purpose of an asynchronous API if you delay returning any results until the end anyway. And it’s not even just Apple’s documentation that gets this wrong – even otherwise good tutorials won’t work as written, because they don’t even register for the NSMetadataQueryDidFinishGatheringNotification. Perhaps the behaviour changed at some point (hell, perhaps it’s different between the simulator and devices… hell’s bells…). In summary, I’m really quite annoyed at the state of the iOS SDK, both design and documentation. I really miss being able to look at the source for everything. Only in hindsight do I realise how much more bearable that made everyday coding. I also miss the more mature Cocoa frameworks, that might have some warts but are generally well designed and have stood the test of time. There was a lot of good work ignored when the iOS frameworks were Frankensteined into existence. 2 thoughts on “iCloud documentation is crap” 1. This is an extremely helpful bit of information. The explanation of NSMetadataQueryDidFinishGatheringNotification and NSMetadataQueryDidUpdateNotification made a huge difference in my wrestling match with iCloud in my app. Thank you for putting the effort into sharing this. □ You’re most welcome. Leave a Comment
{"url":"https://wadetregaskis.com/icloud-documentation-is-crap/","timestamp":"2024-11-12T18:35:16Z","content_type":"text/html","content_length":"83814","record_id":"<urn:uuid:3aba7367-54e5-414d-a09a-30a8e9a7f0ed>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00658.warc.gz"}
Standard Deviation & Variance Simply Explained No statistics student can avoid standard deviation and variance, even though they'd often like to .... (well, I guess most people would like to avoid stats in general—full sympathy for that ;)) Standard deviation and variance are an important part of your basic statistical toolbox, and you will see them in both descriptive and inferential statistics. So it might be a good idea to familiarize yourself with them! Both show how widely the data are scattered around the mean, but only the standard deviation can be used for interpretation and shows the typical deviation from the mean. In the following, you'll learn how to calculate and interpret these two parameters. Let's do this! What are standard deviation & variance? The standard deviation is the square root of the variance and thus its "daughter". Put differently, the variance is the mama of the standard deviation ;). Both describe or quantify the dispersion of values around the mean of a data set, and thus tell you how much subjects differ on the characteristic in question. They can only be applied to metric data—interval, ratio, or absolute scales. Standard deviation and variance are very sensitive to outliers—use caution when interpreting them, or use another measure of dispersion such as the interquartile range The big difference between the two is that the values are squared for the variance and in the original units for the standard deviation. Therefore, the variance is used as a mathematical bridge to calculate the standard deviation, which is much more user-friendly for concrete interpretation. In addition, the variance is the basis for further calculations, e.g. in regression or – as you may have guessed – in the analysis of variance. These measures of dispersion can be found in most empirical studies: Usually the standard deviation is reported as additional information to the mean. It looks like this: M (SD), where M is the mean and SD the acronym for standard deviation. An example could be: 5.14 Which area of statistics do they belong to? Standard deviation and variance belong to the world of descriptive statistics, but they can also be found in inferential statistics where they are called something else: At the population level, s (standard deviation) and s squared (variance) become sigma and sigma squared. But the principle remains the same—only the calculation is slightly different. What does the standard deviation tell us? The standard deviation describes or quantifies how widely values are typically scattered around the mean of a data set: This tells us how large a typical, representative deviation from the "average" is. In more practical terms: It tells us whether the test subjects have pretty similar values in a certain variable—or whether they differ a lot. Good to know: If the data is normally distributed, almost 70% of all values fall between one standard deviation below and one standard deviation above the mean. As described above, the variance should not be used for interpretation, but only as a bridge to calculate the standard deviation. Calculation of variance In order to get to the standard deviation, you need to calculate the variance first. The confusing part here is that you'll find slight variations of formulas in every textbook: Some only divide by n, some only by n – 1, and some do both. So you better check what's being done at your university. Regarding the calculations, the following formulas are essentially the same: The mean is subtracted from each value, then the result is squared. You add all these squared values up and divide them either by the sample size (n) or the sample size minus 1 (n – 1). The reason for dividing by "n – 1" is that you get closer to the "true" variance in the population, i.e. you can estimate it better. If you only divide by n, you underestimate the true variance. However, if you already have all the people in your sample that make up your population (sample = population), then there's no need for any estimation or inference to the true variance. In this case, you simply divide by n. Now, let's get into it: You use this formula when you want to estimate the true variance in the population using the data from the sample—this is called the "sample variance": If you just want to calculate the variance in your specific sample without making any inferences about the population, meaning you use it only as a descriptive statistic, you use this formula: In the unlikely event that you've gathered data from every single person of the population you're interested in, you use this formula to calculate the "population variance". Note that in this formula, you use sigma squared instead of s squared. That's because you use Greek letters for inferential statistics and Roman letters for descriptive statistics. Our example: Speed dating Let's say you're doing a study on self-confidence at speed dating events, conducted among adults over the age of 18. Self-confidence is scaled from 0 (none at all) to 30 (likes to indulge in fantasies of irresistible attractiveness). Here are the data: 1, 20, 26, 14, 9, 6, 19, 22 n = 8 This is how you do it: 1. Calculate the mean 2. Subtract the mean from each value and square the result 3. Then add up all the squared values (this is the sum of squares, or SS) 4. Finally, divide by n – 1 (or by n) The mean value of our data is 14.63. We now insert the mean plus the above values neatly into the formula. We'll start with "divided by n – 1" which is 8 – 1 = 7: The variance equals 74.84 which is quite large for this small data set and not suitable for interpretation due to the squared values. And now the version "divided by n": This time, the variance equals 65.48. As you can see, this doesn't really help us much when it comes to the interpretation of self-confidence on a scale of 0 – 30 ... So we'll move straight on to the standard deviation! Calculation of standard deviation Once you have calculated the variance, the lion's share is already done. Now all that remains is to take the square root of the variance: This is how you do it: 1. Calculate the variance 2. Take the square root In our example of self-confidence at speed dating events, the result is as follows – above the version with divided by n – 1, below divided by n: what does this tell us? In terms of a relatively manageable scale of 0 to 30, these scores are quite high, meaning that the subjects varied quite widely in their self-confidence—some had very low and some very high So it's not really a homogeneous sample. To summarize: On average, the participants had a self-confidence score of about 15 (14.63), which is right in the middle of the scale. Typically the scores ranged between 7 and 23. How did I get those numbers 7 and 23? If the standard deviation is about 8 and the mean about 15, I subtract and add one standard deviation from and to the mean. This gives me the range where the values typically lie: 15 – 8 = 7 and 15 + 8 = 23 One thing to keep in mind when interpreting the data is the scale on which the characteristic of interest was measured. It's always important to put the size of the standard deviation in relation to the range of the scale! For example, a standard deviation of 2.2 is quite high on a scale of 0 – 5, and would be very low on a scale of 1 – 100. Finally, a quick summary: Summary standard deviation & variance ready to implement? Is your motivation high enough to apply what you've just read? Then grab a small data set and start calculating like there's no tomorrow ... And don't forget to reward yourself regularly! The fun factor of statistics is usually very limited. That's why you should really enjoy yourself during and after learning. For example, like this: Aron, A., Coups, E. J., & Aron, E. N. (2013). Statistics for Psychology (International ed). Pearson. Field, A. (2018). Discovering Statistics using IBM SPSS Statistics. SAGE. Howell, D. C. (2007). Statistical Methods for Psychology. Thomson Wadsworth. {"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
{"url":"https://statisticssimplyexplained.com/standard-deviation-variance-simply-explained/","timestamp":"2024-11-07T17:19:26Z","content_type":"text/html","content_length":"547206","record_id":"<urn:uuid:01122a10-9973-4d84-8e22-2ec3ef41068b>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00109.warc.gz"}
ISTC-CC Abstract Parallel Probabilistic Tree Embeddings, k-Median, and Buy-at-Bulk Network Design Proceedings of the 24th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA'12), June 2012. Guy Blelloch, Anupam Gupta, and Kanat Tangwongsan Carnegie Mellon University This paper presents parallel algorithms for embedding an arbitrary n-point metric space into a distribution of dominating trees with O(log n) expected stretch. Such embedding has proved useful in the design of many approximation algorithms in the sequential setting. We give a parallel algorithm that runs in O(n^2 log n) work and O(log^2 n) depth—these bounds are independent of Δ = max[x,y] d (x,y) / min[x≠y] d(x,y), the ratio of the largest to smallest distance. Moreover, when Δ is exponentially bounded (Δ ≤ 2^O(n)), our algorithm can be improved to O(n^2) work and O(log^2 n) depth. Using these results, we give an RNC O(log k)-approximation algorithm for k-median and an RNC O(log n)-approximation for buy-at-bulk network design. The k-median algorithm is the first RNC algorithm with non-trivial guarantees for arbitrary values of k, and the buy-at-bulk result is the first parallel algorithm for the problem. KEYWORDS: Parallel algorithms, probabilistic tree embedding, kmedian, buy-at-bulk network design FULL PAPER: pdf
{"url":"https://istc-cc.cmu.edu/publications/papers/2012/spaa12-frt_abs.shtml","timestamp":"2024-11-06T14:08:23Z","content_type":"application/xhtml+xml","content_length":"6777","record_id":"<urn:uuid:db8ddda1-786b-4c28-93f1-6ce9afb673d5>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00054.warc.gz"}
partial derivative application examples In other words, we want to compute \(g'\left( a \right)\) and since this is a function of a single variable we already know how to do that. Refer to the above examples. Differentiation. Weâ ll start by looking at the case of holding \(y\) fixed and allowing \(x\) to vary. There is one final topic that we need to take a quick look at in this section, implicit differentiation. Solution: Given function: f (x,y) = 3x + 4y To find â f/â x, keep y as constant and differentiate the function: Therefore, â f/â x = 3 Similarly, to find â f/â y, keep x as constant and differentiate the function: Therefore, â f/â y = 4 Example 2: Find the partial derivative of f(x,y) = x2y + sin x + cos y. In this case we treat all \(x\)â s as constants and so the first term involves only \(x\)â s and so will differentiate to zero, just as the third term will. We also canâ t forget about the quotient rule. To evaluate this partial derivative atthe point (x,y)=(1,2), we just substitute the respective values forx and y:â fâ x(1,2)=2(23)(1)=16. The function f can be reinterpreted as a family of functions of one variable indexed by the other variables: z= f(x;y) = ln 3 p 2 x2 3xy + 3cos (2 + 3 y) 3 + 18 2 Find f x(x;y), f y(x;y), f(3; 2), f x(3; 2), f y(3; 2) For w= f(x;y;z) there are three partial derivatives f x(x;y;z), f y(x;y;z), f z(x;y;z) Example. Thus, the only thing to do is take the derivative of the x^2 factor (which is where that 2x came from). Partial derivatives are the basic operation of multivariable calculus. For the fractional notation for the partial derivative notice the difference between the partial derivative and the ordinary derivative from single variable calculus. The partial derivative notation is used to specify the derivative of a function of more than one variable with respect to one of its variables. In this last part we are just going to do a somewhat messy chain rule problem. Two goods are said to be substitute goods if an increase in the demand for either result in a decrease for the other. ... your example doesn't make sense. Example of Complementary goods are mobile phones and phone lines. 1. Hopefully you will agree that as long as we can remember to treat the other variables as constants these work in exactly the same manner that derivatives of functions of one variable do. The final step is to solve for \(\frac{{dy}}{{dx}} \). We will now hold \(x\) fixed and allow \(y\) to vary. We will spend a significant amount of time finding relative and absolute extrema of functions of multiple variables. We went ahead and put the derivative back into the â originalâ form just so we could say that we did. Newton's Method; 4. endobj ... For a function with the variable x and several further variables the partial derivative to x is noted as follows. In this case all \(x\)â s and \(z\)â s will be treated as constants. Since we can think of the two partial derivatives above as derivatives of single variable functions it shouldnâ t be too surprising that the definition of each is very similar to the definition of the derivative for single variable functions. Before taking the derivative letâ s rewrite the function a little to help us with the differentiation process. What is the partial derivative, how do you compute it, and what does it mean? Concavityâ s connection to the second derivative gives us another test; the Second Derivative Test. Also, the \(y\)â s in that term will be treated as multiplicative constants. The product rule will work the same way here as it does with functions of one variable. 2. In this section we are going to concentrate exclusively on only changing one of the variables at a time, while the remaining variable(s) are held fixed. Here are the derivatives for these two cases. It will work the same way. 8 0 obj Just as with functions of one variable we can have derivatives of all orders. Solution: The partial derivatives change, so the derivative becomesâ fâ x(2,3)=4â fâ y(2,3)=6Df(2,3)=[46].The equation for the tangent plane, i.e., the linear approximation, becomesz=L(x,y)=f(2,3)+â fâ x(2,3)(xâ 2)+â fâ y(2,3)(yâ 3)=13+4(xâ 2)+6(yâ 3) This video explains how to determine the first order partial derivatives of a production function. For the fractional notation for the partial derivative notice the difference between the partial derivative and the ordinary derivative from single variable calculus. We will now look at finding partial derivatives for more complex functions. Ontario Tech University is the brand name used to refer to the University of Ontario Institute of Technology. Solution: Now, find out fx first keeping y as constant fx = â f/â x = (2x) y + cos x + 0 = 2xy + cos x When we keep y as constant cos y becomes a conâ ¦ Before we work any examples letâ s get the formal definition of the partial derivative out of the way as well as some alternate notation. For the same f, calculate â fâ x (1,2).Solution: From example 1, we know that â fâ x(x,y)=2y3x. Given the function \(z = f\left( {x,y} \right)\) the following are all equivalent notations. If looked at the point (2,3), what changes? This one will be slightly easier than the first one. Weâ ll do the same thing for this function as we did in the previous part. 12 0 obj If you recall the Calculus I definition of the limit these should look familiar as they are very close to the Calculus I definition with a (possibly) obvious change. Related Rates; 3. Gummy bears Gummy bears. Because we are going to only allow one of the variables to change taking the derivative will now become a fairly simple process. Now, letâ s do it the other way. Itâ s a constant and we know that constants always differentiate to zero. The first step is to differentiate both sides with respect to \(x\). Partial derivatives are computed similarly to the two variable case. talk about a derivative; instead, we talk about a derivative with respect to avariable. Example 1: Determine the partial derivative of the function: f (x,y) = 3x + 4y. We will just need to be careful to remember which variable we are differentiating with respect to. Now, this is a function of a single variable and at this point all that we are asking is to determine the rate of change of \(g\left( x \right)\) at \(x = a\). For example, the derivative of f with respect to x is denoted fx. >> This is important because we are going to treat all other variables as constants and then proceed with the derivative as if it was a function of a single variable. Here, a change in x is reflected in uâ in two ways: as an operand of the addition and as an operand of the square operator. Donâ t forget to do the chain rule on each of the trig functions and when we are differentiating the inside function on the cosine we will need to also use the product rule. Here is the rate of change of the function at \(\left( {a,b} \right)\) if we hold \(y\) fixed and allow \(x\) to vary. (Partial Derivatives) For example,w=xsin(y+ 3z). However, if you had a good background in Calculus I chain rule this shouldnâ t be all that difficult of a problem. We call this a partial derivative. In this manner we can ï¬ nd nth-order partial derivatives of a function. So, there are some examples of partial derivatives. Therefore, since \(x\)â s are considered to be constants for this derivative, the cosine in the front will also be thought of as a multiplicative constant. Notice that the second and the third term differentiate to zero in this case. Note that these two partial derivatives are sometimes called the first order partial derivatives. The second derivative test; 4. From that standpoint, they have many of the same applications as total derivatives in single-variable calculus: directional derivatives, linear approximations, Taylor polynomials, local extrema, computation of total derivatives via chain rule, etc. Concavity and inflection points; 5. Here is the derivative with respect to \(y\). 5 0 obj Here is the derivative with respect to \(y\). If you can remember this youâ ll find that doing partial derivatives are not much more difficult that doing derivatives of functions of a single variable as we did in Calculus I. Letâ s first take the derivative with respect to \(x\) and remember that as we do so all the \(y \)â s will be treated as constants. We will see an easier way to do implicit differentiation in a later section. This is an important interpretation of derivatives and we are not going to want to lose it with functions of more than one variable. Now, letâ s differentiate with respect to \(y\). 2000 Simcoe Street North Oshawa, Ontario L1G 0C5 Canada. the PARTIAL DERIVATIVE. We will call \(g'\ left( a \right)\) the partial derivative of \(f\left( {x,y} \right)\) with respect to \(x\) at \(\left( {a,b} \right)\) and we will denote it in the following way. Google Classroom Facebook Twitter. Since we are interested in the rate of change of the function at \(\left( {a,b} \right)\) and are holding \(y\) fixed this means that we are going to always have \(y = b\) (if we didnâ t have this then eventually \(y\) would have to change in order to get to the pointâ ¦). A function f(x,y) of two variables has two ï¬ rst order partials â f â x, â f â y. (First Order Partial Derivatives) Now, as this quick example has shown taking derivatives of functions of more than one variable is done in pretty much the same manner as taking derivatives of a single variable. With functions of a single variable we could denote the derivative with a single prime. This is the currently selected item. Since we are treating y as a constant, sin(y) also counts as a constant. Linear Least Squares Fitting. To calculate the derivative of this function, we have to calculate partial derivative with respect to x of uâ (x, uâ ). We will be looking at higher order derivatives in a later section. Second partial derivatives. Differentiation is the action of computing a derivative. stream Doing this will give us a function involving only \(x\)â s and we can define a new function as follows. We will be looking at the chain rule for some more complicated expressions for multivariable functions in a later section. Notice as well that it will be completely possible for the function to be changing differently depending on how we allow one or more of the variables to change. However, the First Derivative Test has wider application. Do not forget the chain rule for functions of one variable. << /S /GoTo /D [14 0 R /Fit ] >> PARTIAL DERIVATIVES 379 The plane through (1,1,1) and parallel to the Jtz-plane is y = l. The slope of the tangent line to the resulting curve is dzldx = 6x = 6. Question 1: Determine the partial derivative of a function f x and f y: if f(x, y) is given by f(x, y) = tan(xy) + sin x. Now letâ s solve for \(\frac{{\partial z}}{{\partial x}}\). Thereâ s quite a bit of work to these. In the case of the derivative with respect to \(v\) recall that \(u\)â s are constant and so when we differentiate the numerator we will get zero! Partial Derivatives Examples 3. For example Partial derivative is used in marginal Demand to obtain condition for determining whether two goods are substitute or complementary. We will deal with allowing multiple variables to change in a later section. Now weâ ll do the same thing for \(\frac{{\partial z}}{{\partial y}}\) except this time weâ ll need to remember to add on a \(\frac{{\partial z}}{{\partial y}}\) whenever we differentiate a \(z\) from the chain rule. The partial derivative of f with respect to x is 2x sin(y). f(x;y;z) = p z2 + y x+ 2cos(3x 2y) Find f x(x;y;z), f y(x;y;z), f z(x;y;z), Here are the two derivatives. Note as well that we usually donâ t use the \(\left( {a,b} \right)\) notation for partial derivatives as that implies we are working with a specific point which we usually are not doing. Here are the two derivatives for this function. Free partial derivative calculator - partial differentiation solver step-by-step This website uses cookies to ensure you get the best experience. The Mean Value Theorem; 7 Integration. The partial derivative of z with respect to x measures the instanta-neous change in the function as x changes while HOLDING y constant. Finally, letâ s get the derivative with respect to \(z\). share | cite | improve this answer | follow | answered Sep 21 '15 at 17:26. 13 0 obj Learn more about livescript Then whenever we differentiate \(z\)â s with respect to \(x\) we will use the chain rule and add on a \(\frac{{\partial z}}{{\partial x}}\). 9 0 obj Partial derivative and gradient (articles) Introduction to partial derivatives. Letâ s start off this discussion with a fairly simple function. Theorem â 2f â xâ y and â f â yâ x are called mixed partial derivatives. If there is more demand for mobile phone, it will lead to more demand for phone line too. Use partial derivatives to find a linear fit for a given experimental data. << /S /GoTo /D (section.3) >> In this case both the cosine and the exponential contain \(x\)â s and so weâ ve really got a product of two functions involving \(x\)â s and so weâ ll need to product rule this up. Here is the rewrite as well as the derivative with respect to \(z\). the second derivative is negative when the function is concave down. Here is the derivative with respect to \(z\). One Bernard Baruch Way (55 Lexington Ave. at 24th St) New York, NY 10010 646-312-1000 Asymptotes and Other Things to Look For; 6 Applications of the Derivative. Sometimes the second derivative test helps us determine what type of extrema reside at a particular critical point. We will need to develop ways, and notations, for dealing with all of these cases. Partial Derivative Examples . << /S /GoTo /D (subsection.3.1) >> 16 0 obj << The plane through (1,1,1) and parallel to the yz-plane is x = 1. If you're behind a web filter, please make sure that the domains *.kastatic.org and *.kasandbox.org are unblocked. To compute \({f_x}\left( {x,y} \ right)\) all we need to do is treat all the \(y\)â s as constants (or numbers) and then differentiate the \(x\)â s as weâ ve always done. In practice you probably donâ t really need to do that. This means the third term will differentiate to zero since it contains only \(x\)â s while the \(x\)â s in the first term and the \(z\)â s in the second term will be treated as multiplicative constants. In other words, \(z = z\left( {x,y} \right)\). Letâ s do the partial derivative with respect to \(x\) first. endobj Definition of Partial Derivatives Let f(x,y) be a function with two variables. Product rule Example 1. Derivative of a â ¦ Similarly, we would hold x constant if we wanted to evaluate the eâ ect of a change in y on z. endobj Remember how to differentiate natural logarithms. This is also the reason that the second term differentiated to zero. Letâ s do the derivatives with respect to \(x\) and \(y\) first. Likewise, to compute \({f_y}\left( {x,y} \right)\) we will treat all the \(x\)â s as constants and then differentiate the \(y\)â s as we are used to doing. Remember that since we are assuming \(z = z\left( {x,y} \right)\) then any product of \(x\)â s and \(z\)â s will be a product and so will need the product rule! Examples of the application of the product rule (open by selection) Here are some examples of applying the product rule. This function has two independent variables, x and y, so we will compute two partial derivatives, one with respect to each variable. /Length 2592 That means that terms that only involve \(y\)â s will be treated as constants and hence will differentiate to zero. 905.721.8668. The more standard notation is to just continue to use \(\left( {x,y} \right)\). Here is the derivative with respect to \(x\). We can do this in a similar way. The Combined Calculus tutorial videos. You appear to be on a device with a "narrow" screen width (, Derivatives of Exponential and Logarithm Functions, L'Hospital's Rule and Indeterminate Forms, Substitution Rule for Indefinite Integrals, Volumes of Solids of Revolution / Method of Rings, Volumes of Solids of Revolution/Method of Cylinders, Parametric Equations and Polar Coordinates, Gradient Vector, Tangent Planes and Normal Lines, Triple Integrals in Cylindrical Coordinates, Triple Integrals in Spherical Coordinates, Linear Homogeneous Differential Equations, Periodic Functions & Orthogonal Functions, Heat Equation with Non-Zero Temperature Boundaries, Absolute Value Equations and Inequalities, \(f\left( {x,y} \right) = {x^4} + 6\sqrt y - 10\), \(w = {x^2}y - 10{y^2}{z^3} + 43x - 7\tan \left( {4y} \right)\), \(\displaystyle h\left( {s,t} \right) = {t^7}\ln \left( {{s^2}} \right) + \frac{9}{{{t^3}}} - \sqrt[7]{{{s^4}}}\), \(\displaystyle f\ left( {x,y} \right) = \cos \left( {\frac{4}{x}} \right){{\bf{e}}^{{x^2}y - 5{y^3}}}\), \(\displaystyle z = \frac{{9u}}{{{u^2} + 5v}}\), \(\displaystyle g\left( {x,y,z} \right) = \frac{{x\sin \left( y \right)}}{{{z^2}}}\), \(z = \sqrt {{x^2} + \ln \left( {5x - 3{y^2}} \right)} \), \({x^3}{z^2} - 5x{y^5}z = {x^2} + {y^3}\), \({x^2}\sin \left( {2y - 5z} \right) = 1 + y\cos \left( {6zx} \right)\). %PDF-1.4 Remember that the key to this is to always think of \(y\) as a function of \(x\), or \(y = y\left( x \right)\) and so whenever we differentiate a term involving \(y\)â s with respect to \(x \) we will really need to use the chain rule which will mean that we will add on a \(\frac{{dy}}{{dx}}\) to that term. However, at this point weâ re treating all the \(y\)â s as constants and so the chain rule will continue to work as it did back in Calculus I. In this case we do have a quotient, however, since the \(x\)â s and \(y\)â s only appear in the numerator and the \(z\)â s only appear in the denominator this really isnâ t a quotient rule problem. Now, the fact that weâ re using \(s\) and \(t\) here instead of the â standardâ \(x\) and \(y\) shouldnâ t be a problem. Here are the formal definitions of the two partial derivatives we looked at above. /Filter /FlateDecode Now, letâ s take the derivative with respect to \(y\). partial derivative coding in matlab . By using this website, you agree to our Cookie Policy. 1. If we have a function in terms of three variables \(x\), \(y\), and \(z\) we will assume that \(z\) is in fact a function of \(x\) and \(y\). endobj Now letâ s take care of \(\frac{{\partial z}}{{\partial y}}\). Free derivative applications calculator - find derivative application solutions step-by-step This website uses cookies to ensure you get the best experience. In other words, what do we do if we only want one of the variables to change, or if we want more than one of them to change? With this function weâ ve got three first order derivatives to compute. In this case we call \(h'\left( b \right)\) the partial derivative of \(f\left( {x,y} \right)\) with respect to \(y\) at \(\left( {a,b} \right)\) and we denote it as follows. Now, we canâ t forget the product rule with derivatives. In this case we donâ t have a product rule to worry about since the only place that the \(y\) shows up is in the exponential. The gradient. Now that we have the brief discussion on limits out of the way we can proceed into taking derivatives of functions of more than one variable. Linear Approximations; 5. In fact, if weâ re going to allow more than one of the variables to change there are then going to be an infinite amount of ways for them to change. For instance, one variable could be changing faster than the other variable(s) in the function. With this one weâ ll not put in the detail of the first two. Letâ s start with the function \(f\left( {x,y} \right) = 2{x^2}{y^3}\) and letâ s determine the rate at which the function is changing at a point, \(\left( {a,b} \right)\), if we hold \(y\) fixed and allow \(x\) to vary and if we hold \(x\) fixed and allow \(y\) to vary. In both these cases the \(z\)â s are constants and so the denominator in this is a constant and so we donâ t really need to worry too much about it. So, if you can do Calculus I derivatives you shouldnâ t have too much difficulty in doing basic partial derivatives. Now letâ s take a quick look at some of the possible alternate notations for partial derivatives. Here is the partial derivative with respect to \(y\). It should be clear why the third term differentiated to zero. The problem with functions of more than one variable is that there is more than one variable. We have just looked at some examples of determining partial derivatives of a function from the Partial Derivatives Examples 1 and Partial Derivatives Examples 2 page. Optimization; 2. This first term contains both \(x\)â s and \(y\)â s and so when we differentiate with respect to \(x\) the \(y\) will be thought of as a multiplicative constant and so the first term will be differentiated just as the third term will be differentiated. 3 Partial Derivatives 3.1 First Order Partial Derivatives A function f(x) of one variable has a ï¬ rst order derivative denoted by f0(x) or df dx = lim hâ 0 f(x+h)â f(x) h. It calculates the slope of the tangent line of the function f at x. The first derivative test; 3. Now, in the case of differentiation with respect to \(z\) we can avoid the quotient rule with a quick rewrite of the function. Remember that since we are differentiating with respect to \(x\) here we are going to treat all \(y\)â s as constants. Two examples; 2. Since there isnâ t too much to this one, we will simply give the derivatives. The partial derivative with respect to \(x\) is. We first will differentiate both sides with respect to \(x\) and remember to add on a \(\frac{{\partial z}} {{\partial x}}\) whenever we differentiate a \(z\) from the chain rule. ��J���� 䀠l��\��p��ӯ��1_\_��i�F�w��y�Ua�fR[[\�~_�E%�4�%�z�_.DY��r�����ߒ�~^XU��4T�lv��ߦ-4S�Jڂ��9�mF��v�o"�Hq2{�Ö���64�M [�l�6����Uq�g&��@��F���IY0��H2am��Ĥ.�ޯo�� �X���>d. Examples of how to use â partial derivativeâ in a sentence from the Cambridge Dictionary Labs We will find the equation of tangent planes to surfaces and we will revisit on of the more important applications of derivatives from earlier Calculus classes. Partial derivative notation: if z= f(x;y) then f x= @f @x = @z @x = @ xf= @ xz; f y = @f @y = @z @y = @ yf= @ yz Example. Now, we do need to be careful however to not use the quotient rule when it doesnâ t need to be used. Since uâ has two parameters, partial derivatives come into play. Note that the notation for partial derivatives is different than that for derivatives of functions of a single variable. x��ZKs����W 7�bL���k�����8e�l` �XK� Likewise, whenever we differentiate \(z\)â s with respect to \(y\) we will add on a \(\frac{{\partial z}}{{\partial y}}\). Before we actually start taking derivatives of functions of more than one variable letâ s recall an important interpretation of derivatives of functions of one variable. When working these examples always keep in mind that we need to pay very close attention to which variable we are differentiating with respect to. However, with partial derivatives we will always need to remember the variable that we are differentiating with respect to and so we will subscript the variable that we differentiated with respect to. Since only one of the terms involve \(z\)â s this will be the only non-zero term in the derivative. â xâ y2, which is taking the derivative of f ï¬ rst with respect to y twice, and then diï¬ erentiating with respect to x, etc. Solution: Given function is f(x, y) = tan(xy) + sin x. Two ï¬ rst order partials â f â x, â f â y x is noted as follows of two variables two! That only involve \ ( x\ ) do the same way here as it does with functions multiple! So, there are some examples of how to differentiate both sides with respect to avariable either result in sentence! A decrease for the fractional notation for partial derivatives looked at above xy ) + sin x fractional.  S this will be slightly easier than the other manner with functions of more than one could. Rule ( open by selection ) here are some of the derivative will now hold (... Explains how to use \ ( \frac { { \partial x } } { { \partial z } } ). Just so we could say that we need to be careful however to use. Clear why the third term differentiate to zero in this last part we are to... Line too can define a new function as we did in the demand for either result in decrease! Denote the derivative with respect to \ ( y\ ) is more for... Used to refer to the University of Ontario Institute of Technology a linear for! Will spend a significant amount of time finding relative and absolute extrema of functions of one variable somewhat! At the point ( 2,3 ), what changes that only involve \ ( y\ ) â s will looking... Do not forget the chain rule for functions of a single prime careful to remember which we! Ϭ Rst order partials â f â x, â f â y little to help us with the differentiation process measures the instanta-neous in! Be clear why the third term differentiated to zero in this section, implicit differentiation problems phones and lines... An easier way to do is take the derivative will now look at some the... Fractional notation for partial derivatives alternate notation let f ( x, y ) deal allowing. For partial derivatives of a single variable we can ï¬ nd nth-order partial derivatives let f x! Need to be careful however to not use the quotient rule when it doesnâ t need to a. Looked at the case of HOLDING \ ( x\ ) to vary from will! A later section letâ s start with finding \ ( y\ ) â s that... With a single variable Applications of the application of partial derivatives are sometimes called the first step is differentiate! Have derivatives of functions of one variable start out by differentiating with respect to \ ( \frac {... Derivatives as well as some alternate notation for partial derivatives are the formal definitions of two... Be slightly easier than the first one not going to want to lose with... Continue to use \ ( y\ ) fixed and allowing \ ( )! Just going to do is take the derivative back into the â originalâ form just so we could say we... Domains *.kastatic.org and *.kasandbox.org are unblocked ) here are some of the x^2 factor ( is! For either result in a decrease for the other variable ( s in! Differentiation solver step-by-step this website, you agree to our Cookie Policy multivariable functions in a sentence the. And what does it mean formal definitions of the application of partial derivatives for more complex functions â s will. As the derivative with respect to \ ( y\ ) to vary some more expressions... The partial derivative and gradient ( articles ) Introduction to partial derivatives = tan ( xy ) sin. Way as well as the derivative with respect to \ ( y\ ) ahead and put derivative! Before we work any examples letâ s get the formal definition of partial we! Are going to only allow one of the product rule with derivatives brand! Formal definitions of the first order partial derivatives to improve edge detection algorithm is used which uses derivatives! Now, letâ s get the best experience experimental data of multiple variables letâ s start out by with... Video explains how to determine the first one term differentiated to zero in this we. Name used to refer to the University of Ontario Institute of Technology and gradient ( )... Than one variable here as it does with functions of one variable some of derivative... Some examples of how to determine the first two derivatives come into play given the a..., there are some examples of how to differentiate exponential functions because differentiation... Will work the same way here as it does with functions of one variable put in the function little... Selection ) here are the basic operation of multivariable calculus can do calculus I derivatives you shouldnâ t too. Of two variables has two parameters, partial derivatives of a problem an in! For this function weâ ve got three first order partial derivatives from above will commonly! Several further variables the partial derivative with respect to \ ( z\ ) had a good background calculus... Some examples of applying the product rule ( open by selection ) here are examples. More demand for phone line too follow | answered Sep 21 '15 at 17:26 be changing faster than the two... Since there isnâ t too much difficulty in doing basic partial derivatives give the derivatives with respect to \ \frac! Result in a decrease for the partial derivative with respect to \ ( ). Demand for mobile phone, it will lead to more demand for phone line too, for dealing all! The plane through ( 1,1,1 ) and \ ( z\ ) derivatives with respect to \ ( y\ â s. Just going to only allow one of the two partial derivatives let (. } \right ) \ ) now, we did in the detail the! The other variable ( s ) in the demand for either result in later. To our Cookie Policy notice the difference between the partial derivative notice difference! Mobile phones and phone lines it mean a linear fit for a given data... Later section shouldnâ t be all that difficult of a single prime works for functions one! Be all that difficult of a single variable we are just going to allow. Will shortly be seeing some alternate notation for partial derivatives for more complex functions (... 'Re behind a web filter, please make sure that the notation for derivatives... Donâ T forget how to use \ ( \frac { { \partial x } } { { \partial }. Us partial derivative application examples the variable x and several further variables the partial derivatives sometimes. Are substitute or Complementary a web filter, please make sure that the term... Changes while HOLDING y constant determine what type of extrema reside at a couple of implicit works! Uses partial derivatives are sometimes called the first step is to just continue to use (. Alternate notations for partial derivatives come into play for derivatives of a single variable will need! Processing edge detection = f ( x, y ) will simply give derivatives. Couple of implicit differentiation problems to solve for \ ( y\ ) University! The notation for partial derivatives for more complex functions ) of two.! Are said to be used 2x came from ) 2,3 ), what?... The other going to do implicit differentiation works for functions of a.. This is an important interpretation of derivatives and we can ï¬ nd nth-order partial derivatives from will... ) and \ ( y\ ) rule this shouldnâ t be all that difficult of function! Also the reason that the domains *.kastatic.org and *.kasandbox.org are unblocked does with functions one. Note that these two partial derivatives first remember how implicit differentiation for multiple variable functions letâ s first remember implicit! L1G 0C5 Canada ahead and put the derivative with a fairly simple process rule work. Of two variables much difficulty in doing basic partial derivatives to improve detection! *.kasandbox.org are unblocked increase in the derivative will now hold \ ( x\ ) first tan ( xy +. There are some of the partial derivative with respect to \ ( y\ ) â s in that term will looking... Itâ S a constant of partial derivatives is f ( x, y ) x^2! Is take the derivative back into the â originalâ form just so we could denote derivative. Two partial derivatives are computed similarly to the two variable case allowing \ x\! Website, you agree to our Cookie Policy do the same manner with functions of multiple variables weâ ve! X and several further variables the partial derivative, how do you compute,. Particular critical point example partial derivative calculator - partial differentiation solver step-by-step this partial derivative application examples uses cookies to ensure you the... Test ; the second derivative test helps us determine what type of extrema at! Donâ T really need to develop ways, and what does it mean } } { { \partial y \right! You had a good background in calculus I chain rule for functions of one variable 's the... Critical point develop ways, and what does it mean with all of these.! Be slightly easier than the first derivative test has wider application differentiation solver this... Clear why the third term differentiate to zero in this manner we can a. Faster than the other variable ( s ) in the function a little help. Function weâ ve got three first order derivatives to compute of these cases derivativeâ in a section... Is that there is one final topic that we need to do is take the derivative with fairly.... for a function with two variables f with respect to \ ( \left partial derivative application examples { x y.
{"url":"https://instantcare.online/flavored-half-lwtpp/archive.php?tag=partial-derivative-application-examples-6b877f","timestamp":"2024-11-05T00:20:10Z","content_type":"text/html","content_length":"43471","record_id":"<urn:uuid:41845ac8-f328-4d27-b3ff-403528138f20>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00213.warc.gz"}
Simcenter Testlab: The Versatile "Double X" Cursor Direct YouTube link: https://youtu.be/8TGd6BhawHg Simcenter Testlab (formerly called LMS Test.Lab), the Double X cursor is a wonderful and powerful cursor for data analysis. Double X: The Basics To add a Double X cursor to a display, right click and choose “Add Double Cursor -> X” (Hint: Hold down the CTRL key when right clicking on the display to always get the cursor menu) Picture 1: Right click in display (press CTRL if needed) and "Add Double Cursor -> X" On the Double Cursor, the left and right cursors can be moved (Picture 2): • Left Cursor: Moves both cursors together at the same time. The cursors stay fixed distance apart • Right Cursor: Moves only the right cursor, changes the distance between the two cursors Picture 2: Left cursor moves both cursors, right cursor changes width Set the cursor to the frequency range desired to perform an analysis. Double X: Max Need the maximum over a frequency/rpm range? Want to know the frequency/rpm the max occurs at? Picture 3: Using the Double X cursor to find the maximum value in a frequency range Right click on either the left or right of the Double Cursor and select “Calculations -> Max” (Picture 4). Picture 4: Right click on the double cursor and choose "Calculations -> Max". Maximum will be displayed in a cursor legend. The maximum appears in cursor legend, along with the frequency at which the maximum value occurs (Picture 5). Picture 5: The maximum value and it's corresponding frequency will be shown in the legend. Right click on the cursor legend and choose "Copy Values" to paste the maximum value into Excel. Double X: Frequency Calculator Looking at some time domain data and wondering what the frequency is present (Picture 6)? Picture 6: Damped sine wave time history. Right click on the Double X cursor and select “Calculations -> Freq (1/deltaT)” as in Picture 7. Picture 7: Determine the frequency of time data by selecting "Calculations -> Freq(1/T)" Double X: Linear Regression Trying to figure out your average acceleration from a RPM vs time trace (Picture 8)? Picture 8: RPM vs time data trace To determine the rate of speed increase (ie, the acceleration) from the RPM vs time, right click on the cursor and choose "Calculations -> Linear Regression" (Picture 9). Picture 9: To determine the rate of acceleration, right click and choose "Calculations -> Linear regression" The "Linear regression" function fits a line to the data between cursors. The slope and offset of this line are displayed in the legend. Right click on the legend and choose "Copy Values" to copy/paste the slope and offset to Excel. Double X: RMS and Color Want to know the RMS level over a specific frequency range? PIcture 10: Determining the RMS value of a frequency band within a spectrum Right click on the Double X cursor and choose “Calculations -> RMS” as shown in Picture 11. Picture 11: Right click on the Double X cursor and choose "Calculations -> RMS" Want to highlight the frequency range of interest with a little pizazz? Add a bit of color by right clicking on the Double X cursor and selecting “Properties”. Picture 12: Select a color for the Double X cursor Turn on “Fill with color:” (Picture 12) and select some beautiful colors (Picture 13)! Picture 13: Double X cursor with filled color Even mix colors with overlapping frequency ranges! Picture 14: Overlapping colors in dual Double X cursors Questions? Email peter.schaldenbrand@siemens.com or contact Siemens Support Center. More Simcenter Testlab Display Tips:
{"url":"https://support.sw.siemens.com/en-US/okba/KB000042753_EN_US/Simcenter-Testlab-The-Versatile-Double-X-Cursor/index.html","timestamp":"2024-11-04T08:59:47Z","content_type":"text/html","content_length":"25402","record_id":"<urn:uuid:685ebd07-9e04-42ca-a798-7aec701d2f38>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00018.warc.gz"}
Decimal point Slide rules HOME page INSTRUCTIONS A-to-Z Decimal point Approximate calculation Rules based on the position of the slide relative to the stock Rules based on the position of the answer Decimal tracking scales Scientific notation The principal scales on a slide rule are generally numbered 1, 2, 3 ... to 10, or 1, 2, 3 ... to 1, however the use of slide rules is not limited to this range of values. The number, say, 1.5 can also represent 15 or 150 or .15 etc.. When you multiply and/or divide by a series of numbers it is not always obvious where the decimal point should be. There four ways of estimating the position of the decimal point in the answer: • the simplest method is to perform an approximate simplified calculation, • rules based on position of the slide relative to the stock. • rules based on the position of the answer relative to the values. • The use decimal point keeping cursors. For each of these we look at three sample cases: multiplication, division and combined multiplication & division. In some cases the explanation is trivial but in others it it gets quite complicated. We also look at the use of scientific notation. Example: Multiplication To calculate 3.57 * 8.12. On the slide rule this would be calculated as 3.47 * 8.12 and the answer would be 2.82. The above numbers can be considered approximately as 4 * 8 which is 32, so we know the correct answer is 28.2 and not 2.82 or 282. Example: Division To calculate 405 / 0.71. On the slide rule this would be calculated as 4.05 / 7.1 and the answer would be 5.70. By approximation this could be considered as 400 / 1 which is 400, so we know the answer is 570. Example: Combined multiplication and division To calculate 3.57 * 100.5 * 0.12 / (27.9 * 83.1). Using the markings on the rule, this would be treated as:3.57 * 1.005 * 1.2 / (2.79 * 8.31). The answer on the slide rule would be "1.86" but the decimal point may or may not be correct. The above calculation can be approximated as: 4 * 100 * .1 / (30*80) = 40/2400 = 1/60 = .0167. In the above case, the answer would be 0.0186. This method has a number of advantages the other do not: - With a complex calculations you can repeat the check (e.g. with different approximations) without having to repeat the whole calculation, - If the answer of the check calculation is not at all the same as the first calculation it may indicate an error using the slide rule, - Accuracy is increased if you use the CI (inverse), CF and DF (folded) scales as less slide movement and index changing are involved. There are, as far as I know, no equivalents to the rules below for these types of scales. This is perhaps the most important reason of all for using this method. See also the section on "Scientific Notation" below. Rules based on the position of the slide relative to the stock Number of digits For this method and the following it is necessary to understand what is meant by the "number of digits". In the case of numbers greater than one it refers to the numbers of digits to the left of the decimal point. For example 1.23 has 1 digit and 250, which could written as 250.0, has three. In the case of numbers less than one the "number of digits is negative and refers to the number of zeros after the decimal point. For example 0.13 has zero digits and 0.0045 has -2 digits. Rule for multiplication If the rule is set with the slide protruding at the left-hand end of the stock, the number of digits in the answer is the sum of the digits of the factors. If the slide is protruding at the right-hand end, the number of digits in the product is one less than the sum of the digits of the factors. Rule for division If, when dividing, the slide protrudes at the left-hand end of the stock, the number of digits in the result is found by subtracting the number of digits in the divisor from the number in the dividend. If the slide protrudes to the right the number of digits in the result will be one greater that the difference between the numbers of digits in the dividend and divisor respectively. Rules for combined multiplication and division When it is possible to carry out two operations at one slide setting you may disregard the position of the slide, i.e. whether to the right hand or left hand of the stock, since if the digits have to be added or subtracted they will be equal and of opposite signs and consequently cancel each other out. It is only when the slide protrudes to the right and either multiplication or division is effected separately that the number of digits in the result is affected. Example: Multiplication To calculate 3.57 * 8.12. The slide protrudes to the left of the rule. The sum of the number of digits is 1 + 1 =2. By applying the above rule the number of digits in the answer must be 2, so the answer is 28.2. To calculate 3.57 * 12.0. The slide protrudes to the right. The sum of the digits is 1 + 2 = 3. By applying the above rule the number of digits in the answer must be 2, so the answer is 42.8. Example: Division To calculate 405 / 0.71. The slide protrudes to the left. The sum of the digits is 3 - 0 = 0. By applying the above rule the number of digits in the answer must be 3, so the answer is 507. To calculate .062 / 36.0. The slide protrudes to the right. The sum of the digits is -1 - 2 = -3. By applying the above rule number of digits in the answer must be 2, so the answer is .00172. Example: Combined multiplication and division To calculate 3.57 * 100.5 * 0.12 / (27.9 * 83.1). We will perform these in the order (3.57/27.9) * (100.5/83.1) * 0.12, that is alternating multiplication and division as is standard practice. A experienced user might vary the order to minimise the number of movements. The number of digits is 1 + 3 + 0 - 2 -2 = 0 │Calculation│Operation │Slide position │Digit adjustment│ │3.57 / 27.9│Division │Right │+1 │ │* 100.5 │Multiplication│No change - so cancel │-1 │ │/ 83.1 │Division │Left │0 │ │* .12 │Multiplication│Right │-1 │ │ │ │Balance │0 │ │ │ │Number of digits in products/factors │0 │ │ │ │Number of digits in answer │0 │ Let's try another example: 75.1 * 0.57 * 127.0 / (.003 * 821) but in the order of all the multiplications first and then all the divisions. │Calculation│Operation │Slide position │Digit adjustment │ │75.1 * 0.57│Multiplication│Left │0 │ │127.0 │Multiplication│Right │-1 │ │/0.003 │Division │Right - no change so cancel │+1 │ │/821 │Division │Left │0 │ │ │ │Balance │0 │ │ │ │Number of digits in products/factors │2 + 0 + 3 -(-2) - 3 = 4│ │ │ │Number of digits in answer │4 │ So the answer is 2210. Rules based on the position of the answer Rule for multiplication If the answer is to the left of the first factor the answer has the number of digits of the sum of the digits of the two factors. If the answer is to the right of the first factor the answer has the number of digits of the sum of the digits of the two factors minus 1. Rule for division If the answer is to the left of the first dividend the answer has the number of digits of the difference of the digits of the two numbers plus 1. If the answer is to the right of the dividend the answer has the number of digits of the difference of the digits. Example: Multiplication To calculate 3.57 * 8.12. The answer is to the left of the first factor. The sum of the number of digits is 1 + 1 =2. By applying the above rule the number of digits in the answer must be 2, so the answer is 28.2. To calculate 3.57 * 12.0. The answer is to the right of the first factor. The sum of the digits is 1 + 2 = 3. By applying the above rule the number of digits in the answer must be 2, so the answer is Example: Division To calculate 405 / 0.71. The answer is to the right of the dividend. The sum of the digits is 3 - 0 = 0. By applying the above rule the number of digits in the answer must be 3, so the answer is 507. To calculate .062 / 36.0. The answer is to the left of the dividend. The sum of the digits is -1 - 2 = -3. By applying the above rule number of digits in the answer must be 2, so the answer is Aids to use of the above As an aid to the above some early rules had the following marks on the left and right hand end of the rules. The Q+1 at the left hand signifies that if the answer (i.e. Quotient) is to the left, one has to be added to the difference; the P-1 signifies that if the answer (i.e. Product) is to the right then one has to be subtracted from the sum. The third symbol was the same at both ends of the Faber Castell also introduced a slide rule with a "digit registering cursor" to help the above method. This is shown below, the idea was to move the counter on the cursor to keep track of the movements. The rules were only produced for about 10 years in the early 20th century so it appears that the idea did not catch on. Some makers also produced special scales which covered several log cycles. The idea with these scales was that the calculation would be performed once with the special scales, to get the order of magnitude correct, and then with with C and D scales to get the answer to a higher degrees of accuracy. Whilst these scales were found on general purpose rules, Pickett 901 and 904 for example, they were also found on electronic rules where order of magnitudes could vary considerably and where the user would not have an intuitive feel for the scale of the answer. The first figure shows these scales for the Pickett Decipoint which not only reproduces the C and D scales but also the trig scales as well. Although these special scales were never very popular on standard rules they were more common on electronic rules, where the magnitude of the answer could vary greatly. The example below comes from a Graphoplex Electronicien, a French electronics rule. The scales highlighted above cover the following ranges: │Location │Variable │Range │No. of cycles│Notes │ │Upper part of stock│ │ │ │ │ │ │Frequency │1GHz to 0.1 Hz │10 │Used with capacitance scale │ │ │Resistance │0.01 W to 100 MW │10 │Used with capacitance scale │ │ │Frequency │0.1 Hz to 1GHz │10 │Used with inductance scale │ │Slide │Inductance │1000 H to 0.1 mH │10 │ │ │ │Capacitance│1 pF to 10 000mF │10 │ │ │Lower part of stock│ │ │ │ │ │ │Frequency │10 KHz to 500 MHz│4.5 │Effectively the root of the 10 cycle scales. │ │ │Frequency │1 Hz to 50 KHz │4.5 │Effectively the root of the 10 cycle scales. │ For more information on this and other electronics rules see the electronics section of this web site. In scientific notation every number is expressed as a a number between 1 and 10 followed by 10 raised to a power. For example, 86400 (the number of seconds in a day) can be written as 8.64 * 10^4 or 0.03937 (the number of inches in a millimetre) can be written as 3.937 * 10^-2. For slide rule use this is very convenient as all the numbers you use are in the range of the slide rule scales. You still have to do an approximate calculation but since the range of numbers is more limited there is less chance of error. As an example, which would be familiar to any hydrologist, the average flow during a month is 1.73 m^3/sec for a river basin whose drainage area is 176 km^2. The rainfall over the basin was 80.7 mm. What percentage of the rainfall became runoff. The basic method is: Percentage = 100 * 1.73 m^3/sec * 86400 sec/day * 31 day/month /(176 km^2 * (10^6 m^2 /km^2 ) * 80.7 mm * 10^-3 m/mm) [All converted to metres] = 10^2 * 1.73 * 8.64 * 10^4 * 3.1 * 10^1 / (1.76 * 10^2 * 10^6 * 8.07*10^1 * 10^-3 ) = (1.73 * 8.64 * 3.1 / (1.76 *8.07 ) ) * (10^7 / 10^6) = 3.26 * 10^1 [by approximation (2 * 8 * 3) / (2 * 8) = 3 ) ] =32.6 % As you can see, for numbers which cover a large magnitude the use of scientific notation plus the approximation method works well and is easy to apply.
{"url":"http://www.sliderules.info/a-to-z/decimal-point.htm","timestamp":"2024-11-12T00:32:47Z","content_type":"text/html","content_length":"17916","record_id":"<urn:uuid:4ee73985-4abb-4a8d-8d48-a86c24e4cd59>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00721.warc.gz"}
a decimal 180/177 or 1.017 can be represented in multiple ways (even as a percentage). The key is knowing when we should use each representation and how to easily transition between a fraction, decimal, or percentage. Both are used to handle numbers less than one or between whole numbers, known as integers. Depending on the situation, decimals can be more clear. We don't say 1 and 1/2 dollar. We use the decimal version of $1.50. Same goes for fractions. We will say 'the student got 2 of 3 questions correct'. If we need to convert a fraction quickly, let's find out how and when we should. 180/177 is 180 divided by 177 The first step of teaching our students how to convert to and from decimals and fractions is understanding what the fraction is telling is. 180 is being divided into 177. Think of this as our directions and now we just need to be able to assemble the project! Fractions have two parts: Numerators and Denominators. This creates an equation. To solve the equation, we must divide the numerator (180) by the denominator (177). This is our equation: Numerator: 180 • Numerators are the portion of total parts, showed at the top of the fraction. Overall, 180 is a big number which means you'll have a significant number of parts to your equation. But having an even numerator makes your mental math a bit easier. Values closer to one-hundred make converting to fractions more complex. So how does our denominator stack up? Denominator: 177 • Denominators represent the total parts, located at the bottom of the fraction. 177 is a large number which means you should probably use a calculator. But the bad news is that odd numbers are tougher to simplify. Unfortunately and odd denominator is difficult to simplify unless it's divisible by 3, 5 or 7. Overall, two-digit denominators are no problem with long division. Next, let's go over how to convert a 180/177 to 1.017. How to convert 180/177 to 1.017 Step 1: Set your long division bracket: denominator / numerator $$ \require{enclose} 177 \enclose{longdiv}{ 180 } $$ Use long division to solve step one. This is the same method we all learned in school when dividing any number against itself and we will use the same process for number conversion as well. Step 2: Solve for how many whole groups you can divide 177 into 180 $$ \require{enclose} 00.1 \\ 177 \enclose{longdiv}{ 180.0 } $$ How many whole groups of 177 can you pull from 1800? 177 Multiply this number by 177, the denominator to get the first part of your answer! Step 3: Subtract the remainder $$ \require{enclose} 00.1 \\ 177 \enclose{longdiv}{ 180.0 } \\ \underline{ 177 \phantom{00} } \\ 1623 \phantom{0} $$ If there is no remainder, you’re done! If there is a remainder, extend 177 again and pull down the zero Step 4: Repeat step 3 until you have no remainder In some cases, you'll never reach a remainder of zero. Looking at you pi! And that's okay. Find a place to stop and round to the nearest value. Why should you convert between fractions, decimals, and percentages? Converting fractions into decimals are used in everyday life, though we don't always notice. Remember, fractions and decimals are both representations of whole numbers to determine more specific parts of a number. This is also true for percentages. So we sometimes overlook fractions and decimals because they seem tedious or something we only use in math class. But they all represent how numbers show us value in the real world. Here are examples of when we should use each. When you should convert 180/177 into a decimal Sports Stats - Fractions can be used here, but when comparing percentages, the clearest representation of success is from decimal points. Ex: A player's batting average: .333 When to convert 1.017 to 180/177 as a fraction Progress - If we were writing an essay and the teacher asked how close we are to done. We wouldn't say .5 of the way there. We'd say we're half-way there. A fraction here would be more clear and Practice Decimal Conversion with your Classroom • If 180/177 = 1.017 what would it be as a percentage? • What is 1 + 180/177 in decimal form? • What is 1 - 180/177 in decimal form? • If we switched the numerator and denominator, what would be our new fraction? • What is 1.017 + 1/2? Convert more fractions to decimals From 180 Numerator From 177 Denominator What is 180/167 as a decimal? What is 170/177 as a decimal? What is 180/168 as a decimal? What is 171/177 as a decimal? What is 180/169 as a decimal? What is 172/177 as a decimal? What is 180/170 as a decimal? What is 173/177 as a decimal? What is 180/171 as a decimal? What is 174/177 as a decimal? What is 180/172 as a decimal? What is 175/177 as a decimal? What is 180/173 as a decimal? What is 176/177 as a decimal? What is 180/174 as a decimal? What is 177/177 as a decimal? What is 180/175 as a decimal? What is 178/177 as a decimal? What is 180/176 as a decimal? What is 179/177 as a decimal? What is 180/177 as a decimal? What is 180/177 as a decimal? What is 180/178 as a decimal? What is 181/177 as a decimal? What is 180/179 as a decimal? What is 182/177 as a decimal? What is 180/180 as a decimal? What is 183/177 as a decimal? What is 180/181 as a decimal? What is 184/177 as a decimal? What is 180/182 as a decimal? What is 185/177 as a decimal? What is 180/183 as a decimal? What is 186/177 as a decimal? What is 180/184 as a decimal? What is 187/177 as a decimal? What is 180/185 as a decimal? What is 188/177 as a decimal? What is 180/186 as a decimal? What is 189/177 as a decimal? What is 180/187 as a decimal? What is 190/177 as a decimal? Convert similar fractions to percentages From 180 Numerator From 177 Denominator 181/177 as a percentage 180/178 as a percentage 182/177 as a percentage 180/179 as a percentage 183/177 as a percentage 180/180 as a percentage 184/177 as a percentage 180/181 as a percentage 185/177 as a percentage 180/182 as a percentage 186/177 as a percentage 180/183 as a percentage 187/177 as a percentage 180/184 as a percentage 188/177 as a percentage 180/185 as a percentage 189/177 as a percentage 180/186 as a percentage 190/177 as a percentage 180/187 as a percentage
{"url":"https://www.mathlearnit.com/fraction-as-decimal/what-is-180-177-as-a-decimal","timestamp":"2024-11-01T20:02:59Z","content_type":"text/html","content_length":"33136","record_id":"<urn:uuid:b4bf9cfe-6c48-4f31-89a3-98e01c6a1649>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00762.warc.gz"}
MSIAM | MSc in Industrial and Applied Mathematics - Grenoble In a very broad acceptation, shape and topology optimization is about finding the best domain (which may represent, depending on applications, a mechanical structure, a fluid channel,…) with respect to a given performance criterion (e.g. robustness, weight, etc.), under some constraints (e.g. of a geometric nature).
{"url":"https://msiam.imag.fr/courses/","timestamp":"2024-11-10T02:12:59Z","content_type":"text/html","content_length":"28188","record_id":"<urn:uuid:6931529e-105b-402b-bccd-07d07e725e63>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00221.warc.gz"}
Memory Type? (Hardware) - Casio CFX/AFX/FX/Prizm I have two calculators, a CASIO fx-82AU PLUS II and a SHARP EL-531WH, (which one do I preffer? the Casio, although the Sharp does have Binary, Hex, etc and Boolean operators). For both I notice when you save data in statistic mode and then turn the power off and then back on again, it still tores the memory, this was not a surprising fact. I was just wondering what type of memory the Casio (maybe the Sharp if someone else knows) has to store memory when turned off. What I trying to say is the memory volatile or not? Does in use flash memory? a form of RAM or a ROM? I also then pull the casing apart to see what type, all I noticed was the big black thing in the centre which I presumed was the IC? Also if you don't know about the Easter egg on most Casio calculators when you hold down Shift and 7 and press ON, then press 9 then Shift several times to get to Contrast then the button function When it goes on LY711X VerA SUM OA69 OK Pd- Read OK Press AC What does it mean?
{"url":"https://community.casiocalc.org/topic/7491-memory-type-hardware/","timestamp":"2024-11-05T22:52:47Z","content_type":"text/html","content_length":"175712","record_id":"<urn:uuid:f2fde48d-824d-42e6-aae3-edd11602e111>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00422.warc.gz"}
Visual Inference with R | R-bloggersVisual Inference with R [This article was first published on R on Methods Bites , and kindly contributed to ]. (You can report issue about the content on this page ) Want to share your content on R-bloggers? if you have a blog, or if you don't. How can we use data visualization for hypothesis testing? This question lies at the heart of this Methods Bites Tutorial by Cosima Meyer, which is based on Richard Traunmüller’s workshop in the MZES Social Science Data Lab in Fall 2017. We already covered the basic idea of visual inference in our blog post on Data visualization with R. Note: This blog post presents Richard’s workshop materials in condensed form. The complete workshop materials are available from our GitHub. What is visual inference? Visual inference uses our ability to detect graphical anomalies. The idea of formal testing remains the same in visual inference – with one exception: The test statistic is now a graphical display which is compared to a “reference distribution” of plots showing the null. Put differently, we plot both the “true pattern of the data” and additional random plots of our data. By comparing both, we should be able to identify the true data – if the pattern is not based on randomness. This approach can be applied to various (research) situations – some of them are described in the “Practical applications” section. Potential challenges and how to overcome them Major concerns related to exploratory data analysis are its seemingly informal approach to data analysis and the potential over-interpretation of patterns. Richard provides a line-up protocol how to best overcome these concerns: [ 1. Identify the question the plot is trying to answer or the pattern it is intended to show.] [ 2. Formulate a null hypothesis (usually this will be \(H_0\): “There is no pattern in the plot.”)] [ 3. Generate and visualize a null datasets (e.g., permutations of variable values, random simulations) ] The following examples illustrate this procedure and explain the steps in detail. Practical applications: How do we reveal the “true” data graphically? A step-by-step guide To reveal the “true” data, we may use several visual approaches. In the following, we present three different examples: 1) maps, 2) scatter plots, and 3) group comparisons. The underlying logic follows the line-up protocol described above. To produce the visual inference, we always apply the following steps: [ 1. Identify the question: ‘Is there a visual pattern?’] [ 2. Formulate a null hypothesis: ‘There is no visual pattern.’ ] [ 3. Generate null datasets: Just randomly permute one variable column and plot the data. ] [ 4. Add the “true” data: Add the true data to the null datasets. ] [ 5. Visual inference: Is there a visual difference between the randomly permuted data and the “true” data? ] 1) Maps This map provides an intuitive understanding of how to apply the line-up protocol to a real-world example. Richard uses data from the GLES (German Longitdunal Election Survey) as an example to analyze the interviewer selection effects. These biases arise if interviewers selectively contact certain households and fail to reach to others. Reasons might be that researchers try to avoid less comfortable areas. As a first step, we need to read in the required packages as well as the data and code the interviewer behavior by color. # Read all required packages # Read data data <- readRDS("sub_data.rds") # Code interviewer behavior by color data$col <- ifelse(data$status == "No Contact", "maroon3", "darkolivegreen2") Following the line-up protocol described above, we seek to answer the question if there is a visual pattern. Our null hypothesis assumes that there is no visual pattern. To generate the null dataset, we randomly permute one variable column and plot the data. # Generate random plot placement placement <- sample((1:20), 20) layout(matrix(placement, 4, 5)) # Generate 19 null plots par(mar = c(.01, .01, .01, .01), oma = c(0, 0, 0, 0)) for(i in 1:19) { # Randomize the order random <- sample(c(1:15591), 15591) # Plot # Refer to dataset database = "worldHires", fill = F, col = "darkgrey", # Range of x-axis xlim = c(6, 15), # Range of y-axis ylim = c(47.3, 55) # Refer to data cex = .1, # Type of plotting symbol pch = 19, col = data$col[random] We then proceed and add the true data to the null datasets. # Add the true plot database = "worldHires", fill = F, col = "darkgrey", # Range of x-axis xlim = c(6, 15), # Range of y-axis ylim = c(47.3, 55) # Refer to data cex = .1, # Type of plotting symbol pch = 19, col = data$col # Reveal the true plot box(col = "red", # Draw a box in red lty = 2, # Defines line type lwd = 2) # Defines line width which(placement == 20) # Defines the place of the box Using the code above, we receive twenty maps from Germany. In a last step, we ask if these plots are substantially different from one another. If yes, can you tell which one is the odd-one-out? Just wait for a few seconds to let the image reveal the answer. 2) Scatter plot Mimicking the approach for the maps, we proceed in a similar way with scatter plots. Assume we have two variables and want to plot their correlation with a scatter plot. To compare if their relation is random, we can make use of visual inference. To do so, we first need to load all required packages and read in the data. # Read required package library(foreign) # Necessary to load datasets in other formats (such as .dta) # Read the data slop <- read.dta("slop_2009_agg_example.dta") We then proceed and place randomly 20 plots within a 4x5 grid cells. # Generate a random plot placement placement <- sample((1:20), 20) layout(matrix(placement, 4, 5)) We want to position 19 out of 20 random plots and leave one grid cell empty for the “true” plot. Code: Plotting nineteen random scatter plots # Plot 19 null plots par(mar = c(.1, .1, .1, .1)) for(i in 1:19) { # Plot random scatter plots of the data random <- sample(c(1:dim(slop)[1]), dim(slop)[1]) axes = F, ann = F, cex = .4) # Plot a box with grey lines box(bty = "l", col = "grey") As we can see, we get a 4x5 grid cell with 19 randomly assigned scatter plots and one empty cell. We now proceed and fill this empty cell with the “true” data and plot a box around it. Code: Adding and revealing the true data # Add true plot axes = F, ann = F, cex = .4) box(bty = "l", # Plot a box with grey lines col = "grey") # Reveal true plot box(col = "red", # Plot a box with red dashed lines lty = 2, lwd = 2) which(placement == 20) # Define the position of the box We can even go one step further by adding an abline to the plots. To do this, we need to include the following line of code: abline(lm(slop$cdu ~ slop$mkath[random])) Code: Adding an abline # Generate a random plot placement placement <- sample((1:20), 20) layout(matrix(placement, 4, 5)) # Plot 19 null plots par(mar = c(.1, .1, .1, .1)) for(i in 1:19) { # Plot random scatter plots of the data random <- sample(c(1:dim(slop)[1]), dim(slop)[1]) axes = F, ann = F, cex = .4) # Add the abline to the plots abline(lm(slop$cdu ~ slop$mkath[random])) # Plot a box with grey lines box(bty = "l", col = "grey") # Add true plot axes = F, ann = F, cex = .4) abline(lm(slop$cdu ~ slop$mkath)) # Add the abline to the plot box(bty = "l", # Plot a box with grey lines col = "grey") # Reveal true plot box(col = "red", # Plot a box with red dashed lines lty = 2, lwd = 2) which(placement == 20) # Define the position of the box 3) Group comparisons This plot allows us to visually compare two groups: The dataset provides us information about the vote share for the CDU. It also includes a dummy variable that indicates if the constituency is in Bavaria or not. The following plot compares the vote share for the CDU and distinguishes between constituencies within Bavaria (purple) and outside of Bavaria (green). We need to generate again the 4x5 grid cells with the random plots and the “true” plot. The following code first plots the 19 random scatter plots. As we can see, we get a 4x5 grid cell with 19 randomly assigned scatter plots and one empty cell. We now proceed and fill again this empty cell with the “true” data and plot a box around it. Code: Create group comparison # Generate random plot placement placement <- sample((1:20), 20) layout(matrix(placement, 4, 5)) # Plot 19 Null Plots par(mar = c(.1, .1, .1, .1)) for (i in 1:19) { random <- sample(c(1:dim(slop)[1]), dim(slop)[1]) axes = F, ann = F, cex = .4, xlim = c(-1, 2) mean(slop$cdu[slop$bayern[random] == 1]), pch = "-", col = "purple4", cex = 3) mean(slop$cdu[slop$bayern[random] == 0]), pch = "-", col = "darkolivegreen2", cex = 3) box(bty = "l", col = "grey") # Add true plot axes = F, ann = F, cex = .4, xlim = c(-1, 2) mean(slop$cdu[slop$bayern == 1]), pch = "-", col = "purple4", cex = 3) mean(slop$cdu[slop$bayern == 0]), pch = "-", col = "darkolivegreen2", cex = 3) box(bty = "l", col = "grey") # Reveal True Plot box(col = "red", lty = 2, lwd = 2) which(placement == 20) About the presenter Richard Traunmüller is a Visiting Associate Professor of Political Science at the University of Mannheim and currently on leave from Goethe University Frankfurt, where he is an Assistant Professor of Empirical Democracy Research. He has a strong interest in Bayesian analysis, data visualization, and survey experiments. He studies challenges that arise from deep-seated societal change: global migration and religious diversity, free speech in the digital age, as well as the legacies of civil war and sexual violence.
{"url":"https://www.r-bloggers.com/2019/07/visual-inference-with-r/","timestamp":"2024-11-02T18:19:13Z","content_type":"text/html","content_length":"146161","record_id":"<urn:uuid:39c97cba-fd50-460d-8380-830f9202a9e7>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00415.warc.gz"}
Growing and Pruning Networks The neural network modeler is faced with a huge array of models and training regimes from which to select. This course can only serve to introduce you to the most common and general models. However, even after deciding, for example, to train a simple feed forward network, using some specific form of gradient descent, with tanh nodes in a single hidden layer, an important question to be addressed is remains: how big a network should we choose? How many hidden units, or, relatedly, how many weights? By way of an example, the nonlinear data which formed our first example can be fitted very well using 40 tanh functions. Learning with 40 hidden units is considerably harder than learning with 2, and takes significantly longer. The resulting fit is no better (as measured by the sum squared error) than the 2-unit model. The most usual answer is not necessarily the best: we guess an appropriate number (as we did above). Another common solution is to try out several network sizes, and select the most promising. Neither of these methods is very principled. Two more rigorous classes of methods are available, however. We can either start with a network which we know to be too small, and iteratively add units and weights, or we can train an oversized network and remove units/weights from the final network. We will look briefly at each of these approaches. Growing networks The simplest form of network growing algorithm starts with a small network, say one with only a single hidden unit. The network is trained until the improvement in the error over one epoch falls below some threshold. We then add an additional hidden unit, with weights from inputs and to outputs. We initialize the new weights randomly and resume training. The process continues until no significant gain is achieved by adding an extra unit. The process is illustrated below. Cascade correlation Beyond simply having too many parameters (danger of overfitting), there is a problem with large networks which has been called the herd effect. Imagine we have a task which is essentially decomposable into two sub-tasks A and B. We have a number of hidden units and randomly weighted connections. If task A is responsible for most of the error signal arriving at the hidden units, there will be a tendency for all units to simultaneously try to solve A. Once the error attributable to A has been reduced, error from subtask B will predominate, and all units will now try to solve that, leading to an increase again in the error from A. Eventually, due mainly to the randomness in the weight initialization, the herd will split and different units will address different sub-problems, but this may take considerable time. To get around this problem, Fahlman (1991) proposed an algorithm called cascade correlation which begins with a minimal network having just input and output units. Training a single layer requires no back-propagation of error and can be done very efficiently. At some point further training will not produce much improvement. If network performance is satisfactory, training can be stopped. If not, there must be some remaining error which we wish to reduce some more. This is done by adding a new hidden unit to the network, as described in the next paragraph. The new unit is added, its input weights are frozen (i.e. they will no longer be changed) and all output weights are once again trained. This is repeated until the error is small enough (or until we give up). To add a hidden unit,we begin with a candidate unit and provide it with incoming connections from the input units and from all existing hidden units. We do not yet give it any outgoing connections. The new unit's input weights are trained by a process similar to gradient descent. Specifically, we seek to maximize the covariance between v, the new unit's value, and E[o], the output error at output unit o. We define S as: where o ranges over the output units and p ranges over the input patterns. The terms v and E[o] over all patterns. Performing gradient ascent on the partial derivative S, we freeze the input weights, and install the unit in the network as described above. The whole process is illustrated below. In (1) we train the weights from input to output. In (2), we add a candidate unit and train its weights to maximize the correlation with the error. In (3) we retrain the output layer, (4) we train the input weights for another hidden unit, (5) retrain the output layer, etc. Because we train only one layer at a time, training is very quick. What is more, because the weights feeding into each hidden unit do not change once the unit has been added, it is possible to record and store the activations of the hidden units for each pattern, and reuse these values without recomputation in later Pruning networks An alternative approach to growing networks is to start with a relatively large network and then remove weights so as to arrive at an optimal network architecture. The usual procedure is as follows: 1. Train a large, densely connected, network with a standard training algorithm 2. Examine the trained network to assess the relative importance of the weights 3. Remove the least important weight(s) 4. retrain the pruned network 5. Repeat steps 2-4 until satisfied Deciding which are the least important weights is a difficult issue for which several heuristic approaches are possible. We can estimate the amount by which the error function E changes for a small change in each weight. The computational form for this estimate would take us a little too far here. Various forms of this technique have been called optimal brain damage, and optimal brain surgeon. [Top] [Next: Preconditioning the network] [Back to the first page]
{"url":"https://nic.schraudolph.org/teach/NNcourse/growing.html","timestamp":"2024-11-10T02:55:50Z","content_type":"text/html","content_length":"7573","record_id":"<urn:uuid:747bba75-9978-4054-9b48-e5d89e4fff7c>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00520.warc.gz"}
RSICC Home Page RSICC CODE PACKAGE PSR-456 1. NAME AND TITLE PCC/SRC: Code System to Calculate Correlation & Regression Coefficients. Sandia National Laboratories, Albuquerque, New Mexico, through the Energy Science and Technology Software Center, Oak Ridge, Tennessee. FORTRAN 77; DEC VAX11/780 (P00456D0VAX00). 4. NATURE OF PROBLEM SOLVED PCC/SRC is designed for use in conjunction with sensitivity analyses of complex computer models. PCC/SRC calculates the partial correlation coefficients (PCC) and the standardized regression coefficients (SRC) from the multivariate input to, and output from, a computer model. 5. METHOD OF SOLUTION PCC/SRC calculates the coefficients on either the original observations or on the ranks of the original observations. These coefficients provide alternative measures of the relative contribution (importance) of each of the various input variables to the observed variations in output. Relationships between the coefficients and differences in their interpretations are identified. If the computer model output has an associated time or spatial history, PCC/SRC will generate a graph of the coefficients over time or space for each input-variable, output-variable combination of interest, indicating the importance of each input value over time or space. Maxima of 100 observations, 100 different time steps or intervals between successive dependent variable readings, 50 independent variables (model input), 20 dependent variables (model output), and 10 ordered triples. 7. TYPICAL RUNNING TIME NESC executed the sample problem in 30 CPU seconds on a DEC VAX11/780. 1093 Kbytes of memory are required for execution on a DEC Vax 11. The program runs under VMS 4.1or VMS 4.2 and requires a FORTRAN 77 compiler. 10. REFERENCES a) Included in document: R. L. Iman, M. J. Shortencarier, J. D. Johnson, "A FORTRAN 77 Program and User's Guide for the Calculation of Partial Correlation and Standardized Regression Coefficients," NUREG/CR-4122, SAND85-0044 (June 1985). b) Background information: R. L. Iman, and M. J. Shortencarier, "A FORTRAN 77 Program and User's Guide for the Generation of Latin Hypercube and Random Samples for Use with Computer Models," NUREG/CR-3624, SAND83-2365 (March 11. CONTENTS OF CODE PACKAGE Included in the package are the referenced document in (10.a) and one 3.5" DOS formatted diskette which includes the source file and sample case input and output. 12. DATE OF ABSTRACT November 1999.
{"url":"https://rsicc.ornl.gov/codes/psr/psr4/psr-456.html","timestamp":"2024-11-11T00:26:48Z","content_type":"text/html","content_length":"5215","record_id":"<urn:uuid:bfdbb398-520a-45e3-8cff-a59b2099f972>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00755.warc.gz"}
Understanding Mathematical Functions: How To Write Range Of A Function Mathematical functions are essential tools in understanding relationships between different quantities. They take an input value and produce a corresponding output value. One important aspect of understanding functions is being able to determine the range of a function, which refers to the set of all possible output values. In this blog post, we will delve into the concept of writing the range of a function and provide some tips on how to do so effectively. Key Takeaways • Mathematical functions relate input values to output values. • The range of a function is the set of all possible output values. • Understanding the range of a function is crucial for various real-life applications. • Graphs and algebraic methods can be used to determine the range of a function. • Misinterpreting the domain as the range and failing to consider all possible outputs are common mistakes to avoid when writing the range of a function. Understanding Mathematical Functions: How to write range of a function In order to understand how to write the range of a function, it is important to first have a clear understanding of what a mathematical function is and how it relates to its input and output. A. Define what a mathematical function is A mathematical function is a relation between a set of inputs and a set of possible outputs where each input is related to exactly one output. In other words, for every input there is a unique output. This relationship can be represented in various forms such as equations, graphs, or tables. B. Explain the input and output relationship in a function The input in a function, often denoted as x, is the independent variable that can take on various values. The output, often denoted as f(x) or y, is the dependent variable that is determined by the input. The function itself defines how the input values are transformed to produce the corresponding output values. Writing the range of a function Once a clear understanding of mathematical functions is established, we can delve into how to write the range of a function. The range of a function is the set of all possible output values that the function can produce. • 1. Determine the output values To write the range of a function, it is important to determine all the possible output values that the function can produce for the given range of input values. This can be done by evaluating the function for different input values and identifying the corresponding output values. • 2. Identify the set of output values Once the output values are determined, the range of the function is simply the set of all these output values. This set can be represented in various forms such as listing the values, using interval notation, or graphically depicting the range on a coordinate plane. Understanding the range of a function A. Define what the range of a function is When we talk about the range of a function, we are referring to the set of all possible values that the function can output based on its input. In other words, it is the collection of all the output values of the function. 1. How it is represented • The range of a function is often represented using interval notation or set notation. For example, if the function's range is all real numbers, it can be represented as (-∞, ∞) in interval notation or as {x | x is a real number} in set notation. 2. Finding the range • To find the range of a function, one must plug in different values for the input and observe the corresponding output values. The collection of all these output values will give us the range of the function. B. Discuss the significance of finding the range of a function Finding the range of a function is important for a few reasons. Firstly, it helps us understand the behavior and limitations of the function. Knowing the range can also assist in determining the domain of the function. Additionally, it can be useful in identifying the maximum and minimum values of the function, which can be essential in various real-world applications. Techniques for writing the range of a function When it comes to understanding mathematical functions, it is important to be able to write the range of a function. The range of a function refers to the set of all possible values that the function can output. There are several techniques that can be used to determine the range of a function, including using the graph of the function and algebraic methods. Let's explore these techniques in more A. Use the graph of the function to find the range One of the most straightforward ways to determine the range of a function is by examining its graph. The graph of a function visually represents the relationship between the input and output values. By analyzing the behavior of the graph, you can identify the range of the function. • Identify the highest and lowest points on the graph: By locating the highest and lowest points on the graph of the function, you can determine the maximum and minimum values of the output. The range will then be the set of all values between the maximum and minimum. • Consider the behavior of the graph: Pay attention to the overall shape and direction of the graph. Is it increasing or decreasing? Is it bounded or unbounded? These characteristics can provide insight into the range of the function. • Look for asymptotes and intercepts: Asymptotes and intercepts can also provide valuable information about the range of the function. By analyzing these features, you can further refine your understanding of the possible output values. B. Use algebraic methods to determine the range In addition to using the graph of the function, algebraic methods can be employed to determine the range of a function. These methods involve manipulating the function algebraically to identify the possible output values. • Find the domain of the function: The domain of the function represents the set of all possible input values. By considering the domain, you can identify the corresponding output values and thus determine the range of the function. • Use interval notation: Interval notation can be used to express the range of a function in a concise and precise manner. By identifying the intervals of output values, you can effectively communicate the range of the function. • Apply function properties: Certain properties of functions, such as symmetry and periodicity, can provide valuable insights into the range of the function. By leveraging these properties, you can refine your understanding of the possible output values. Real-life applications of understanding the range of a function Understanding the range of a function is crucial in real-life scenarios as it helps in making informed decisions and predictions based on the data provided by the function. The range of a function provides important insights into the possible outputs or values that the function can produce, which is essential in various fields. A. Discuss how understanding the range of a function is crucial in real-life scenarios • Understanding the range of a function is crucial in real-life scenarios as it helps in analyzing and interpreting data accurately. • It allows for predicting future outcomes based on the possible range of values that the function can produce. • Understanding the range of a function is essential for optimizing processes and making informed decisions in various fields. B. Provide examples of how the range of a function is used in different fields • Finance: In finance, understanding the range of a function is crucial for predicting future stock prices, analyzing investment options, and managing risk. • Engineering: In engineering, the range of a function is used to determine the maximum and minimum values for designing structures, optimizing processes, and ensuring safety. • Healthcare: In healthcare, understanding the range of a function is important for analyzing patient data, predicting health outcomes, and optimizing treatment plans. • Statistics: In statistics, the range of a function is used to analyze and interpret data, make predictions, and draw meaningful conclusions from the data sets. Common Mistakes to Avoid When Writing the Range of a Function When writing the range of a function, there are several common mistakes that should be avoided in order to accurately represent the outputs of the function. Here are two of the most common mistakes: A. Misinterpreting the domain as the range One common mistake when writing the range of a function is misinterpreting the domain as the range. The domain of a function consists of all possible input values, while the range consists of all possible output values. It is important to differentiate between the two and not confuse the domain with the range. B. Failing to consider all possible outputs of the function Another common mistake is failing to consider all possible outputs of the function. When determining the range of a function, it is essential to consider all the possible output values that the function can produce. Failing to do so can result in an incomplete or inaccurate representation of the function's range. By avoiding these common mistakes, you can ensure that the range of a function is accurately determined and represented. In conclusion, understanding the range of a mathematical function is crucial in solving problems and analyzing data. We discussed how to write the range of a function by considering the output values it produces. By identifying the lowest and highest values, and any restrictions, we can determine the range effectively. • Summarize the key points discussed: We covered the definition of the range and the steps to finding it, including identifying the output values and considering any restrictions. • Reinforce the importance of understanding the range of a function in mathematics: Understanding the range of a function is essential for various applications, including determining the possible output values and analyzing the behavior of the function. Keep practicing and applying these techniques to enhance your mathematical skills and problem-solving abilities! ONLY $99 Immediate Download MAC & PC Compatible Free Email Support
{"url":"https://dashboardsexcel.com/blogs/blog/understanding-mathematical-functions-how-to-write-range-of-a-function","timestamp":"2024-11-11T18:27:50Z","content_type":"text/html","content_length":"213944","record_id":"<urn:uuid:f35cc3ae-5ad5-40be-8a2b-4257c8532dca>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00678.warc.gz"}
ManPag.es - slasq3.f − subroutine SLASQ3 (I0, N0, Z, PP, DMIN, SIGMA, DESIG, QMAX, NFAIL, ITER, NDIV, IEEE, TTYPE, DMIN1, DMIN2, DN, DN1, DN2, G, TAU) SLASQ3 checks for deflation, computes a shift and calls dqds. Used by sbdsqr. Function/Subroutine Documentation subroutine SLASQ3 (integerI0, integerN0, real, dimension( * )Z, integerPP, realDMIN, realSIGMA, realDESIG, realQMAX, integerNFAIL, integerITER, integerNDIV, logicalIEEE, integerTTYPE, realDMIN1, realDMIN2, realDN, realDN1, realDN2, realG, realTAU) SLASQ3 checks for deflation, computes a shift and calls dqds. Used by sbdsqr. SLASQ3 checks for deflation, computes a shift (TAU) and calls dqds. In case of failure it changes shifts, and tries again until output is positive. I0 is INTEGER First index. N0 is INTEGER Last index. Z is REAL array, dimension ( 4*N ) Z holds the qd array. PP is INTEGER PP=0 for ping, PP=1 for pong. PP=2 indicates that flipping was applied to the Z array and that the initial tests for deflation should not be DMIN is REAL Minimum value of d. SIGMA is REAL Sum of shifts used in current segment. DESIG is REAL Lower order part of SIGMA QMAX is REAL Maximum value of q. NFAIL is INTEGER Number of times shift was too big. ITER is INTEGER Number of iterations. NDIV is INTEGER Number of divisions. IEEE is LOGICAL Flag for IEEE or non IEEE arithmetic (passed to SLASQ5). TTYPE is INTEGER Shift type. DMIN1 is REAL DMIN2 is REAL DN is REAL DN1 is REAL DN2 is REAL G is REAL TAU is REAL These are passed as arguments in order to save their values between calls to SLASQ3. Univ. of Tennessee Univ. of California Berkeley Univ. of Colorado Denver NAG Ltd. September 2012 Definition at line 181 of file slasq3.f. Generated automatically by Doxygen for LAPACK from the source code.
{"url":"https://manpag.es/SUSE131/3+SLASQ3","timestamp":"2024-11-12T13:43:50Z","content_type":"text/html","content_length":"21148","record_id":"<urn:uuid:6088bbf6-2331-4e78-8f4a-234a0c0ba18d>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00424.warc.gz"}
Calculations Using Significant Figures When determining the number of significant figures, the zeros used to position the comma are not taken into account. The number 0.0023 has only two significant figures. The number 0.0000002354 has four significant figures. Writing the number in scientific notation shows the significant figures more clearly: 2.3 10 ^-3 and 2.354 10 ^-7 . How many significant figures does the number 2,300 10 ^5 have? The answer is four significant figures; And 2.0300 10 ^-20 ? The answer is five significant figures. Product and quotient: "The result of multiplying or dividing two numbers has no more significant figures than the number with least significant figures used in the operation." See the following examples: The final result has three significant figures, the same as 5.82. The calculator returns 119.06556. Final result with three significant figures. The result of the calculator is: 19.0035
{"url":"https://www.quimicafisica.com/en/fundamentals-of-chemistry/calculations-using-significant-figures.html","timestamp":"2024-11-04T20:55:09Z","content_type":"text/html","content_length":"30624","record_id":"<urn:uuid:f12a11b7-ae65-49e1-94eb-4894bea686f7>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00665.warc.gz"}
Transpose VLOOKUP in Excel The VLOOKUP function looks for a value in the leftmost column of a table and then returns a value in the same row from the column you specify. By default, the table must be in a vertical range and sorted in ascending order. In some cases, however, the lookup table may be a horizontal range. This tutorial shows how to use the VLOOKUP function to return data from a horizontal range or transpose VLOOKUP in Excel. This tutorial will show how to lookup data from the top vertical range and fill it into the bottom vertical range: To achieve this, we need to use formulas that combine the VLOOKUP function and the TRANSPOSE function that converts a vertical range of cells to a horizontal range or vice versa. Transpose the Second Row into the Second Column To transpose the second row in the first horizontal data range into the second column of the second vertical data range, we use the following steps: 1. Select cell B7 and type in the following formula: 1 =VLOOKUP($A7,TRANSPOSE($B$1:$E$3),2,FALSE) 2. Click the Enter button on the Formula bar to enter the formula. Double-click or drag down the fill handle to copy the formula down the column. Transpose the Third Row into the Third Column To transpose the third row in the first horizontal data range into the third column of the second vertical data range, we use the following steps: 1. Select cell C7 and type in the following formula: 1 =VLOOKUP(A$7,TRANSPOSE($B$1:$E$3),3,FALSE) 2. Click the Enter button on the Formula bar. Double-click or drag down the fill handle to copy the formula down the column. This tutorial has shown how to transpose VLOOKUP in Excel. We hope you found the information useful.
{"url":"https://officetuts.net/excel/formulas/transpose-vlookup/","timestamp":"2024-11-13T07:47:04Z","content_type":"text/html","content_length":"146064","record_id":"<urn:uuid:ea39c3fa-21b9-4378-a757-85f5c7c7ffca>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00045.warc.gz"}
How to count linear and affine closed lambda terms? Paper Information Title: How to count linear and affine closed lambda terms? Authors: Pierre Lescanne Proceedings: Linearity/TLLA Pre-proceedings Editors: Maribel Fernandez, Valeria de Paiva, Thomas Ehrhard and Lorenzo Tortora De Falco Keywords: linear term, affine term, combinatoric, lambda calculus ABSTRACT. Affine lambda-terms are lambda-terms in which each bound variable occurs at most once and linear lambda-terms are lambda-terms in which each bound variable occurs once and only once. In this paper we count the number of affine closed lambda-terms of size n, linear closed lambda-terms of size n, affine closed beta-normal forms of size n and linear closed Abstract: beta-normal forms of size n, for several measures of the size of lambda-terms. From these formulas, we show how we can derive programs for generating all the terms of size n for each class. The foundation of all of this is a specific data structure, made of contexts in which one counts all the holes at each level of abstractions by lambda's. Pages: 6 Talk: Jul 08 09:20 (Session 34J)
{"url":"http://t-news.cn/Floc2018/FLoC2018-pages/paper2438.html","timestamp":"2024-11-13T12:11:02Z","content_type":"application/xhtml+xml","content_length":"10667","record_id":"<urn:uuid:12fefc19-3180-4915-9d41-8bcd609cf1e5>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00490.warc.gz"}
Quantum Mechanics: Part II 6. Quantum Mechanics: Part II# 6.1. The Schrödinger Wave Equation# In November 1925, Schrödinger was presenting on de Broglie’s wave theory for particles when Peter Debye suggested that there should be a wave equation. Within a few weeks, Schrödinger found a suitable equation based on what he knew about geometrical and wave optics. Schrödinger’s equation permits wave-like solutions even though it is not strictly a wave equation in a mathematical sense (see Eqn. (5.8)). It is more analogous to a diffusion equation with a diffusion coefficient \(\beta^2 = h/2m\) (Mita 2021). Newton’s laws govern the motion of particles, where we need a similar set of equations to describe the wave motion of particles. A wave equation is needed that is dependent on the potential field that the particle experiences. The wave function \(\Psi\) is operated on by the field and allows us to calculate the physical observables (e.g., position, energy, momentum, etc.). Note that we will not be able to determine the exact position and momentum simultaneously due to the uncertainty principle. The Schrödinger wave equation in its time-dependent form for a particle moving in a potential \(V\) in one dimension is \[i\hbar \frac{\partial \Psi(x,t)}{\partial t} = \left[-\frac{\hbar^2}{2m} \frac{\partial }{\partial x^2} + V\right]\Psi(x,t),\] which is a complex partial differential equation (PDE). Both the potential \(V\) and the wave function \(\Psi\) may be functions of space and time (i.e., \(V(x,t)\) and \(\Psi(x,t)\)). The extension of Eqn. (6.1) into three dimensions is straightforward, \[\begin{align*} i\hbar \frac{\partial \Psi(x,y,z,t)}{\partial t} &= \left[-\frac{\hbar^2}{2m} \left( \frac{\partial }{\partial x^2} + \frac{\partial }{\partial y^2}+ \frac{\partial }{\partial z^2}\ right) + V(x,y,z,t)\right]\Psi(x,y,z,t), \\ &= \left[-\frac{\hbar^2}{2m} \nabla^2 + V(x,y,z,t)\right]\Psi(x,y,z,t), \end{align*}\] but this introduction is restricted to the one-dimensional form until Section 6.5. The wave equation must be linear so that we can use the superposition principle to form wave packets comprised of two or more waves. Prove that the wave function in Eqn. (6.1) is linear by using the addition of two waves by using two waves (\(\Psi_1\) and \(\Psi_2\)) with constant coefficients (\(a\) and \(b\)) as \[ \Psi(x,t) = a\Psi_1(x,t) + b\Psi_2(x,t), \] which satisfies the Schrödinger equation. To apply the Schrödinger equation, we need to know the partial derivatives of \(\Psi\) with respect to time \(t\) and position \(x\). These derivatives are \[\begin{align*} \frac{\partial \Psi}{\partial t} &= a\frac{\partial \Psi_1}{\partial t} + b\frac{\partial \Psi_1}{\partial t}, \\ \frac{\partial \Psi}{\partial x} &= a\frac{\partial \Psi_1}{\partial x} + b\frac{\partial \Psi_1}{\partial x}, \\ \frac{\partial^2 \Psi}{\partial x^2} &= a\frac{\partial^2 \Psi_1}{\partial x^2} + b\frac{\partial^2 \Psi_1}{\partial x^2}. \end{align*}\] Substituting these derivatives into Eqn. (6.1) produces \[ i\hbar \left(a\frac{\partial \Psi_1}{\partial t} + b\frac{\partial \Psi_1}{\partial t} \right) = -\frac{\hbar^2}{2m}\left(a\frac{\partial^2 \Psi_1}{\partial x^2} + b\frac{\partial^2 \Psi_1}{\ partial x^2}\right) + V\left(a\Psi_1 +b\Psi_2 \right). \] Collecting all the \(\Psi_1\) terms to the left-hand side and the \(\Psi_2\) terms to the right-hand side gives \[ a \left(i\hbar\frac{\partial \Psi_1}{\partial t} + \frac{\hbar^2}{2m}\frac{\partial^2 \Psi_1}{\partial x^2} - V\Psi_1 \right) = -b \left(i\hbar\frac{\partial \Psi_2}{\partial t} + \frac{\hbar^2} {2m}\frac{\partial^2 \Psi_2}{\partial x^2} - V\Psi_2 \right). \] If \(\Psi_1\) and \(\Psi_2\) satisfy Eqn. (6.1) individually, then the quantities in parentheses are identically zero. Therefore \(\Psi\) is also a solution. A wave can be characterized by a wave number \(k\) and angular frequency \(\omega\) traveling in the \(+x\) direction by \[\Psi(x,t) = A \sin\left(kx-\omega t + \phi\right).\] However, Eqn. (6.2) could be generalized to be a complex function with both sines and cosines. A more general form of a wave function is \[ \Psi(x,t) = Ae^{i(kx-\omega t)} = A\left[\cos(kx-\omega t) + i \sin(kx-\omega t)\right], \] where the coefficient \(A\) can be complex. The complex exponential \(e^{i(kx-\omega t)}\) is an acceptable solution to the time-dependent Schrödinger wave equation, but not all functions of \(\sin (kx-\omega t)\) and \(\cos(kx-\omega t)\) are solutions. (a) Show that \(\Psi_A(x,t) = Ae^{i(kx-\omega t)}\) satisfies the time-dependent Schrödinger wave equation. (b) Determine whether \(\Psi_B(x,t) = A\sin(kx-\omega t)\) is an acceptable solution to the time-dependent Schrödinger wave equation. (a) Similar to the previous example, we need to take the appropriate partial derivatives of the wave function \(\Psi_A\) with respect to time as \[\begin{align*} \frac{\partial \Psi_A}{\partial t} &= \frac{\partial }{\partial t}\left(Ae^{i(kx-\omega t)}\right), \\ &= -i\omega Ae^{i(kx-\omega t)} = -i\omega \Psi_A, \end{align*}\] and with respect to space as \[\begin{align*} \frac{\partial \Psi_A}{\partial x} &= \frac{\partial }{\partial x}\left(Ae^{i(kx-\omega t)}\right), \\ &= ik\left(Ae^{i(kx-\omega t)}\right) = ik \Psi_A, \\ \frac{\partial^2 \Psi_A} {\partial x^2} &= ik\frac{\partial }{\partial x}\left(Ae^{i(kx-\omega t)}\right),\\ &= i^2k^2 \left(Ae^{i(kx-\omega t)}\right) = -k^2 \Psi_A. \end{align*}\] Now we insert the results into Eqn. (6.1) to get \[\begin{align*} i\hbar \left(-i\omega \Psi_A \right) &= -\frac{\hbar^2}{2m} \left(-k^2\Psi_A\right) + V\Psi_A, \\ 0 &= \left(\hbar\omega -\frac{\hbar^2k^2}{2m} - V \right)\Psi_A. \end{align*}\] If we use \(E=hf=\hbar \omega\) and \(p = \hbar k\), we obtain \[ \left(E - \frac{p^2}{2m} - V\right)\Psi_A = 0, \] which is zero in our nonrelativistic formulation, because \(E = K + V = p^2/(2m) + V\). Thus \(\Psi_A\) appears to be an acceptable solution. (b) Similar to part a, we need to take the appropriate partial derivatives of the wave function \(\Psi_B\) with respect to time as \[\begin{align*} \frac{\partial \Psi_B}{\partial t} &= \frac{\partial }{\partial t}\left(A\sin(kx-\omega t) \right), \\ &= -\omega A\cos(kx-\omega t), \end{align*}\] and with respect to space as \[\begin{align*} \frac{\partial \Psi_B}{\partial x} &= \frac{\partial }{\partial x}\left(A\sin(kx-\omega t)\right), \\ &= kA\cos(kx-\omega t), \\ \frac{\partial^2 \Psi_B}{\partial x^2} &= ik\frac{\ partial }{\partial x}\left(kA\cos(kx-\omega t)\right),\\ &= -k^2 A\sin(kx-\omega t) = -k^2 \Psi_B. \end{align*}\] Now we insert the results into Eqn. (6.1) to get \[\begin{align*} i\hbar \left(-\omega A\cos(kx-\omega t) \right) &= -\frac{\hbar^2}{2m} \left(-k^2\Psi_B\right) + V\Psi_B, \\ &= \left(\frac{\hbar^2k^2}{2m} + V\right) \Psi_B. \end{align*}\] The left-hand side of the above result is complex where the right-hand side is real. Therefore, \(\Psi_B\) is not an acceptable wave function because Eqn. (6.1) is not satisfied for all \(x\) and \(t \). The function \(\Psi_B\) is a solution to the classical wave equation. 6.1.1. Normalization and Probability# The probability \(P(x)\ dx\) of a particle between \(x\) and \(x+dx\) is given as \[ P(x)\ dx = |\Psi(x,t)|^2\ dx = \Psi^*(x,t)\Psi(x,t)\ dx, \] where the complex conjugate operator \(^*\) negates the complex parts of a function (i.e., \(i\rightarrow -i\)) and leaves the real parts unchanged. The probability of a particle existing between two points (\(x_1\) and \(x_2\)) is given by \[ P = \int_{x_1}^{x_2} \Psi^*\Psi dx. \] If the wave function represents the probability of a particle existing somewhere, then the sum of all probability intervals must equal unity, or \[P(\text{all space}) = \int_{-\infty}^\infty \Psi^*\Psi dx = 1,\] which is called normalization. Show code cell content Hide code cell content import numpy as np import matplotlib.pyplot as plt from myst_nb import glue def Psi_alpha(x,A): #calculate the wave function Ae^{-|x|/alpha}; alpha is a global variable #A = normalization constant return A*np.exp(-np.abs(x)/alpha) fs = 'medium' lw = 2 col = (218/256., 26/256., 50/256.) alpha = 1. #scale of the plot norm_A = 1./alpha x_rng = np.arange(-5*alpha,5*alpha,0.01) x_a = np.arange(0,1,0.01) x_b = np.arange(1,2,0.01) fig = plt.figure(figsize=(6,3),dpi=150) ax = fig.add_subplot(111) Psi_all = Psi_alpha(x_rng,norm_A) Psi_a = Psi_alpha(x_a,norm_A) Psi_b = Psi_alpha(x_b,norm_A) ax.plot(x_rng,Psi_all,'-',color=col,lw=lw,label='$\Psi(x) = Ae^{-\\frac{|x|}{\\alpha}}$') ax.set_xlabel("Position ($\\alpha$)",fontsize=fs) glue("psi_fig", fig, display=False); Consider a wave packet formed by using the wave function \(\Psi(x) = Ae^{-\frac{|x|}{\alpha}}\), where \(A\) is a constant to be determined by normalization (Fig. 6.1). Normalize this wave function and find the probabilities of the particle being between (a) \(0\) and \(\alpha\) (gray area in Fig. 6.1 ) and (b) \(\alpha\) and \(2\alpha\) (blue area in Fig. 6.1). To normalize the wave function \(\Psi\), we use Eqn. (6.4) by \[\begin{align*} \int_{-\infty}^\infty \left(Ae^{-\frac{|x|}{\alpha}} \right) \left(Ae^{-\frac{|x|}{\alpha}}\right)\ dx &= 1, \\ \int_{-\infty}^\infty A^2e^{-\frac{2|x|}{\alpha}} dx &= 1. \end Because the wave function is symmetric about \(x=0\), we can evaluate only the positive interval \((0\leq x \leq \infty\)), multiply by \(2\), and drop the absolute value signs on \(|x|\) to get \[\begin{align*} 2\int_0^\infty A^2e^{-\frac{2x}{\alpha}} dx &= -\frac{2A^2\alpha}{2}e^{-\frac{2x}{\alpha}}\biggr\rvert_0^\infty, \\ &= -\alpha A^2 \left(e^{-\infty} - e^0 \right) = \alpha A^2 = 1. \ The coefficient \(A = \alpha^{-1/2}\), and the normalized wave function \(\Psi\) is \[ \Psi(x) = \frac{1}{\sqrt{\alpha}}e^{-\frac{|x|}{\alpha}}. \] We use the above result to determine the probability of the particle existing within each interval. Since the intervals are on the positive \(x\)-axis, we drop the absolute value signs on \(|x|\) to \[\begin{align*} P_a &= \frac{1}{\alpha} \int_0^\alpha e^{-\frac{2x}{\alpha}} dx, \\ &= -\frac{1}{2}e^{-\frac{2x}{\alpha}}\biggr\rvert_0^\alpha, \\ &= -\frac{1}{2}\left(e^{-2}-e^0\right) \approx 0.432. \end{align*}\] The probability in the other interval is \[\begin{align*} P_b &= \frac{1}{\alpha} \int_0^\alpha e^{-\frac{2x}{\alpha}} dx, \\ &= -\frac{1}{2}e^{-\frac{2x}{\alpha}}\biggr\rvert_\alpha^{2\alpha}, \\ &= -\frac{1}{2}\left(e^{-4}-e^{-2}\right) \ approx 0.059. \end{align*}\] The particle is much more likely to exist in interval (a) than in interval (b). This is expected given the shape of the wave function. The wave function \(e^{i(kx-\omega t)}\) represents a “free” particle under zero net force (constant \(V\)) moving along the \(x\) axis. There is a problem with this wave function if we try to normalize it. The normalization integral diverges to infinity! This occurs because there is a finite probability of the particle to exist anywhere along the \(x\) axis and our sum is trying to add up an infinite number of small boxes containing a finite probability. The only other possibility is a zero probability, but that is not an interesting physical result. Because this wave function has a precise \(k\) and \(\omega\), it represents a particle with a definite energy and momentum. From the uncertainty principle, \(\Delta E = 0\) and \(\Delta p = 0\), which implies that \(\Delta t = \infty\) and \(\Delta x = \infty\). We cannot know where the particle is at any time. We can still use such wave functions, if we restrict the particle to certain positions in space, such as in a box or in an atom. We can also form wave packets from such functions to localize the particle. 6.1.2. Properties of Valid Wave Functions# There are certain properties (i.e., boundary conditions) that an acceptable wave function \(\Psi\) must: 1. be finite everywhere to avoid infinite probabilities. 2. be single valued to avoid multiple values of the probability. 3. be continuous for finite potentials. Its spatial derivative \(\partial \Psi/\partial x\) must also be continuous. This is required due to the second-order derivative term in the Schrödinger equation. (There are exceptions to this rule for infinite potentials). 4. must approach zero as \(x \rightarrow \pm \infty\) to normalize the wave functions. Solutions for \(\Psi\) that do not satisfy these properties do not generally correspond to physically realizable circumstances. 6.1.3. Time-Independent Schrödinger Wave Equation# In many cases, the potential will not depend explicitly on time (i.e., time-independent). The dependence on time and position can then be separated in the Schödinger equation. Let \(\Psi(x,t) = \psi (x)f(t)\), and insert into Eqn. (6.1) to find \[ i\hbar \psi(x)\frac{\partial f(t)}{\partial t} = -\frac{\hbar^2 f(t)}{2m}\frac{\partial^2 \psi(x)}{\partial x^2} + V(x)\psi(x)f(t). \] Dividing by \(\psi(x) f(t)\), we get \[ \frac{i\hbar}{f(t)} \frac{d f(t)}{dt} = -\frac{\hbar^2}{2m}\frac{d^2 \psi(x)}{d x^2} + V(x). \] Notice that the left-hand side depends only on time, while the right-hand side depends only on spatial coordinates. This allows us to change the partial derivatives to ordinary derivatives because each side depends only on one variable. Each side must be equal to a constant because one variable may change independently of the other. Let’s call this constant \(B\) and set it equal to the left-hand side to get \[\begin{align*} \frac{i\hbar}{f}\frac{df}{dt} &= B, \\ i\hbar \int \frac{df}{f} &= \int B\ dt. \end{align*}\] We can integrate both sides and find \[ i\hbar \ln f = Bt + C, \] which includes the integration constant \(C\) that we may choose to be 0. Then, \[ \ln f = \frac{Bt}{i\hbar}, \] and we determine \(f\) to be \[ f(t) = e^{\frac{Bt}{i\hbar}} = e^{-\frac{iBt}{\hbar}}. \] If we let \(B = \hbar \omega = E\), then \(f(t) = e^{-i\omega t} = e^{-\frac{iEt}{\hbar}}\). We can now write the time-independent Schödinger equation as \[-\frac{\hbar^2}{2m} \left[\frac{d^2}{dx^2} + V(x)\right] \psi(x) = E\psi(x).\] The wave function \(\Psi(x,t)\) becomes \[\Psi(x,t) = \psi(x)e^{-i\omega t}.\] Many important results can be obtained when only the spatial part of the wave function \(\psi(x)\) is needed. The probability density used the complex conjugate operator, which now becomes useful. Using Eqn. (6.7), we have \[ \Psi^*\Psi &= \psi^2(x)\left(e^{i\omega t}e^{-i\omega t}\right), \\ &= \psi^2(x), \] where the probability distributions are constant in time. If the probability distribution represents a wave, then we have the phenomenon of standing waves. In quantum mechanics, we say the system is in a stationary state. Consider a metal in which there are free electrons, and the potential is zero (e.g., a conductor). What mathematical form does the wave function \(\psi(x)\) take? Using Eqn. (6.7), we let \(V(x)=0\) to get \[ -\frac{\hbar^2}{2m}\frac{d^2\psi(x)}{dx^2} = E\psi(x), \] and using Eqn. (5.35), we find \[ \frac{d^2\psi}{dx^2} = -\frac{2mE}{\hbar^2}\psi = k^2\psi.\] The differential equation \(d^2\psi/dx^2 = -k^2\psi\) has appeared several times in calculus and introductory physics (e.g., pendulum and simple harmonic motion). If the energy \(E>0\), then \(k^2\) is real and the wave function solution is \[ \psi(x) = A\sin kx + B\cos kx.\] Otherwise (\(E<0\)), then \(k^2\) is imaginary and the wave function solution is \[ \psi(x) = Ce^{ikx}.\] 6.1.4. Comparison of Classical and Quantum Mechanics# Newton’s second law (\(\vec{F} = m\vec{a}\)) and Schrödinger’s equation are both differential equations that are postulated to explain observed behavior, and experiments show that they are successful. Newton’s second law can be derived from the Schrödinger equation. Schrödinger’s equation is more fundamental. Classical mechanics only appears to be more precise because it deals with macroscopic phenomena, where the underlying uncertainties are just too small to be significant. An interesting parallel between classical and quantum mechanics lies in considering ray and wave optics. Throughout the 18th century, scientists argued whether wave or ray optics was more fundamental (Newton favored ray optics). In the early 19th century, it was shown that wave optics was need to explain diffraction and interference. Ray optics is a good approximation as long as the wavelength \ (\lambda\) is much smaller than the dimensions of the apertures because rays of light are characteristic of particle-like behavior. To describe interference phenomena, wave optics is required. For macroscopic objects, the de Broglie wavelength is so small that wave behavior is not apparent. Eventually the wave descriptions and quantum mechanics were required to understand all the data. Classical mechanics is a good macroscopic approximation, but there is only one correct theory, quantum mechanics, as far as we know. 6.2. Expectation Values# The wave equation formalism must measure physical quantities (e.g., position, momentum, and energy). Consider a measurement of the the position \(x\) of a particular system. If we make three measurements of the position, we are likely to obtain three different results due to uncertainty. If our method of measurement is inherently accurate, then there is some physical significance to the average of our measure values of \(x\). The precision of our result improves with additional measurement. In quantum mechanics, we use wave functions to calculate the expected result of the average of many measurements of a given quantity. Measurements are made through integration, which returns the average value (recall the mean value theorem from calculus). We call this result the expectation value, where the expectation value of \(x \) is \(\langle x \rangle\). *Physical observables are found the the epxectation value. The expectation value mus be real (i.e., not complex) because the results of measurements are real. Consider a particle that is constrained to move along the \(x\)-axis. If we make many measurements of the particle, we may observe the particle at the position \(x_1\) for a number \(N_1\) times, the position \(x_2\) for \(N_2\) times, and so forth. The average value of \(x\) (i.e., \(\bar{x}\) or \(x_{\rm av}\)) is then \[ \bar{x} = \frac{N_1 x_1 + N_2 x_2 + \cdots}{N_1 + N_2 + \cdots} = \frac{\sum_i N_i x_i}{\sum_i N_i}, \] from a series of discrete measurements. For continuous variables, we use the probability \(P(x,t)\) of observing the particle at a particular \(x\). Then the average value of \(x\) is \[ \bar{x} = \frac{\int_{-\infty}^\infty xP(x) dx}{\int_{-\infty}^\infty P(x) dx}. \] In quantum mechanics, we use the probability distribution (i.e., \(P(x)dx = \Psi^*(x,t)\Psi(x,t) dx\)) to determine the average or expectation value. The expectation value \(\langle x \rangle\) is given by \[ \langle x \rangle = \frac{\int_{-\infty}^\infty x\Psi^*(x,t)\Psi(x,t) dx}{\int_{-\infty}^\infty \Psi^*(x,t)\Psi(x,t) dx}. \] The denominator is the normalization equation. If the wave function is normalized, then the denominator is equal to unity. For a normalized wave function \(\Psi\), the expectation value is then given \[ \langle x \rangle = \int_{-\infty}^\infty x\Psi^*(x,t)\Psi(x,t) dx. \] The same general procedure is applicable to determining the expectation value of any function \(g(x)\) for a normalized wave function \(\Psi(x,t)\) by \[ \langle g(x) \rangle = \int_{-\infty}^\infty \Psi^*(x,t)g(x)\Psi(x,t) dx. \] Any knowledge we might have of the simultaneous value of the position \(x\) and the momentum \(p\) must be consistent with the uncertainty principle. To find the expectation value of \(p\), we need to represent \(p\) in terms of \(x\) and \(t\). Consider the wave function of the free particle, \(\Psi(x,t) = e^{i(kx-\omega t)}\). The spatial derivative is \[ \frac{\partial \Psi}{\partial x} = \frac{\partial}{\partial x}\left[ e^{i(kx-\omega t)}\right] = ike^{i(kx-\omega t)} = ik\Psi. \] Since \(k = p/\hbar\), this becomes \[ \frac{\partial \Psi}{\partial x} = \frac{i}{\hbar}p\Psi, \] \[ p\left[\Psi(x,t)\right] = -i\hbar \frac{\partial \Psi}{\partial x}. \] An operator is a mathematical operation that transforms one function into another. Coffee is the operator that transforms ideas into knowledge! An operator, denoted by \(\hat{Q}\), transforms the function \(f(x)\) by \(\hat{Q}f(x) = g(x)\). The momentum operator \(\hat{p}\) is then represented by \[ \hat{p} = -i\hbar \frac{\partial}{\partial x}, \] where the \(\hat{\ }\) symbol (i.e., “hat”) sign over the letter indicates that it is an operator. The momentum operator is not unique, where each of the physical observables has an associated operator that used to find the observable’s expectation value. To compute the expectation value of some physical observable \(Q\), the operator \(\hat{Q}\) must operate on \(\Psi\) before calculating the probability to get \[ \langle Q \rangle = \int_{-\infty}^\infty \Psi^*(x,t)\hat{Q}\Psi(x,t) dx. \] The expectation value of the momentum \(p\) becomes \[ \langle p \rangle &= \int_{-\infty}^\infty \Psi^*(x,t)\hat{p}\Psi(x,t) dx, \\ &=-i\hbar \int_{-\infty}^\infty \Psi^*(x,t)\frac{\partial \Psi(x,t)}{\partial x} dx. \] The position \(x\) is its own operator (i.e., \(\hat{x} = x\)). Operators for observables of both \(x\) and \(p\) can be constructed from \(\hat{x}\) and \(\hat{p}\). The energy operator \(\hat{E}\) can be constructed from the time derivative of the free-particle wave function, \[ \frac{\partial \Psi}{\partial t} = \frac{\partial}{\partial t}\left[ e^{i(kx-\omega t)}\right] = -i\omega \Psi. \] Substituting \(\omega = E/\hbar\) and rearranging, we find \[\begin{split}E\left[\Psi(x,t)\right] &= i\hbar \frac{\partial \Psi}{\partial t}, \\ \hat{E}\Psi(x,t) &= i\hbar \frac{\partial }{\partial t}\Psi(x,t).\end{split}\] The energy operator \(\hat{E}\) is used to find the expectation value \(\langle E \rangle\) of the energy by \[ \langle E \rangle &= \int_{-\infty}^\infty \Psi^*(x,t)\hat{E}\Psi(x,t) dx, \\ &=i\hbar \int_{-\infty}^\infty \Psi^*(x,t)\frac{\partial \Psi(x,t)}{\partial t} dx. \] The application of operators are general results and not limited to just free-particle wave functions. Use the momentum and energy operators with the conservation of energy to produce the Schrödinger wave equation. The conservation of energy tells us how to calculate the total energy \[ E = K + V = \frac{p^2}{2m} + V. \] We replace \(p\) with the operator and apply to the wave function \(\Psi\), which gives \[\begin{align*} \left[ \frac{1}{2m}\hat{p}^2 + V \right]\Psi &= \frac{-i\hbar}{2m}\frac{\partial}{\partial x}\left[-i\hbar\frac{\partial \Psi}{\partial x} \right] + V\Psi, \\ &= -\frac{\hbar^2}{2m} \frac{\partial^2 \Psi}{\partial x^2} + V\Psi. \end{align*}\] Notice that \(\hat{p}^2\) results in applying \(\hat{p}\) twice. Using the application of the energy operator from Eqn. (6.17), we obtain \[\begin{align*} \hat{E}\Psi &= \left[ \frac{1}{2m}\hat{p}^2 + V \right]\Psi, \\ i\hbar \frac{\partial }{\partial t}\Psi(x,t) &= -\frac{\hbar^2}{2m} \frac{\partial^2 \Psi}{\partial x^2} + V\Psi, \end which is the time-dependent Schrödinger wave equation. This is not a derivation, but only a verification of the consistency of the definition. 6.3. Infinite Square-Well Potential# The simplest system consists of a particle trapped in a well with infinitely hard walls that the particle cannot penetrate (i.e., particle-in-a-box). This scenario was explored in Sect. 5.8, but now we present the full quantum mechanical solution. The potential that gives rise to an infinite square well with a width \(L\) is given by \[ V(x) = \begin{cases} \infty &\qquad x\leq 0, x\geq L \\ 0 &\qquad 0<x<L. \end{cases} \] The particle is constrained to move only between \(x=0\) and \(x=L\), where the particle experiences no forces. The infinite square-well potential can approximate many physical situations, such as the energy levels of simple atomic and nuclear systems. If we insert the infinite square-well potential into the the time-independent Schödinger equation (Eqn. (6.7)), • the only possible solution for the region \(x=\leq 0\) or \(x\geq L\) is the wave function \(\psi(x) = 0\) because \(V = \infty\). There is zero probability for the particle to exist in this • the region \(0<x<L\) permits a nonzero solution because \(V=0\). The probability for the particle to exist in this region is 1. Using \(V=0\) in Eqn. Eqn. (6.7) produces \[ \frac{d^2 \psi}{dx^2} = -\frac{2mE}{\hbar^2}\psi = -k^2 \psi, \] where the wave number \(k = \sqrt{2mE/\hbar^2}\). A suitable solution to this equation is \[ \psi(x) = A\sin kx + B \cos kx, \] where \(A\) and \(B\) are constants used to normalize the wave function. The wave function must be continuous at the boundaries, which means that \(\psi(0)=\psi(L) = 0\). Let’s apply these boundary conditions as \[ \psi(0) &= A \sin (0) + B \cos(0) = B, \\ \psi(L) &= A \sin (kL) + B \cos(kL). \] For \(\psi(0) = 0\), this implies that \(B=0\). To apply \(\psi(L) = 0\), then \(A\sin(kL) = 0\) and \(A=0\) is a trivial solution. We must have for \(\sin(kL) = 0\) and \(n\) is a positive integer. The wave function is now \[ \psi_n(x) = A \sin \left(\frac{n\pi}{L} x \right) \quad (n=1,\ 2,\ 3,\ \ldots). \] The property that \(d\psi /dx\) must also be continuous so that we can normalize the wave function. The normalization condition is \[ P = \int_{-\infty}^\infty \psi_n^*(x)\psi_n(x) dx = 1, \] where substitution of the wave function yields \[ A^2 \int_0^L \sin^2 \left(\frac{n\pi}{L} x \right) dx = 1. \] This is a straightforward integral (with the help of an integral table) and gives \(L/2\), so that \(A^2(L/2) = 1\) and \(A = \sqrt{2/L}\). The normalized wave function becomes \[\psi_n(x) = \sqrt{\frac{2}{L}} \sin \left(\frac{n\pi}{L} x \right) \quad (n=1,\ 2,\ 3,\ \ldots).\] These wave functions are identical to a vibrating string with its ends fixed. The application of the boundary conditions corresponds to fitting standing waves into the box. It is not a surprise to find standing waves as the solution because wea are considering time-independent solutions. Since \(k_n = n\pi/L\), we have \[ k_n = \frac{n\pi}{L} = \sqrt{2mE_n}{\hbar^2}. \] Notice the subscript \(n\) that denotes quantities that depend on the integer \(n\) and have multiple values. This equation is solved for \(E_n\) to give \[E_n = n^2 \frac{\pi^2 \hbar^2}{2mL^2} \quad (n=1,\ 2,\ 3,\ \ldots).\] The possible energies \(E_n\) of the particle are quantized and the integer \(n\) is a quantum number. The quantization of energy occurs from the application of the boundary conditions (standing waves) to possible solutions of the wave equation. Each wave function \(\psi_n(x)\) has associated with it a unique energy \(E_n\). The lowest energy level given by \(n=1\) is called the ground state and has an energy \(E_1 = \pi^2\hbar^2/(2mL^2)\). Note that the lowest energy level cannot be zero because the wave function has zero probability (\(\psi_0 = 0\)). Classically the particle has equal probability of being anywhere inside the box. The classical probability density is \(P(x) = 1/L\). According to Bohr’s correspondence principle, we should obtain the same probability for large \(n\), where the classical and quantum results should agree. The quantum probability density is \((2/L)\sin^2(k_n x)\). For large values of \(n\), there will be many oscillations within the box and the average value of \(\sin^2 \theta\) over many cycles is \(1/2\). Therefore the quantum probability for large \(n\) is equal to \(1/L\), in agreement with the classical result. Useful Integrals \[\begin{align*} \int u\ dv &= uv - \int v\ du \\ \int x\sin ax\ dx &= \frac{1}{a}\left(\frac{1}{a}\sin ax - x\cos ax \right) \\ \int \sin^2 ax\ dx &= \frac{1}{2}\left(x - \frac{1}{2a}\sin 2ax \ right) \\ \int x\sin^2 ax\ dx &= \frac{1}{4}\left( x^2 - \frac{x}{a}\sin 2ax - \frac{1}{2a^2}\cos 2ax \right) \end{align*}\] Determine the expectation values for (a) \(x\), (b) \(x^2\), (c) \(p\), and (d) \(p^2\) of a particle in an infinite square well for the first excited state. The first excited state corresponds to \(n=2\), and we know the quantized (normalized) wave function for a particle in an infinite square well from Eqn. (6.24). This gives \[ \psi_2(x) = \sqrt{\frac{2}{L}} \sin\left(\frac{2\pi}{L}x \right). \] (a) Using \(\psi_2(x)\), we can find the expectation value of \(x\) through the integral (using an integral table) \[\begin{align*} \langle x \rangle &= \int_0^L \psi_2^* x \psi_2 dx, \\ &= \frac{2}{L}\int_0^L x\sin^2\left(\frac{2\pi}{L}x \right) dx, \\ &= \frac{L}{2}. \end{align*}\] The average position of the particle is in the middle of the box \((x=L/2)\), although this is not the most probable location. (b) The expectation value of \(x^2\) is found through \[\begin{align*} \langle x^2 \rangle &= \frac{2}{L}\int_0^L \left[x\sin\left(\frac{2\pi}{L}x \right)\right]^2 dx, \\ &= 0.32L^2, \end{align*}\] where you can use integration by parts, an integral table, or the python module sympy. the expectation value for \(\sqrt{\langle x^2\rangle}\) for the first excited state is \(0.57L\), which is larger than \(\langle x \rangle\). (c) The expectation value of \(p\) requires the first derivative \(d\psi_2/dx\), which is, \[ \frac{d\psi_2}{dx} = \sqrt{\frac{2}{L}} \frac{d}{dx}\left[\sin \left(\frac{2\pi}{L}x \right)\right] = \frac{2\pi}{L}\sqrt{\frac{2}{L}} \cos \left(\frac{2\pi}{L}x \right). \] Applying this result to the integral for \(\langle p \rangle\) gives, \[\begin{align*} \langle p\rangle &= i\hbar \int_0^L \psi_2^* \frac{d\psi_2}{dx} dx, \\ &= \frac{4\pi i\hbar}{L^2}\int_0^L sin\left(\frac{2\pi}{L}x \right)\cos \left(\frac{2\pi}{L}x \right),\\ &= 0. The integral of \(\sin x \cos x\) over symmetric limits is always zero because the particle is moving left as often as right in the box. (d) The expectation value for \(p^2\) requires us to operate a second time on \(\psi_2\) to get \[ \frac{d^2\psi_2}{dx^2} = \frac{2\pi}{L}\sqrt{\frac{2}{L}} \frac{d}{dx}\left[\cos \left(\frac{2\pi}{L}x \right)\right] = -\frac{4\pi^2}{L^2}\sqrt{\frac{2}{L}} \sin \left(\frac{2\pi}{L}x \right). \] Applying this result to the integral for \(\langle p^2 \rangle\) gives, \[\begin{align*} \langle p^2\rangle &= -\hbar^2 \int_0^L \psi_2^* \frac{d^2\psi_2}{dx^2} dx, \\ &= \frac{8\pi^2 \hbar^2}{L^3}\int_0^L \sin^2 \left(\frac{2\pi}{L}x \right) ,\\ &= \frac{4\pi^2 \hbar^2} {L^2}. \end{align*}\] Using \(\langle p^2\rangle\), we can calculate the energy of the first excited state as \[ E_2 = \frac{\langle p^2\rangle}{2m} = \frac{4\pi^2 \hbar^2}{2mL^2}, \] which is correct because \(V=0\). from sympy import symbols, pi, simplify, integrate, diff, sin, var i, hbar, L = symbols('i, hbar L') psi = sin(2*pi*x/L) #find integral for <x> exp_x = simplify(integrate((2/L)*x*psi**2,(x,0,L))) print("The expectation value for x is",exp_x) #find integral for <x^2> exp_x2 = simplify(integrate((2/L)*(x*psi)**2,(x,0,L))) print("The expectation value for x**2 is", exp_x2, '= [1/3 - 1/(8*pi**2)]*L**2 = %1.2f L**2' % (1/3 - 1/(8*pi**2))) #find integral for <p> exp_p = simplify(integrate((2/L)*psi*diff(i*hbar*psi,x),(x,0,L))) print("The expectation value for p is", exp_p) #find integral for <p^2> exp_p2 = simplify(integrate((2/L)*psi*diff(diff(-hbar**2*psi,x),x),(x,0,L))) print("The expectation value for p**2 is", exp_p2.args[0][0]) The expectation value for x is L/2 The expectation value for x**2 is -L**2/(8*pi**2) + L**2/3 = [1/3 - 1/(8*pi**2)]*L**2 = 0.32 L**2 The expectation value for p is 0 The expectation value for p**2 is 4*pi**2*hbar**2/L**2 6.4. Finite Square-Well Potential# An infinite potential well is not very realistic. The finite-well potential is more realistic and similar to the infinite one. Consider a potential that is zero between \(x=0\) and \(x=L\), but equal to a constant \(V_o\) everywhere else. This is written mathematically as \[ V(x) = \begin{cases} V_o \quad & x\leq -L, \quad &\text{region I} \\ 0 \quad & -L < x < L, \quad &\text{region II} \\ V_o \quad & x\geq L. \quad &\text{region III} \\ \end{cases} \] First, let’s examine a particle of energy \(E < V_o\) that is classically bound inside the well. Using Eqn. (6.7), we have \[ -\frac{\hbar^2}{2m} \frac{d^2\psi}{dx^2} = \left[E-V_o \right]\psi. \quad \text{regions I, III} \] Let \(\alpha^2 = 2m(V_o-E)/\hbar^2\) (a positive constant), and we find \[ \frac{d^2\psi}{dx^2} = \alpha^2 \psi. \] The solution to this differential equation is known and can be written as a combination of exponential functions (\(e^{\alpha x}\) and \(e^{-\alpha x}\)), or \[ \psi = Ae^{\alpha x} + Be^{-\alpha x}. \] A valid wave function needs to be finite everywhere, where we can use this constraint to determine the form of the wave function \(\psi\) within region I and region III. Region I includes all values of \(x\leq -L\), where we can try to evaluate what conditions are necessary for \(\psi\) to be finite using a limit, \[ \lim_{x\rightarrow -\infty} \psi = Ae^{-\infty} + Be^{\infty}, \] which requires that \(B = 0\). Similarly, we can use a limit to find that \(A=0\) for \(x\geq L\). This allows us to define the wave functions in each region as \[ \psi_I(x) &= Ae^{\alpha x}, \quad \text{region I},\ x\leq -l \\ \psi_{III}(x) &= Be^{-\alpha x}. \quad \text{region III},\ x\geq L \] Inside the square well, the potential is zero and the Schrödinger equation becomes \[ \frac{d^2\psi}{dx^2} = -k^2\psi, \] where \(k=\sqrt{(2mE)/\hbar^2}\). This differential equation has a solution, which is a combination of sinusoidal functions: \[ \psi_{II}(x) = C\cos(kx) + D\sin(kx). \quad \text{region II},\ -L<x<L \] To determine a valid wave function, it must be continuous at the boundary (e.g., \((\psi_I = \psi_{II})\) and \((d\psi_I/dx = d\psi_{II}/dx)\) at \(x=-L\)). By choosing symmetric boundaries and noting that the potential is an even function (i.e., \(V_ox^0\)), we can simplify \(\psi_{II}\) to include only the even portion (i.e., \(D=0\)). As a result, we find the following conditions \[ C\cos(kL) &= A e^{-\alpha L}, \\ kC\sin(kL) &= -\alpha A e^{-\alpha L}, \\ C\cos(kL) &= Be^{-\alpha L}, \\ -kC\sin(kL) &= -\alpha B e^{-\alpha L}. \] We immediately find that \(B=A\), and we can divide the last two equations to eliminate \(C\) to get, \[ k\tan(kL) = \alpha. \] Recall that \(k\) and \(\alpha\) are both functions of energy \(E\). To solve for \(E\), we perform a variable transformation \(z \equiv kL\) and \(z_o \equiv \frac{L}{\hbar}\sqrt{2mV_o}\). Using the definitions of \(k\) and \(\alpha\), we find that \[ k^2 + \alpha^2 = \frac{2mE}{\hbar^2} + \frac{2m(V_o-E)}{\hbar^2} = \frac{2mV_o}{\hbar^2}. \] Combining the factors of \(z\) produces, \[ \sqrt{z_o^2 - z^2} = \sqrt{\frac{2mV_o}{\hbar^2}L^2 - \frac{2mE}{\hbar^2}L^2} = \sqrt{\frac{2m(V_o-E)}{\hbar^2}}L = \alpha L. \] Completing the transformation, we get \[\begin{split}kL\tan(kL) &= \alpha L, \\ z\tan(z) &= \sqrt{z_o^2 - z^2},\\ \tan(z) &= \sqrt{\left(\frac{z_o}{z}\right)^2 -1}.\end{split}\] We are left with a transcendental equation that must be solved numerically, where the equation depends on \(z_o\) (i.e., the size of the well) and \(z\) (i.e., a measure of the energy because \(k\ propto \sqrt{E}\)). The wave functions can be determined through normalization (more details here). Show code cell content Hide code cell content import numpy as np import matplotlib.pyplot as plt from scipy.optimize import newton from myst_nb import glue def fsw(z): #Calculate the root for the finite square well (symmetric case) return np.tan(z) - np.sqrt((z_o/z)**2-1) def fsw_prime(z): #Calculate the derivative of fsw return 1./np.cos(z)**2 + z_o/np.sqrt((z_o/z)**2-1)/z**3 fs = 'medium' lw = 1.5 col = (218/256., 26/256., 50/256.) z = np.arange(0.01,3*np.pi,0.01) z_o = 8 #Example from Griffiths (Fig. 2.13) z1 = np.arange(0,np.pi/2,0.01) fig = plt.figure(figsize=(4,2),dpi=300) ax = fig.add_subplot(111) for i in range(0,3): z1 = np.arange(i*np.pi,(2*i+1)*np.pi/2,0.01) if i == 0: ax.legend(bbox_to_anchor=(-0.05, 1.05, 1., .102),ncol=2,fontsize=fs,frameon=False) roots = [] for i in range(0,3): z_guess = 1.4 + i*np.pi z_root = newton(fsw,z_guess,fsw_prime) #print("The first three roots are approximately %1.2f pi, %1.2f pi, and %1.2f pi." % (roots[0]/np.pi,roots[1]/np.pi,roots[2]/np.pi)) glue("fsw_fig", fig, display=False); Show code cell content Hide code cell content import numpy as np import matplotlib.pyplot as plt from scipy.constants import m_e, hbar, eV from scipy.optimize import minimize from myst_nb import glue def fsw_wave_func(x,A,C,D,E,V_o,m): #Calculate the wave function for the finite square well k = calc_k(E,m) alpha = calc_alpha(E,V_o,m) region_I = np.where(x<=-L)[0] region_II = np.where(np.logical_and(-L<x,x<L))[0] region_III = np.where(x>=L) psi = np.zeros(len(x)) psi[region_I] = A*np.exp(alpha*x[region_I]) psi[region_II] = C*np.cos(k*x[region_II])+D*np.sin(k*x[region_II]) psi[region_III] = A*np.exp(-alpha*x[region_III]) return psi def calc_alpha(E,V_o,m): #calculate alpha from E and V_o return np.sqrt(2*m*np.abs(E-V_o)/hbar**2) def calc_k(E,m): #calculate k from E return np.sqrt(2*m*E/hbar**2) def calc_En(n,m,V_o): #calculate E_n for a wide deep well #return n**2*np.pi**2*hbar**2/(2*m*(2*L)**2) - V_o z_n = [0.44*np.pi,1.33*np.pi,2.17*np.pi] return z_n[n]**2*hbar**2/(2*m*L**2) def calc_coeff(E,V_o,m): #calculate the coefficient A_n k = calc_k(E,m) alpha = calc_alpha(E,V_o,m) term_1 = (np.exp(2*alpha*L)-np.exp(-2*alpha*L))/(2*alpha) term_2 = (L + np.sin(2*k*L)/(2*L))*np.exp(-2*alpha*L)/np.cos(k*L)**2 A_n = np.sqrt(1./(term_1+term_2)) C_n = A_n*np.exp(-alpha*L)/np.cos(k*L) return A_n, C_n lw = 2 col = (218/256., 26/256., 50/256.) L = 5.29e-11 #half-width of well (in m) V_pot = (8*hbar/L)**2/(2*m_e) #z_o = 8 converted into V_o E_n = [] for i in range(1,4): fig = plt.figure(figsize=(4,4),dpi=150) ax = fig.add_subplot(111) x_rng = np.arange(-2*L,2*L,L/40.) for i in range(0,3): A_n,C_n = calc_coeff(E_n[i],V_pot,m_e) psi = fsw_wave_func(x_rng,A_n,C_n,0,E_n[i],V_pot,m_e)/1e-5 ax.set_xlabel("$x$ (L)") glue("fsw_func_fig", fig, display=False); The application of the boundary conditions leads to quantized energy values \(E_n\) and associated wave functions \(\psi_n\). Remarkably, the particle has a finite probability of being outside the square well (see Fig. 6.3). Notice that the wave functions joint smoothly at the edges of the well and approach zero exponentially outside the well. The particle existing outside the well is prohibited classically, but occurs in quantum mechanics. Because of the exponential decrease of the the wave functions (\(\psi_I\) and \(\psi_{III}\)), the probability of the particle penetrating a distance greater than \(\delta x \approx 1/\alpha\) decreases quickly, where \[ \delta x \approx \frac{1}{\alpha} = \frac{\hbar}{\sqrt{2m(V_o-E)}} \] is called the penetration depth. The fraction of particles that successfully tunnel through the outer walls of the potential well is exceedingly small, but the results have important applications. 6.5. Three-Dimensional Infinite-Potential Well# Like the one-dimensional infinite-potential well, the three-dimensional potential well is expected to have time-independent solutions that can be determined from the time-independent Schrödinger equation. The wave function must depend on all three spatial coordinates, \(\psi =\psi(x,y,z)\). Beginning with the conservation of energy, we multiply by the wave function to get \[ \frac{p^2}{2m}\psi + V\psi = E\psi. \] Then, we use the momentum operator for each dimension because \(\hat{p}^2 = \hat{p}_x^2 + \hat{p}_y^2 + \hat{p}_z^2\). Individually we have \[\begin{align*} \hat{p}_x \psi &= -i\hbar \frac{\partial \psi}{\partial x}, \\ \hat{p}_y \psi &= -i\hbar \frac{\partial \psi}{\partial y}, \\ \hat{p}_z \psi &= -i\hbar \frac{\partial \psi}{\partial z}. \\ \end{align*}\] The application of the \(\hat{p}^2\) operator gives \[-\frac{\hbar^2}{2m}\left(\frac{\partial^2 \psi}{\partial x^2} + \frac{\partial^2 \psi}{\partial y^2}+ \frac{\partial^2 \psi}{\partial z^2} \right) + V\psi = E\psi,\] which defines the time-independent Schrödinger equation in three dimensions. The expression in parentheses is the Laplacian operator, which is usually written using the shorthand notation as \[ \nabla^2 = \frac{\partial^2 }{\partial x^2} + \frac{\partial^2 }{\partial y^2} + \frac{\partial^2 }{\partial z^2}, \] and we can write Eqn. (6.35) as \[ -\frac{\hbar^2}{2m}\nabla^2 \psi + v\psi = E\psi. \] Consider a free particle inside a box with lengths \(L_1\), \(L_2\), and \(L_3\) along the \(x\), \(y\), and \(z\) axes, respectively. The particle is constrained to be inside the box. (a) Find the wave functions and energies. (b) Find the ground-state wave function for a cube. (c) Find the energies of the ground and first excited state for a cube of sides \(L\). (a) Similar to the one-dimensional infinite square well, it is reasonable to try a sinusoidal wave function for each of the dimensions. As a result, we obtain \[ \psi(x,y,z) = A\sin(k_1x)\sin(k_2y)\sin(k_3z), \] which has a normalization constant \(A\) and the quantities \(k_i\) (\(k_1\), \(k_2\), and \(k_3\)) are determined by applying the appropriate boundary conditions. The condition that \(\psi = 0\) at \(x=L_1\) requires that \(k_1L_1 = n_1\pi\). Generalizing this constraint and solving for \(k_i\), we get \[ k_1 = \frac{n_1\pi}{L_1}, \quad k_2 = \frac{n_2\pi}{L_2}, \quad k_3 = \frac{n_3\pi}{L_3}, \] using the integers \(n_1\), \(n_2\), and \(n_3\) for quantum numbers in each of the dimensions. Now, we need to take the appropriate derivatives of \(\psi\) with respect to \(x\) (the derivatives in \(y\) and \(z\) will be similar). We have \[\begin{align*} \frac{\partial \psi}{\partial x} &= \frac{\partial}{\partial x} \left[A\sin(k_1x)\sin(k_2y)\sin(k_3z) \right], \\ &= k_1A\cos(k_1x)\sin(k_2y)\sin(k_3z), \\ \frac{\partial^2 \psi}{\ partial x^2} &= \frac{\partial}{\partial x} \left[ k_1A\cos(k_1x)\sin(k_2y)\sin(k_3z) \right], \\ &= -k_1^2A \sin(k_1x)\sin(k_2y)\sin(k_3z) = -k_1^2 \psi. \end{align*}\] Equation (6.35) becomes \[ \frac{\hbar^2}{2m}\left(k_1^2 + k_2^2 + k_3^2\right)\psi = E\psi. \] The \(k_i\) represent a wave number for each dimension, where we can generalize the energies of the one-dimensional case to \[ E_n = \frac{\pi^2\hbar^2}{2m}\left(\frac{n_1^2}{L_1^2} + \frac{n_2^2}{L_2^2} + \frac{n_3^2}{L_3^2} \right), \] where the allowed energies depend on the three quantum numbers. The wave function can also be written in terms of the quantum numbers as \[ \psi(x,y,z) = A\sin\left(\frac{n_1\pi x}{L_1}\right) \sin\left(\frac{n_2\pi y}{L_2}\right) \sin\left(\frac{n_3\pi z}{L_3}\right) \] (b) For a cube \(L_1 = L_2 = L_3 = L\) and the ground state is given by \(n_1 = n_2 = n_3 = 1\). The ground-state wave function is simply \[ \psi(x,y,z) = A\sin\left(\frac{\pi x}{L}\right) \sin\left(\frac{\pi y}{L}\right) \sin\left(\frac{\pi z}{L}\right). \] (c) For a cube the energies are given as \[ E_n = \frac{\pi^2\hbar^2}{2mL^2}\left(n_1^2 + n_2^2 + n_3^2\right), \] and the ground state energy is \(E_{111} = \frac{3\pi^2\hbar^2}{2mL^2}\). The first excited state is when one quantum number is increased by one so that \[\begin{align*} E_{211} &= \frac{\pi^2\hbar^2}{2mL^2}\left(2^2 + 1^2 + 1^2\right) = \frac{6\pi^2\hbar^2}{2mL^2} = \frac{3\pi^2\hbar^2}{mL^2}, \\ &= E_{121} = E_{112}. \end{align*}\] There are three possible combinations for the first excited state depending on which dimension of the cube is excited. A given state is degenerate when there is more than one wave function for a given energy (see Exercise 6.7). Degeneracies are often the result of symmetries. For a cube, there is a degeneracy, but it is removed if the sides have different lengths because there would be different energies. Degeneracy also occurs in classical physics where orbits with different eccentricities may have the same energy. A perturbation of the potential energy can remove a degeneracy. Energy levels can split (thereby removing a degeneracy) by applying an external magnetic (Zeeman effect) or electric (Stark effect) field. 6.6. Simple Harmonic Oscillator# Simple harmonic oscillators (SHOs) commonly occur in nature and are one of the first physical systems studied within introductory physics. Consider a spring with a spring constant \(\kappa\), which is in equilibrium at \(x=x_o\). There is a restoring force described through Hooke’s law and a potential energy, which is given as \[\begin{align*} F &= -\kappa(x-x_o), \\ V &= \kappa (x-x_o)^2. \end{align*}\] The application of the restoring force introduces simple harmonic motion (SHM). Diatomic molecules or a system of atoms in a solid lattice can be approximated by SHM in a general way. Within a lattice, the force on the atoms depends on the distance \(x\) from some equilibrium position and the potential \(V(x)\) can be represented by a Taylor series as \[ V(x) = V_o + V_1(x-x_o) + \frac{V_2}{2}\left(x-x_o\right)^2 + \cdots, \] where each term of the series has a constant \(V_i\). For \(x-x_o \approx 0\) (i.e., a small perturbation), the higher terms are negligible. The minimum of the potential occurs at the equilibrium position, which means that \(dV/dx = 0\) and the lowest surviving term of the potential is \[ V(x) = \frac{V_2}{2}\left(x-x_o\right)^2. \] To study the quantum description of SHM, we insert a potential \(V = kx^2/2\) into Eqn. (6.7) to get \[ \frac{d^2 \psi}{dx^2} = -\frac{2m}{\hbar^2}\left(E - \frac{1}{2}\kappa x^2 \right)\psi = \left(\frac{m\kappa}{\hbar^2}x^2 - \frac{2mE}{\hbar^2}\right) \psi. \] Through a pair of variable transformations, \[ \alpha^2 &= \frac{m\kappa}{\hbar^2}, \\ \beta = \frac{2mE}{\hbar^2}, \] we get \[ \frac{d^2 \psi}{dx^2} = (\alpha^2 x^2 - \beta)\psi. \] The particle is confined to the potential well, and thus, has zero probability of being at \(x= \pm \infty\), which means \[ \lim_{x\rightarrow \pm \infty} \psi(x) = 0. \] Additionally, the lowest energy state cannot be \(E=0\) because if \(x=0\) and \(v=0\), then the kinetic energy is also zero (i.e., \(p=0\)). The uncertainty principle prevents us from knowing \(x\) and \(p\) exactly, which means at \(x=0\) that \(p>0\) (i.e., \(E>0\)). The lowest energy state is defined as some non-zero potential or \(E_o = V_o\). If we represent the energy by a height above the bottom of the well (i.e., some inherent potential energy), then there will be a symmetric distance \(\pm a\) from the equilibrium position to denote the lowest energy (i.e., \(E_o = \kappa a^2/2\)). The walls of the potential are similar to the finite potential well, where there is some small probability of the wave function existing outside the potential. But the wave function decreases rapidly on the other side of the barrier. The minimum energy \(E_o\) is also called the zero-point energy (see Fig. 6.4). Show code cell content Hide code cell content import numpy as np import matplotlib.pyplot as plt from scipy.special import hermite from scipy.constants import m_e from myst_nb import glue from matplotlib import rcParams rcParams.update({'font.size': 16}) rcParams.update({'mathtext.fontset': 'cm'}) def SHO_potential(x): #calculate the SHO potential return 0.5*kappa*x**2 def SHO_wave_func(n,x): #calculate the SHO wave function A = (alpha/np.pi)**0.25 if n == 0: return A*np.exp(-alpha*x**2/2) alpha = 1. #Assume alpha = 1 --> kappa = hbar^2/m kappa = hbar**2*alpha**2/m_e omega = np.sqrt(kappa/m_e) a = np.sqrt(hbar*omega/kappa) #width of the potential at E_o fs = 'medium' lw = 2 col = (218/256., 26/256., 50/256.) fig = plt.figure(figsize=(8,4),dpi=150) ax1 = fig.add_subplot(121) ax2 = fig.add_subplot(122) x = np.arange(-3*a,3*a,a/40.) ax1.set_xlabel("Position $x$ ($a$)",fontsize=fs) ax2.set_xlabel("Position $x$ ($a$)",fontsize=fs) glue("SHO_fig", fig, display=False); Estimate the minimum energy of the simple harmonic oscillator (SHO) allowed by the uncertainty principle. From classical mechanics, the average kinetic energy is equal to the average potential energy (i.e., \(K_{\rm av}=V_{\rm av}\); virial theorem. As a result, the average potential and kinetic energy are equal to half of the total energy. We can write this mathematically as \[\begin{align*} E_{\rm tot} &= K_{\rm av} + V_{\rm av} = 2 K_{\rm av},\\ K_{\rm av} &= \frac{1}{2}E_{\rm tot}. \end{align*}\] Applying this to the SHO gives \[ K_{\rm av} = \frac{(p^2)_{\rm av}}{2m} = \frac{\kappa}{2}(x^2)_{\rm av} = \frac{E_{\rm tot}}{2}. \] The mean value of \(x=0\), but the mean value of \((x^2)_{\rm av} \) is the mean square deviation \((\Delta x)^2\) (see the uncertainty principle in general). A similar reasoning can be applied to get \((p^2)_{\rm av} = (\Delta p)^2\). Then, we must have \[\begin{align*} E = \kappa (\Delta x)^2 &= \frac{(\Delta p)^2}{2m}, \\ \Delta x &= \frac{\Delta p}{\sqrt{m\kappa}}. \end{align*}\] From Heisenberg’s uncertainty principle, we have \(\Delta p \Delta x \geq \hbar/2\). Then the minimum value of \(\Delta x = \hbar/(2\Delta p)\). Combining with the previous result, we have \[ (\Delta x)^2 = \Delta x \cdot \Delta x = \frac{\Delta p}{\sqrt{m\kappa}} \frac{\hbar}{2\Delta p} = \frac{\hbar}{2\sqrt{m\kappa}}.\] Then, the lowest energy is \[ E_o = \kappa\frac{\hbar}{2\sqrt{m\kappa}} = \frac{\hbar}{2}\sqrt{\frac{\kappa}{m}} = \frac{1}{2}\hbar \omega. \] The zero point energy for the SHO is \(\frac{1}{2}\hbar \omega\). Deriving the wave function solutions for the SHO is beyond the our scope. They are written in terms of Hermite polynomial functions \(H_n(x)\) that have an order \(n\) and \(n\) is a positive integer. The wave function general solution and first four \(\psi_n\) are \[\begin{split}\psi_n &= H_n(x)e^{-\alpha x^2/2}, \\ \psi_0 &= \left(\frac{\alpha}{\pi}\right)^{1/4}e^{-\alpha x^2/2}, \\ \psi_1 &= \sqrt{2\alpha}x \psi_o, \\ \psi_2 &= \frac{1}{\sqrt{2}}\left(2\ alpha x^2 - 1\right)\psi_o, \\ \psi_3 &= \sqrt{\frac{\alpha}{3}}\left(2\alpha x^3 - 3x\right)\psi_o.\end{split}\] Show code cell content Hide code cell content import numpy as np import matplotlib.pyplot as plt from scipy.special import hermite from scipy.constants import m_e from myst_nb import glue from matplotlib import rcParams rcParams.update({'font.size': 16}) rcParams.update({'mathtext.fontset': 'cm'}) def SHO_potential(x): #calculate the SHO potential return 0.5*kappa*x**2 def SHO_wave_func(n,x): #calculate the SHO wave function psi_o = (alpha/np.pi)**0.25*np.exp(-alpha*x**2/2) if n == 0: return psi_o elif n == 1: return np.sqrt(2*alpha)*x*psi_o elif n == 2: return (2*alpha*x**2-1)/np.sqrt(2)*psi_o elif n == 3: return np.sqrt(alpha)*x*(2*alpha*x**2-3)/np.sqrt(3)*psi_o alpha = 1. #Assume alpha = 10 --> kappa = hbar^2/m kappa = hbar**2*alpha**2/m_e omega = np.sqrt(kappa/m_e) a = np.sqrt(hbar*omega/kappa) #width of the potential at E_o fs = 'medium' lw = 2 col = (218/256., 26/256., 50/256.) fig = plt.figure(figsize=(8,8),dpi=150) ax11 = fig.add_subplot(421) ax12 = fig.add_subplot(422) ax21 = fig.add_subplot(423) ax22 = fig.add_subplot(424) ax31 = fig.add_subplot(425) ax32 = fig.add_subplot(426) ax41 = fig.add_subplot(427) ax42 = fig.add_subplot(428) ax_psi = [ax41,ax31,ax21,ax11] ax_prob = [ax42,ax32,ax22,ax12] x = np.arange(-5*a,5*a,a/40.) for i in range(0,4): ax_psi[i].plot(x/a,SHO_wave_func(i,x),'-',color=col,lw=lw,label='$\psi_%i$' % i) ax_prob[i].plot(x/a,SHO_wave_func(i,x)**2,'-',color=col,lw=lw,label='$|\psi_%i|^2$' % i) x_a = np.sqrt(2*i+1) if i == 0: if i > 0: ax41.set_xlabel("Position $x$ ($a$)",fontsize=fs) ax42.set_xlabel("Position $x$ ($a$)",fontsize=fs) glue("SHO_waves", fig, display=False); In contrast to the particle-in-a-box, the oscillatory behavior is due to the polynomial, which dominates at small \(x\), and the damping occurs due to the Gaussian function, which dominates at large \(x\). Then energy levels are given by \[ E_n = \left(n+\frac{1}{2}\right)\hbar \omega = \left(n+\frac{1}{2}\right)\hbar \sqrt{\frac{\kappa}{m}}. \] The zero-point energy E_o is \(\hbar\omega/2\), where this result is precisely the value found in Exercise 6.8 by using the uncertainty principle. The uncertainty principle is responsible for the minimum energy of the SHO, where the minimum value of the uncertainty principle is can be determined using Gaussian wave packets (see Sect. 5.6). The minimum energy \(E_o\) allowed by the uncertainty principle for the ground state of the SHO is sometimes called the Heisenbergy limit. Show code cell content Hide code cell content import numpy as np import matplotlib.pyplot as plt from scipy.special import hermite from scipy.constants import m_e from myst_nb import glue from matplotlib import rcParams rcParams.update({'font.size': 16}) rcParams.update({'mathtext.fontset': 'cm'}) def SHO_potential(x): #calculate the SHO potential return 0.5*kappa*x**2 alpha = 1. #Assume alpha = 10 --> kappa = hbar^2/m kappa = hbar**2*alpha**2/m_e omega = np.sqrt(kappa/m_e) a = np.sqrt(hbar*omega/kappa) #width of the potential at E_o fs = 'medium' lw = 2 col = (218/256., 26/256., 50/256.) E_col = ['k','b','c','violet'] fig = plt.figure(figsize=(4,4),dpi=150) ax = fig.add_subplot(111) x = np.arange(-5*a,5*a,a/40.) for i in range(0,4): E_n = (i + 0.5)*hbar*omega x_n = 0.5 - np.sqrt(2*i+1)/10. x_p = 0.5 + np.sqrt(2*i+1)/10. ax.axhline(E_n/(hbar*omega),x_n,x_p,color=E_col[i],linestyle='--',lw=1.5,label='$E_%i$' % i) ax.set_xlabel("Position $x$ ($a$)",fontsize=fs) glue("SHO_energy", fig, display=False); Classically, the probability of finding a mass on a spring is greatest at the ends of the motion and smallest at the center. The quantum theory probability density for the lowest energy state \((\ psi_o^2)\) is contrary to the classical one, where the largest probability is for the particle to be at the center. From the correspondence principle, this distinction disappears for large \(n\) and the average value of the quantum probability approaches the classical result. Useful Integrals \[\begin{align*} \int_0^\infty e^{-ax^2}dx = \frac{1}{2}\sqrt{\frac{\pi}{a}} \end{align*}\] \[\begin{align*} \int_0^\infty x^m e^{-ax^2}dx = \begin{cases} \frac{(m-1)(m-3)\ldots 3\cdot1}{2^{(m+2)/2} a^{m/2}} \sqrt{\frac{\pi}{a}}, \quad & \text{for even }m \\[5pt] \frac{[(m-1)/2]!}{2a^{(m+2) /2}}. \quad & \text{for odd }m \end{cases} \end{align*}\] Normalize the ground state wave function \(\psi_o\) for the simple harmonic oscillator (SHO). Find the expectation values \(\langle x \rangle\) and \(\langle x^2 \rangle\). Starting from the general solution for the SHO, we have \(H_o = 1\) for the ground state and \[ \psi_o(x) = Ae^{-\alpha x^2/2}. \] The constant \(A\) is determined through normalization, where \[\begin{align*} \int_{-\infty}^\infty \psi^*_o \psi_o\ dx &= 1, \\ A^2 \int_{-\infty}^\infty e^{-\alpha x^2}\ dx &= 1, \\ 2A^2 \int_0^\infty e^{-\alpha x^2}\ dx &= 1. \end{align*}\] The above integral is called a Gaussian integral, which can be solved with the help of an integral table. Substituting the result from the integral table, we find \[\begin{align*} 1 &= 2A^2\left( \frac{1}{2}\sqrt{\frac{\pi}{\alpha}}\right), \\ A^2 &= \sqrt{\frac{\alpha}{\pi}}, \\ A &= \left(\frac{\alpha}{\pi} \right)^{1/4}. \end{align*}\] For the ground state wave function of the SHO, we have \[ \psi_o(x) = \left(\frac{\alpha}{\pi} \right)^{1/4} e^{-\alpha x^2/2}. \] The expectation value of \(x\) is found using \[\begin{align*} \langle x \rangle &= \int_{-\infty}^\infty \psi^*_o x \psi_o\ dx, \\ &= \sqrt{\frac{\alpha}{\pi}} \int_{-\infty}^\infty x e^{-\alpha x^2} dx, \\ &= \sqrt{\frac{\alpha}{\pi}} \left(\ int_0^\infty x e^{-\alpha x^2} dx + \int_{-\infty}^0 x e^{-\alpha x^2} dx \right), \\ &= \sqrt{\frac{\alpha}{\pi}} \left(\int_0^\infty x e^{-\alpha x^2} dx - \int_0^{\infty} x e^{-\alpha x^2} dx \ right) = 0. \end{align*}\] The value of \(\langle x \rangle\) must be zero because we are integrating an odd function of \(x\) over symmetric limits. The expectation value of \(x^2\) is found using \[\begin{align*} \langle x^2 \rangle &= \int_{-\infty}^\infty \psi^*_o x^2 \psi_o\ dx, \\ &= \sqrt{\frac{\alpha}{\pi}} \int_{-\infty}^\infty x^2 e^{-\alpha x^2} dx, &= 2\sqrt{\frac{\alpha}{\pi}} \ int_0^\infty x^2 e^{-\alpha x^2} dx, \\ &= 2\sqrt{\frac{\alpha}{\pi}} \left(\frac{1}{4\alpha} \sqrt{\frac{\pi}{\alpha}} \right), \\ &= \frac{1}{2\alpha}. \end{align*}\] Inserting the value of \(\alpha\), we find \[ \langle x^2 \rangle = \frac{\hbar}{2\sqrt{m\kappa}} = \frac{\hbar}{2m\omega}, \] which is what we argued in Exercise 6.8 using the uncertainty principle. 6.7. Barriers and Tunneling# For the finite square-well potential (see Sect. 6.3), there was a finite probability for the wave function to enter the walls of the potential, but it must decay quickly. A potential barrier describes a similar situation with the boundaries inverted. The potential is instead \[ V(x) = \begin{cases} 0 \quad & x\leq -L, \quad &\text{region I} \\ V_o \quad & -L < x < L, \quad &\text{region II} \\ 0 \quad & x\geq L. \quad &\text{region III} \end{cases} \] A particle is affected by the barrier potential \(V_o\) depending on its energy \(E\). 6.7.1. Potential Barrier with \(E>V_o\)# Classically we expect that a particle with \(E>V_o\) will pass the barrier, but move with a reduced velocity in the region of \(V_o\). This can be shown through conservation of energy as \[\begin{align*} E &= K + V_o, \\ K &= E - V_o, \\ v &= \sqrt{\frac{2}{m}(E-V_o)}. \end{align*}\] The particle velocity must decrease for a positive \(V_o\) and increase for a negative \(V_o\). A classical analogy is where a ball is through horizontally over a mountain, where it experiences a pull due to gravity (ignoring air resistance). The pull increases when the ball is directly over the mountain because there is slightly increased mass and this extra force reduces the ball’s speed. Conversely, a ball traveling over a valley experiences slightly less gravity. This is not noticeable for most objects, but NASA’s GRACE has measured the perturbations of Earth’s gravity field using two satellites. According to quantum mechanics, the particle will behave differently because of its wavelike character, where the will be differences in the wave number \(k\) in the three regions. The wave numbers \[ k_I &= k_{III} = \frac{\sqrt{2mE}}{\hbar}, \quad &\text{where }V = 0 \\ k_{II} &= \frac{\sqrt{2m(E-V_o)}}{\hbar}. \quad &\text{where }V = V_o \] Another analogy comes from optics, where light bends (i.e., changes speed) after it penetrates another medium. Actually, some of the light is reflected and the rest is transmitted into the medium. The wave function will exhibit a similar behavior, where there will be an (1) incident wave, (2) reflected wave, and (3) a transmitted wave. Classical mechanics allows no reflection if \(E>V_o\) and total reflection for \(E<V_o\). In contrast, quantum mechanics predicts some reflection for \(E\gg V_o\) and some transmission for \(E\ll V_o The time-independent Schrödinger equation for the three regions are as follows: \[ \frac{d^2\psi_I}{dx^2} + \frac{2m}{\hbar^2}E\psi_I &= 0, \quad & \text{Region I } \\ \frac{d^2\psi_{II}}{dx^2} + \frac{2m}{\hbar^2}(E-V_o)\psi_{II} &= 0, \quad & \text{Region II } \\ \frac{d^2\ psi_{III}}{dx^2} + \frac{2m}{\hbar^2}E\psi_{III} &= 0. \quad & \text{Region III } \] The corresponding wave functions for these equations are: \[ \psi_I &= Ae^{ik_Ix} + Be^{-ik_Ix}, \quad & \text{Region I } \\ \psi_{II} &= Ce^{ik_{II}x} + De^{-ik_{II}x}, \quad & \text{Region II } \\ \psi_{III} &= Fe^{ik_Ix} + Ge^{-ik_Ix}. \quad & \text {Region III } \] We assume that we have an incident particle coming from the left moving along the \(+x\) direction. In this case, the term \(Ae^{ik_Ix}\) represents the incident particle and the term \(Be^{-ik_Ix}\) represents a reflected particle moving in the \(-x\) direction. In region III, there are no reflected particles and thus, \(G=0\). Similar to the finite square-well potential, we try to determine the coefficients using the conditions for a valid wave function: • continuous across a boundary \((\psi_I(-L) = \psi_{II}(-L))\), • spatial derivative must be continuous \((d\psi_I(-L)/dx = d\psi_{II}(-L)/dx)\) across the boundary. We find the following system of equations: \[\begin{split}Ae^{-ik_IL} + Be^{ik_IL} &= Ce^{-ik_{II}L} + De^{ik_{II}L}, \\ k_IAe^{-ik_IL} - k_IBe^{ik_IL} &= k_{II}Ce^{-ik_{II}L} -k_{II}De^{ik_{II}L}, \\ Ce^{ik_{II}L} + De^{-k_{II}L} &= Fe^ {ik_IL}, \\ k_{II}Ce^{ik_{II}L} - k_{II}De^{-ik_{II}L} &= k_IFe^{ik_IL}.\end{split}\] The probability of particles being reflected \(R\) or transmitted \(T\) is determined by the ratio of the appropriate \(\psi^*\psi\), or \[ R &= \frac{|\psi_I(\text{reflected})|^2}{|\psi_I(\text{incident})|^2} = \frac{B^*B}{A^*A}, \\[5pt] T &= \frac{|\psi_{III}(\text{transmitted})|^2}{|\psi_1(\text{incident})|^2} = \frac{F^*F}{A^*A}. Because the particles must be either reflected or transmitted, we must have \(R+T = 1\) (i.e., the total probability must equal unity). The values of \(R\) and \(T\) are found by solving for \(C\) and \(D\) in terms of \(F\). Then those results are substituted to find \(A\) and \(B\) in terms of \(F\). This a long process of tedious algebra, where the transmission probability is \[ T = \left[ 1 + \frac{V_o^2\sin^2(k_{II}L)}{4E(E-V_o)}\right]^{-1}. \] Notice when \(k_{II}L = n\pi\), then the second term goes to zero, resulting in a transmission probability equal to 1. Also, it is possible for particles to be reflected at both \(x=-L\) and \(x=L\). Their path difference back toward the \(-x\) direction is \(2L\). For an integral number of wavelengths inside the potential barrier, the incident and reflected wave functions are precisely out of phase and cancel completely. Show code cell content Hide code cell content import numpy as np import matplotlib.pyplot as plt from scipy.constants import hbar, m_e, eV from scipy.integrate import odeint, simpson from scipy.optimize import minimize,fsolve from myst_nb import glue def ksqr(x): k_squared = 0 if x<=-L: V = 0 elif -L<x<L: V = V_o elif x>= L: V = 0 return 2*m_e*np.abs(E_o-V)/hbar**2 def psi_func(x,coeff): #calculate the wave function psi_x = 0 k_x = np.sqrt(ksqr(x)) if x<=-L: psi_x = coeff[0]*np.cos(k_x*x) + coeff[1]*np.sin(k_x*x) elif -L<x<L: if E_o>V_o: psi_x = coeff[2]*np.cos(k_x*x) + coeff[3]*np.sin(k_x*x) psi_x = coeff[2]*np.exp(k_x*x) + coeff[3]*np.exp(-k_x*x) elif x>=L: psi_x = coeff[4]*np.cos(k_x*x) + coeff[5]*np.sin(k_x*x) return psi_x def boundary(x): k_I = np.sqrt(ksqr(-1.5*L)) k_II = np.sqrt(ksqr(0)) x_coeff = [1,1,x[0],x[1],1,1] psi_I = psi_func(-L,psi_coeff) psi_IIa = psi_func(-L+0.01*L,x_coeff) psi_IIb = psi_func(L-0.01*L,x_coeff) psi_III = psi_func(L,psi_coeff) return [psi_IIa - psi_I, psi_IIb-psi_III] def advance_psi(x,xm,xp,dx,psi_coeff): #advance the wave function using Numerov's method k_x = np.sqrt(ksqr(x)) k_xm = np.sqrt(ksqr(xm)) k_xp = np.sqrt(ksqr(xp)) psi_x = psi_func(x,psi_coeff) psi_xm = psi_func(xm,psi_coeff) psi_next = (psi_x*(2+5*dx**2*k_x**2/6.) - psi_xm*(1-dx**2*k_xm**2))/(1-dx**2*k_xp**2/12.) return psi_next L = 1e-9 E_o = 20*eV V_o = 10*eV ymax = (E_o+V_o)/eV+5 k_I = np.sqrt(2*m_e*E_o)/hbar x_o = -4*L delta_x = L/1e2 x_rng = np.arange(-4*L,4*L,delta_x) V_rng = np.zeros(len(x_rng)) x_bound = [300,500] V_rng[x_bound[0]:x_bound[1]] = V_o/eV*np.ones(x_bound[1]-x_bound[0]) psi = np.zeros(len(x_rng)) psi_coeff = np.ones(6) rng = (E_o + V_o)/eV bounds = [(-rng,rng),(-rng,rng)] opt = fsolve(boundary,[1,1]) psi_coeff[2:4] = opt #psi_coeff_left[4:] = -psi_coeff_left[:2] psi[0] = psi_func(x_o,psi_coeff) psi[-1] = psi_func(-x_o,psi_coeff) full_idx = len(x_rng) for i in range(1,full_idx-1): psi[i] = advance_psi(x_rng[i],x_rng[i-1],x_rng[i+1],delta_x,psi_coeff) fig = plt.figure(figsize=(8,4),dpi=150) ax = fig.add_subplot(111) ax.plot(x_rng/L,psi + E_o/eV,'-',color=col,lw=2) ax.set_xlabel("Position $x$ ($L$)",fontsize=fs) glue("barrier_potential", fig, display=False); 6.7.2. Potential Barrier with \(E<V_o\)# When the energy \(E\) is less than a potential barrier \(V_o\), the particle cannot penetrate the barrier classically because its kinetic energy is negative. The particle is reflected at \(x=-L\) and returns. However, the quantum mechanical result describes a small (but finite) probability that the particle can penetrate the barrier and even emerge on the other side. There are only a few changes to the equations already presented in Sect. 6.7.1. In region II, the wave function becomes \[ \psi_{II}(x) = Ce^{\kappa x} + De^{-\kappa x},\quad \text{where } \kappa = \sqrt{\frac{2m(V_o-E)}{\hbar^2}}. \] The parameter \(\kappa\) is a positive, real number because \(V_o > E\). The application of boundary conditions and a lot of tedious algebra relate the coefficients of the wave functions. The equations for the reflection and transmission probabilities are unchanged, but the results will be modified by replacing \(ik_{II}\rightarrow \kappa\). Tunneling is the effect where the particle penetrates the barrier and emerges on the other side. The transmission probability is \[ T = \left[ 1 + \frac{V_o^2 \sinh^2(kL)}{4E(V_o-E)}\right]^{-1}. \] The only difference from the previous transmission coefficient is the replacement of sine with hyperbolic sine (\(\sinh\)). When \(kL \gg 1\) the transmission probability reduces to \[ T = \frac{16E}{V_o}\left(1 -\frac{E}{V_o}\right)e^{-2\kappa L}. \] Tunneling can be understood using the uncertainty principle, where inside the barrier the wave function \(\psi_{II}\) is dominated by the \(e^{-\kappa x}\) term, and the probability density \(|\psi_ {II}|^2 \approx e^{-2\kappa x}\). Over a small interval \(\Delta x = 1/\kappa\), the probability density of observing the particle has decreased substantially (\(e^{-2} \approx 0.14\)). The uncertainty in the momentum is \(\Delta p \geq \hbar/\Delta x = \hbar \kappa\). The minimum kinetic energy must be \[ K_{\rm min} = \frac{(\Delta p)^2}{2m} = \frac{\pi^2 \kappa^2}{2m} = V_o - E. \] The violation allowed by the uncertainty principle is exactly equal to the negative kinetic energy required. The particle is allowed by quantum mechanics and the uncertainty principle to penetrate into a classically forbidden region. In a particular semiconductor device, electrons that are accelerated through a potential of \(5\ {\rm V}\) attempt to tunnel through a barrier (\(0.8\ {\rm nm}\) wide and \(10\ {V}\) tall). What fraction of the electrons are able to tunnel through the barrier if the potential is zero outside the barrier? The kinetic energy of the electrons is \(5\ {\rm eV}\), which is less than the barrier potential height (\(V_o = 10\ {\rm eV}\)). Thus, we use the equation for the transmission probability with the \ (\sinh\) term and \(\kappa\). The parameter \(\kappa\) is given as \[\begin{align*} \kappa &= \sqrt{\frac{2m_e(V_o-E)}{\hbar^2}}, \\ & = \frac{\sqrt{2(9.109 \times 10^{-31}\ {\rm kg})(5\ {\rm eV})(1.602 \times 10^{-19}\ {\rm J/eV})}}{6.626 \times 10^{-34}\ {\rm J\ cdot s}},\\ & = 1.15 \times 10^{10}\ {\rm m^{-1}}, \end{align*}\] and the value of \(\kappa L \approx 9.2\). The equation for the transmission probability gives \[\begin{align*} T = \left[\frac{1 + (1.602 \times 10^{-18}\ {\rm J})\sinh^2(9.2)}{4(8 \times 10^{-19}\ {\rm J})(8 \times 10^{-19}\ {\rm J})}\right]^{-1} = 4.1 \times 10^{-8} \end{align*}\] The fraction of electrons that can tunnel through the barrier is equal to the transmission probability. from scipy.constants import hbar, m_e, eV def get_kappa(m,V_o,E): #calculate the parameter kappa #m = mass of particle #V_o = height of potential barrier #E = kinetic energy of particle return np.sqrt(2*m*(V_o-E))/hbar def Transmission_coeff(kappa,L,V_o,E): T_inv = 1. + V_o**2*np.sinh(kappa*L)**2/(4*E*(V_o-E)) return 1./T_inv L_bar = 8e-10 #width of potential barrier E_el = 5*eV #kinetic energy of the electrons V_bar = 10*eV #potential barrier height kappa = get_kappa(m_e,V_bar,E_el) print("The parameter kappa is %1.2e m^-1 and kappa*L is %1.2f.\n" % (kappa,kappa*L_bar)) T = Transmission_coeff(kappa,L_bar,V_bar,E_el) print("The fraction of electrons (transmission coefficient) that can penetrate the barrier is %1.1e." % T) The parameter kappa is 1.15e+10 m^-1 and kappa*L is 9.16. The fraction of electrons (transmission coefficient) that can penetrate the barrier is 4.4e-08. 6.7.3. Potential Well# A particle of energy \(E>0\) could pass through a potential well rather than into a potential barrier. This is an example for the above cases where \(V=-V_o\) in the region \(-L< x <L\) and zero elsewhere. Classically the particle would gain energy and accelerate while passing the well region. In quantum mechanics, reflection and transmission may occur, but the wavelength inside the potential well is smaller than outside. When the potential well width is precisely equal to half-integral or integral units of the wavelength, then the reflected waves may be out of phase or in phase with the original wave. Additionally cancellations or resonances may occur. 6.7.4. Alpha-Particle Decay# The phenomenon of tunneling explains the alpha-particle decay of radioactive nuclei. Many nuclei heavier than lead are natural emitters of alpha particles, but their emission rates vary (by a factor of \(10^{13}\)) and their energies are limited to only \(4-8\ {\rm MeV}\). Inside the nucleus, the alpha particle feels the strong, short-range attractive nuclear force (i.e., a potential well) and the repulsive Coulomb force (i.e., a potential barrier). The alpha particle is trapped inside the nucleus and classically, it does not have enough energy to penetrate the Coulomb barrier. Through quantum mechanics, the alpha particle can “tunnel” through the Coulomb potential to escape. The widely varying alpha emission rates can be explained by small changes in the potential barrier (both height and width). 6.8. Homework Problems# Problem 1 Show directly that the trial wave function \(\Psi (x,t) = e^{i(kx-\omega t)}\) satisfies the time-dependent Schrödinger wave equation. Problem 2 Normalize the wave function \(Are^{-r/a}\) from \(r=0\) to \(\infty\), where \(\alpha\) and \(A\) are constants. See Wolfram alpha for help with difficult integrals and/or mathematical functions. Problem 3 A wave function has the value \(A \sin{x}\) for \(0\leq x \leq \pi\), but zero elsewhere. Normalize the wave function and find the probability for the particle’s position: (a) \(0\leq x \leq \pi/4\) and (b) \(0\leq x \leq \pi/2\). Problem 4 Determine the average value of \(\psi^2_n(x)\) inside an infinite square-well potential for \(n = 1,\ 5,\ 10,\ \text{and}\ 100\). Compare these averages with the classical probability of detecting the particle inside the box. Problem 5 Compare the results of the infinite and finite square-well potentials. (a) Are the wavelengths longer or shorter for the finite square well compare with the infinite well? (b) Use physical arguments to decide whether the energies (for a given quantum number \(n\)) are larger or smaller for the finite square well compared to the infinite square well. Problem 6 Write the possible (unnormalized) wave functions for each of the first four excited energy levels for the cubical box. Problem 7 Check to see whether the simple linear combination of sine and cosine functions (\(\psi = A\sin(kx) + B\cos(kx)\)) satisfies the time-independent Schrödinger equation for a free particle (\(V=0\)).
{"url":"https://saturnaxis.github.io/ModernPhysics/Chapter_6/quantum-mechanics-part2.html","timestamp":"2024-11-14T04:13:19Z","content_type":"text/html","content_length":"271226","record_id":"<urn:uuid:9c0bb497-0506-403a-9bb9-0a1adeebe334>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00304.warc.gz"}
You are here Date Issued: String theory imposes slight modifications to Einstein's equations of general relativity. In Green, Schwarz, and Witten's classic text, the authors claim that the gravitational field equations in empty space should contain, in addition to the Ricci tensor, an extra term which is first order in the string constant alpha-prime and proportional to the Riemann curvature tensor squared. They do admit, however, that this simple modification is just schematic. We show that these modified equations do not admit an isotropic solution. We then investigate the consequences of the modified gravitational equations when they are coupled to the dilaton field. Callan, Myers, and Perry have found a static isotropic solution to these equations. We calculate the additional perihelion precession of Mercury, the added deflection of photons by the sun, and the extra gravitational redshift which should be present if these equations hold. We determine that additional effects due to string theory in each of these cases are quite minuscule. Title: MODIFICATION TO EINSTEIN'S FIELD EQUATIONS IMPOSED BY STRING THEORY AND CONSEQUENCES FOR THE CLASSICAL TESTS OF GENERAL RELATIVITY. Frye, Christopher, Author Name(s): Efthimiou, Costas, Committee Chair University of Central Florida, Degree Grantor Type of text Date Issued: 2013 Publisher: University of Central Florida Language(s): English String theory imposes slight modifications to Einstein's equations of general relativity. In Green, Schwarz, and Witten's classic text, the authors claim that the gravitational field equations in empty space should contain, in addition to the Ricci tensor, an extra term which is first order in the string constant alpha-prime and proportional to the Riemann Abstract/ curvature tensor squared. They do admit, however, that this simple modification is just schematic. We show that these modified equations do not admit an isotropic solution. We then Description: investigate the consequences of the modified gravitational equations when they are coupled to the dilaton field. Callan, Myers, and Perry have found a static isotropic solution to these equations. We calculate the additional perihelion precession of Mercury, the added deflection of photons by the sun, and the extra gravitational redshift which should be present if these equations hold. We determine that additional effects due to string theory in each of these cases are quite minuscule. Identifier: CFH0004411 (IID), ucf:45087 (fedora) Note(s): Sciences, Dept. of Physics This record was generated from author submitted information. general relativity string theory Subject(s): classical tests modified gravity Link to This http://purl.flvc.org/ucf/fd/CFH0004411 Restrictions public 2013-04-01 on Access: Host UCF In Collections
{"url":"https://ucf.digital.flvc.org/islandora/object/ucf:45087","timestamp":"2024-11-04T08:02:31Z","content_type":"text/html","content_length":"36267","record_id":"<urn:uuid:790deaaf-a0ce-4792-8874-66e531fb18d3>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00272.warc.gz"}
A Basis for Arbitrary Topological Products of Topological Spaces A Basis for Arbitrary Topological Products of Topological Spaces Recall from the Arbitrary Topological Products of Topological Spaces page that if $\{ X_i \}_{i \in I}$ is an arbitrary collection of topological spaces and $\displaystyle{\prod_{i \in I} X_i}$ is the Cartesian product of these topological spaces whose elements are sequences $(x_i)_{i \in I}$ where $x_i$ is the $i^{\mathrm{th}}$ component of $(x_i)_{i \in I}$, then the product topology $\tau$ on $\displaystyle{\prod_{i \in I} X_i}$ is the initial topology induced by the projection maps $\displaystyle{p_i : \prod_{i \in I} \to X_i}$ and the resulting topological product is the topological space $\left ( \prod_{i \in I} X_i, \tau \right )$. We also noted that a subbasis for the product topology is: \quad \mathcal S = \left \{ U = \prod_{i \in I} U_i : V = p_i^{-1}(U) \: \mathrm{for \: some \: open \: } V \subseteq X_i, \mathrm{and} \: i \in I \right \} In other words, the elements in the subbasis $\mathcal S$ are the subsets $\displaystyle{U \subseteq \prod_{i \in I} X_i}$ whose inverse image under some projection $p_i$ is equal to an open set $V$ in $X_i$. We will now give an explicit form for a basis of the product topology on an arbitrary topological product. Theorem 1: Let $\{ X_i \}_{i \in I}$ be an arbitrary collection of topological spaces and let $\displaystyle{\prod_{i \in I} X_i}$ be the Cartesian product of these spaces. Then a basis for the product topology is $\displaystyle{\mathcal B = \left \{ U = \prod_{i \in I} U_i : U_i \subseteq X_i \: \mathrm{is \: open \: in \:} X_i \: \mathrm{and} U_i = X_i \: \mathrm{for \: all \: but \: finitely \: many} \: i \right \}}$. • Proof: Take $\mathcal S$ as above as a subbasis for the product topology $\tau$ on $\displaystyle{\prod_{i \in I} X_i}$. Then the collection of all finite intersections of elements from $\mathcal S$ form a basis of $\tau$. Consider an arbitrary finite intersection $U$ of elements from $\mathcal S$: \quad U &= \bigcap_{k = 1}^{n} \left ( \prod_{i \in I} U_{i, k} \right ) \\ \quad &= \left ( \prod_{i \in I} U_{i, 1} \right ) \cap \left ( \prod_{i \in I} U_{i, 2} \right ) \cap ... \cap \left ( \ prod_{i \in I} U_{i, n} \right ) • Since $\mathcal S$ is a subbasis of $\tau$ we have that $U$ is an open set in $\displaystyle{\prod_{i \in I} X_i}$. But this means that for some $i \in I$, $p_i^{-1}(U) = V$ is an open set in $X_i$ for some $i \in I$. This happens if $U_{i, 1}, U_{i, 2}, ..., U_{i, n}$ are all open in $X_i$. So $\displaystyle{U = \prod_{i \in I} U_i}$ if $U_i = X_i$ for all but finitely many $i$. This shows that $\mathcal B$ is the basis obtained from $\mathcal S$. $\blacksquare$
{"url":"http://mathonline.wikidot.com/a-basis-for-arbitrary-topological-products-of-topological-sp","timestamp":"2024-11-04T20:53:17Z","content_type":"application/xhtml+xml","content_length":"17459","record_id":"<urn:uuid:e2e7dc92-85f9-4fd2-a3ed-922ffae75f13>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00697.warc.gz"}
iPhone 'not made' for hardcore games, says Street Fighter producer Despite a version of Street Fighter IV having already been released for Apple's iOS platform, producer Yoshinori Ono says he isn't convinced hardcore games can succeed on the platform. Speaking in an interview with CVG, Ono-san was asked if the more complete version of Super Street Fighter IV developed for the Nintendo 3DS could work on the iPad, given the performance boost the second generation of the hardware has received. Ono replied by saying that comparisons drawn between the power of the 3DS and the iPad 2 only look at the hardware on a superficial level, and that the 3DS' is a stronger platform since it was developed entirely for gaming. "When people say the capabilities are the same they're obviously only looking at graphics. OK, in terms of graphics it might be possible but iPad 2 and iPhone are primarily smart phone device plus alpha, whereas 3DS is designed for games," he said. "There is a difference between something that is made for playing games and something that can 'also' play games. Even if they appear similar at times the pool of tools available behind the screen is totally different. "All these features and pockets that the 3DS has as a games console just can't be compared with an iPhone and iPad where people can 'also' use it to play games but it is primarily a phone and app device," he explained. Returning to the question, Ono said he felt that although "a glimpse" of Street Fighter can be experienced on the iPhone ultimately he feels that people don't want to play hardcore games on them. "A glimpse of Street Fighter can be enjoyed on iPhone but would people really play hardcore games on them? I don't think so, because they're just not made for them. At the end of the day game consoles are for games so it's quite clearly divided and differentiated," he said. "I should emphasise that the difference is in 'for' and 'can', so it 'can play games' or 'it's for games'. It's only one word but it is very very important, that word differentiates everything," he added. Source: http://www.computerandvideogames.com/297168/news/iphone-not-made-for-hardcore-games-says-street-fighter-producer/ Gaming on the iPhone/iPad can be painful for some games. Games made specifically for it like Cut the Rope or TinyWings are great but ported games that would greatly benefit from having physical buttons (like Secrets of Mana, FF 1 and 2, MegaMan etc etc) are really painful to play on it. I would buy an experia type iPhone in a heartbeat Subject Delta Playing games on any handheld devices will never replace PC gaming for me. Certain games require some kind of proper interface (controller/keyboard/mouse). Certain games require some kind of proper interface (controller/keyboard/mouse). Well the new Sony Xperia play (Android) has a slide out controller pad. I'm not sure if it's as good as a dedicated gaming interface, but it certainly gives the player more options. With the IPhone though, you get no choice in the hardware :( It really depends on the game. For example, I really enjoy turn-based strategy games and playing on the iPhone is actually easier than playing on the 3DS. Is this specifically about Street Fighter? Because Street Fighter enthusiasts would argue that a regular console controller is not suitable enough for Street Fighter. But as far as hardcore games, games like Dead Space, Sword & Sworcery, Starfront, and Chaos Rings say otherwise. And the experience of playing Final Fantasy III is far more enjoyable than on the DS. Dead Space is probably one of the best portable "hardcore" game experiences I've had. It really all depends on execution and whether or not a developer is lazy about design or not. But as far as mobile/portable gaming as a whole (in the perspective as an adult) I really don't get why hardcore games is such a big topic of debate. Because games on the go serve merely as a distraction, and not a full experience. This is probably why casual games are blowing up like crazy in the appstore. Because a 5-15 minute distraction is what most people need. This is one reason why I haven't really used my DS in the past couple years. Because if you're at home, wouldn't you rather use your gaming time on a PC or console rather than a handheld device? Shivan Commander It's a cell phone/iPod, of course it wasn't made for games. Source: http://www.computerandvideogames.com/297168/news/iphone-not-made-for-hardcore-games-says-street-fighter-producer/ i though phone were made for making phone calls. weird.... • 10 months later... Nick Sheldon I'm impressed with the Lumia800 tbh. Incredibly fluid and responsive.. BUT, I don't think a fighting game will ever work on any phone, whatever make it is. Playing games on any handheld devices will never replace PC gaming for me. You mean because one is a upgradeable power house intended to be played in one place and the other is designed for portability and can never be upgraded? You're comparing apples and oranges, dude. Muhammad Farrukh Does anyone else realize that the article is about a year old? • 3 weeks later... As much as I've had hours of fun on my phone playing games, theres is not one game that comes even close to the full gaming experience of something on my 3ds. Even games like Dead Space on iphone as good as they are would be considered mediocre on 3ds/vita. • 3 months later... Depends on what hardcore means for people. Some touch games can be hard core. But I guess he meant fast paced multi command games. • 1 month later... I think it depends on the user preferences. As for me, I love playing strategy games on my desktop while racing, shooting and puzzles are the best with my iPhone. I wouldn't say an iPhone is made for gaming at all. It is a phone... This thread has done a Lazarus. THolman Veteran Since its already been revived... I've gotten to the point where yeah, I think the iPhone is definitely more suited for casual gaming, it's just inherent in the form factor. But the iPad is a completely different deal, and I want to see more developers push it to its limits, and bring console titles over to it, like Square is doing. I'll happily pay $40 for a true hardcore experience on my iPad, and I'm willing to bet that I'm not the only one. And it can work - just check out Square's offerings, like I said. The Teej iPhones, yeah, quite possibly. Although I would say even those have the potential to have hardcore titles on them, it's just that developers would rather slap stereotypical controls on a touch screen and call it a day. A good (albeit basic) example is Pac-man CE for smartphones - that game has clearly had some thought put into how the touch screen should be used as an input device rather then just shoving a virtual analogue stick onto it. Tablets however - yeah. I can definitely see the hardcore potential in those. • 3 weeks later... As long as people are considering Call of Duty on PS3 and 360 'hardcore', smartphones are perfectly well suited to playing 'hardcore' games. A smartphone will never match a PC with keyboard and mouse for an FPS, but it's better than using a gamepad. It'll never match using a joystick for a flight game, but with the gyroscope it's certainly better than the 360, if not the PS3 thanks to its tilt controls. The small screen size presents a problem with strategy games, and it might never match up with keyboard and mouse and a 20+" screen, but again, quite a bit better than on a console (although the Wii U might change that). Hell, the types of games that work best with a traditional controller are actually quite limited. A 360 has as narrow of a set of games that could justifiably be called hardcore as any smartphone. • 5 months later... but...but..but.. iPhones are gonna replace Xbox and Playstation!!! Said several fanboys. I still don't understand the need to play games on a cellphone, let alone understand how some people seem to think that gaming on a cellphone is going to take over console/pc gaming. Even for movies I think are pointless on a cellphone. Music yeah, they do make a good mp3 player. Maybe a quick lookup on the go for internet related stuff. A good laptop even costs less than most cellphones, which would be better for gaming if ya had the need to play games somewhere other than home. John S. Veteran Posted 07 April 2011 thread closed This topic is now closed to further replies.
{"url":"https://www.neowin.net/forum/topic/988466-iphone-not-made-for-hardcore-games-says-street-fighter-producer/","timestamp":"2024-11-07T01:01:15Z","content_type":"text/html","content_length":"346414","record_id":"<urn:uuid:dc2c4b08-77fc-4a0a-b410-4513d52e92a1>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00006.warc.gz"}
In regular league matches, draws are a common occurrence. Modeling and predicting draws have some complications. Elo-type ratings allows for draws by simply treating them as half-wins for each team, but it does not allow for direct calculation of draw probabilities. Poisson regression models naturally lets you figure out the probability of a draw by calculating the probability of a goal difference of zero. Poisson models have the additional strength over Elo-type systems in that they can be used to model and predict the number of goals scored, not only who wins (or loose, or draws). The models I have looked at all assumes that draws are possible, and that is the case in regular league matches. But what about the matches where draws are not allowed, such as in knockout tournaments? How could you calculate probabilities for different number of goals? I haven’t really seen any discussion about this anywhere, but I have this one idea I just want to get out there. Bear in mind that this idea I present here is completely untested, so I can not say for sure if it is any good. Matches where draws are impossible are a minority of the matches, so building and fitting a separate model for just those matches is not a good idea. Instead I propose an adjustment to be applied for just those matches. The adjustment can be motivated as follows: The game starts with 0-0 between the teams, so at least one goal has to be scored. This should increase the probabilities of 0-1 and 1-0 results. Similar argument can be given to a game that is in a 1-1 state, a 2-2 state, and so on; at least one goal has to be scored. So the adjustment is to simply divide the probabilities for a draw and add them to the probabilities for a one-goal difference. This should of course be illustrated with an example. Suppose you have a matrix with goal probabilities. This can be computed using a Poisson regression model, perhaps with the Dixon-Coles adjustment or some other bivariate structure, or perhaps it comes from a completely different kind of model. It doesn’t really matter. Then it is just to divide the draw probabilities and add them to the appropriate cells in the matrix: But how should the probabilities be divided? We could just divide them evenly between the two teams, but I think it is more appropriate to divide them based on the relative strengths of the two teams. There are may ways this could be done, but I think a reasonable method is to divide them based on the win-probabilities for the two teams (given that there is no draw). This does not rely on anything other than the goal probability matrix itself, and is easy to compute: Take the sum of the upper and lower triangle of the matrix, divided by the sum of the whole matrix except the diagonal. This also maintains the original win/lose probabilities. This scheme is easy to implement in R. First we need a matrix of probabilities, which I here just compute using two Poisson distributions, then calculate the win probability of the team with goals on the vertical. After that we divide the diagonal with the win-probabilities. # Matrix of goal probabilities probability_matrix <- dpois(0:7, 1.1) %*% t(dpois(0:7, 1.6)) # Win probabilities, for dividing the draw probabilities prop <- sum(mm[lower.tri(mm)]) / (1 - sum(diag(mm))) # Diagonal values, split proportionally divided_vertical <- (diag(probability_matrix) * prop) divided_horizontal <- (diag(probability_matrix) * (1-prop)) Here we encounter a problem. The two vectors we are going to add to the two secondary diagonals are one element too long. If we have a big enough probability matrix, that last element is probably going to be so small that ignoring it should not matter too much. # Increase the probabilities for one-goal wins. diag(probability_matrix[-1,]) <- diag(probability_matrix[-1,]) + divided_vertical[-length(divided_vertical)] diag(probability_matrix[,-1]) <- diag(probability_matrix[,-1]) + divided_horizontal[-length(divided_horizontal)] # The main diagonal, with draw probabilities, should be 0. diag(mm) <- 0 As always, it is nice to see how the probabilities of the goal differences are distributed. Here I have plotted the adjusted and unadjusted probability distributions: We clearly see that one-goal wins are much more probable. As I mentioned above, I haven’t really looked at any data, and it is quite possible that other adjustments are better. Perhaps boosting one-goal wins is a poor idea, and spreading the probabilities more out would be better.
{"url":"https://opisthokonta.net/?m=201502","timestamp":"2024-11-05T21:55:15Z","content_type":"text/html","content_length":"39552","record_id":"<urn:uuid:f0e50007-58a2-471d-8085-fc7823c7b4f0>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00842.warc.gz"}
Finding Missing Angles In A Polygon Worksheet Answer Key - Angleworksheets.com Finding The Missing Angle Of A Polygon Worksheet – There are many resources that can help you find angles if you’ve been having trouble understanding the concept. These worksheets will help you understand the different concepts and build your understanding of these angles. Students will be able to identify unknown angles using the vertex, arms … Read more Finding Missing Angles Worksheet Answer Key Finding Missing Angles Worksheet Answer Key – There are many resources that can help you find angles if you’ve been having trouble understanding the concept. These worksheets will help to understand the various concepts and increase your knowledge of angles. Using the vertex, arms, arcs, and complementary angles postulates, students will learn how to find … Read more Finding The Angles Of A Polygon Worksheet Finding The Angles Of A Polygon Worksheet – There are many resources that can help you find angles if you’ve been having trouble understanding the concept. These worksheets will help you understand the different concepts and build your understanding of these angles. Using the vertex, arms, arcs, and complementary angles postulates, students will learn how … Read more Find Missing Angle Polygon Worksheet Find Missing Angle Polygon Worksheet – If you have been struggling to learn how to find angles, there is no need to worry as there are many resources available for you to use. These worksheets will help to understand the various concepts and increase your knowledge of angles. Students will be able to identify unknown … Read more
{"url":"https://www.angleworksheets.com/tag/finding-missing-angles-in-a-polygon-worksheet-answer-key/","timestamp":"2024-11-09T14:21:53Z","content_type":"text/html","content_length":"70015","record_id":"<urn:uuid:b2a828a7-daed-4b7e-8c79-d7cf8129a8a3>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00443.warc.gz"}
Andrea Milani (1948–2018) Andrea Milani, Professor at the University of Pisa, passed away suddenly on 28 November 2018, at age 70. Born in Florence, Italy, on 19 June 1948, Milani became a major figure in the celestial mechanics community, as recognized by the Division on Dynamical Astronomy of the AAS, which named him the 2010 recipient of its prestigious Brouwer Award. Although a meeting had just been held in Pisa in early September 2018 to celebrate his 70th birthday and imminent official retirement, it was clear that Milani intended to remain active in research as an Emeritus Professor and a senior scientist with the company SpaceDyS, which develops software for space-science applications. But chaos, which Milani studied so much in the solar system, also rules human life: the future is unpredictable. The community of planetary dynamicists mourns the loss of one of its most eminent members. To understand the magnitude of the loss, let’s turn back and outline the major achievements of Milani’s almost 50-year-long activity. In the 1980s, Milani was among those who changed our understanding of the solar system, from the view inherited from the Enlightenment times of a perfectly ticking clockwork, to the modern view of a more complex, irregular and ever-evolving machinery. His contribution was both analytic, with studies on normal forms, small divisors and resonant dynamics, and numerical, with the development of integrators with an accurate control of the accumulation of errors. As an example, Milani and his collaborators studied in detail the complex dynamical evolution of Pluto and showed that it is affected by three levels of resonances: the mean motion resonance with Neptune, the Kozai resonance, and a super-resonance among the secular precession rates of Pluto and Neptune, ultimately responsible for the positive Lyapunov exponent (indicating chaotic motion) earlier detected by Sussman and Wisdom (Science 248 (1988): 433-437). The asteroid belt proved to be a more fertile playground than the planets themselves to study chaos and its consequences. Among the many contributions of Andrea Milani on chaotic dynamics in the asteroid belt, I would like to recall three. With A. Nobili, Milani showed that some asteroids can have a small Lyapunov exponent (i.e., a strongly chaotic motion) but remain confined in a neighborhood of their original orbit for the age of the solar system. Only in two degrees of freedom can regular trajectories bound chaotic motion, so this result was not expected for the evolution of an asteroid in the full planetary system. We now know that “stable chaos” can be understood with the theory developed by the Russian mathematician N. N. Nekhoroshev. At the opposite end of the spectrum of possible dynamical behaviors, Milani and P. Farinella, another giant of planetary science at the University of Pisa who prematurely passed away, found an asteroid on the brink of being removed: the asteroid is in the chaotic region surrounding a powerful resonance, which will eventually eject it from the asteroid belt within an expected timescale of 10 My. The existence of bodies with a dynamical lifetime much shorter than the age of the solar system was a real puzzle at the time. We know now that this is due to a non-gravitational force, called the Yarkovsky effect, pushing asteroids from regular orbits into chaotic zones. In another paper, Milani and Farinella demonstrated that chaotic dynamics can cause the slow dispersal of asteroid families. Knowing the chaotic diffusion rate from numerical simulations and the current orbital dispersal, they estimated an upper bound of 50 My for the age of the Veritas family. They called this approach chaotic chronology. Such a young age for an asteroid family was a surprising result (we know now that the age of the family is only 8 My), showing that the structure of the asteroid belt is changing on geological times not only as a consequence of chaotic dynamics, but also of collisional events. The topic of asteroid families is central in Milani’s scientific activity. It’s fair to say that Milani did not just provide relevant contributions in this field: he founded the field himself! The identification of asteroid families requires the calculation of quantities that do not change significantly during the dynamical evolution of the asteroids, provided they are not in chaotic diffusion. These quantities are called proper elements. Before Milani, proper elements were calculated with analytic schemes limited to first order in the masses of the planets and/or low degree in eccentricity and inclination. Then families were identified visually as apparent clumps in proper element space. With Z. Knezevic, Milani developed a perturbation approach to the computation of proper elements up to order two in the planetary masses and degree 4 in eccentricities and inclinations, with an innovative iterative algorithm to minimize the error in the determination of the proper frequencies. Throughout the years, Milani and Knezevic maintained a databased in Pisa, named AstDys, with updated proper elements for all known asteroids. Armed with this database, with colleagues in the Observatory at Turin, Milani searched for asteroid families using a rigorous cluster analysis, which identifies clumps of asteroids above statistical significance level. Then, in a number of papers, Milani and his collaborators (mostly students) studied the structure of the families, their interactions with resonances, their spreading due to the Yarkovsky effect, their ages, etc. He also pioneered coupling proper elements with physical properties (albedo, colors) to achieve a more reliable identification of family members. Today, the study of asteroid families is a major field in planetary science involving many tens of researchers, from theorists to observers and experimentalists. Without Milani kick-starting this field with his mathematical approach to proper elements, the landscape of planetary science would be very different today. Milani was also interested in applied science. As a mathematician, he was looking at rigorous optimized algorithms to solve problems, whereas others were just employing brute-force calculations. Thus, he introduced innovative approaches in space geodesy, orbital determination, and the calculation of collision probabilities of specific asteroids with the Earth given their orbital uncertainties. Today, the most effective algorithms for impact monitoring, both in Pisa and at the Jet Propulsion Laboratory, are of Milani’s origin or heritage. I am 18 years younger than Andrea Milani, so when I was just entering the field, Milani was already an established professor. I remember my apprehension — shared by my young colleagues and never fully extinguished with time — seeing Milani stand up at the end of my talk (he always stood when asking a question) and begin, “I would like to point out....” That usually meant trouble to come. Working on similar subjects, he and I had over our careers some scientific debates, which, both being Italian, could become quickly very animated and loud. But Milani never took any of these personally and even after the hottest exchanges the relationships quickly returned to calm and friendly. And the few times when further results showed Milani’s ideas to be incorrect, he would admit it openly and move on, which reflects on his scholarly honor. I like to imagine that he joins his mentor Giuseppe Colombo, whom he admired so much, and that they both will watch the journey to Mercury of the recently launched BepiColombo mission of the European Space Agency. The mission, devoted to Colombo’s memory, was strongly supported by Milani, who also helped devise the Mercury Orbiter Radio- science Experiment (MORE) that is onboard the spacecraft. Enjoy the show, Andrea. You deserve it.
{"url":"https://baas.aas.org/pub/andrea-milani-1948-2018/release/1","timestamp":"2024-11-11T17:49:15Z","content_type":"text/html","content_length":"807850","record_id":"<urn:uuid:a86de9dc-c4d3-49f5-a516-19ac5a9ad8f9>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00808.warc.gz"}
584 research outputs found The Principle of Ariadne, formulated in 1988 ago by Walter Carnielli and Carlos Di Prisco and later published in 1993, is an infinitary principle that is independent of the Axiom of Choice in ZF, although it can be consistently added to the remaining ZF axioms. The present paper surveys, and motivates, the foundational importance of the Principle of Ariadne and proposes the Ariadne Game, showing that the Principle of Ariadne, corresponds precisely to a winning strategy for the Ariadne Game. Some relations to other alternative. set-theoretical principles are also briefly discussed The objective of this study was to verify the presence of giant fibres in the Black Sicilian pig skeletal muscle and to evaluate the effect of sex on histochemical and morphometric characteristics of the myocytes (myofibres) as well as vascularity of the muscle. Twenty Black Sicilian pigs (10 males, 10 females) from a farm in Sicily (Italy) were slaughtered at two years of age. Muscle tissues were obtained from three muscles: psoas major, longissimus dorsi, and trapezius. Myofibres were stained for myosin ATPase, succinic dehydrogenase, and α-amylase-PAS. For all fibre types, area and perimeter were measured. Slow-twitch oxidative fibres, fast-twitch glycolytic fibres and fast-twitch oxidative-glycolytic fibres were histochemically differentiated; an image-analyzing system was used. The results showed no differences between males and females in percentage of the fibre types, but there were significant differences between sexes in size of all the three fibre types. Psoas major muscle had a high percentage of slow-twitch oxidative fibres and contained more capillaries per fibre and per mm2 than trapezius and longissimus dorsi, in which fast-twitch glycolytic fibres dominated. The cross-sectional area of all fibres types was larger in longissimus dorsi than in trapezius and psoas major muscles; the giant fibres were absent in all the muscles studied. Fibre type composition may contribute to the variation of meat quality This paper is devoted to study the dynamics of gravitational collapse in the Misner and Sharp formalism. We take non-viscous heat conducting charged anisotropic fluid as a collapsing matter with cylindrical symmetry. The dynamical equations are derived and coupled with the transport equation for heat flux obtained from the M$\ddot{u}$ller-Israel-Stewart causal thermodynamic theory. We discuss the role of anisotropy, electric charge and radial heat flux over the dynamics of the collapse with the help of coupled equation.Comment: 15 pages, accepted for publication in Astrophys. Space Sc We analyse the effects of thermal conduction in a relativistic fluid, just after its departure from hydrostatic equilibrium, on a time scale of the order of thermal relaxation time. It is obtained that the resulting evolution will critically depend on a parameter defined in terms of thermodynamic variables, which is constrained by causality requirements.Comment: 16 pages, emTex (LaTex 2.09). To appear in Classical and Quantum Gravit The evolution equation for the shear is reobtained for a spherically symmetric anisotropic, viscous dissipative fluid distribution, which allows us to investigate conditions for the stability of the shear-free condition. The specific case of geodesic fluids is considered in detail, showing that the shear-free condition, in this particular case, may be unstable, the departure from the shear-free condition being controlled by the expansion scalar and a single scalar function defined in terms of the anisotropy of the pressure, the shear viscosity and the Weyl tensor or, alternatively, in terms of the anisotropy of the pressure, the dissipative variables and the energy density inhomogeneity.Comment: 19 pages Latex. To appear in Gen. Rel. Gra We consider the dynamics of a causal bulk viscous cosmological fluid filled constantly decelerating Bianchi type I space-time. The matter component of the Universe is assumed to satisfy a linear barotropic equation of state and the state equation of the small temperature Boltzmann gas. The resulting cosmological models satisfy the condition of smallness of the viscous stress. The time evolution of the relaxation time, temperature, bulk viscosity coefficient and comoving entropy of the dissipative fluid is also obtained.Comment: 11 pages, 5 figures, accepted for publication in International Journal of Modern Physics We present a discussion of the effects induced by the bulk viscosity on the very early Universe stability. The matter filling the cosmological (isotropic and homogeneous) background is described by a viscous fluid having an ultrarelativistic equation of state and whose viscosity coefficient is related to the energy density via a power-law of the form $\zeta=\zeta_0 \rho^u$. The analytic expression of the density contrast (obtained for $u=1/2$) shows that, for small values of the constant $\zeta_0$, its behavior is not significantly different from the non-viscous one derived by E.M. Lifshitz. But as soon as $\zeta_0$ overcomes a critical value, the growth of the density contrast is suppressed forward in time by the viscosity and the stability of the Universe is favored in the expanding picture. On the other hand, in such a regime, the asymptotic approach to the initial singularity (taken at $t=0$) is deeply modified by the apparency of significant viscosity in the primordial thermal bath i.e. the isotropic and homogeneous Universe admits an unstable collapsing picture. In our model this feature regards also scalar perturbations while in the non-viscous case it appears only for tensor modes.Comment: 8 pages, no figur In this paper, we study dynamics of the charged plane symmetric gravitational collapse. For this purpose, we discuss non-adiabatic flow of a viscous fluid and deduce the results for adiabatic case. The Einstein and Maxwell field equations are formulated for general plane symmetric spacetime in the interior. Junction conditions between the interior and exterior regions are derived. For the non-adiabatic case, the exterior is taken as plane symmetric charged Vaidya spacetime while for the adiabatic case, it is described by plane Reissner-Nordstr$\ddot{o}$m spacetime. Using Misner and Sharp formalism, we obtain dynamical equations to investigate the effects of different forces over the rate of collapse. In non-adiabatic case, a dynamical equation is joined with transport equation of heat flux. Finally, a relation between the Weyl tensor and energy density is found.Comment: 21 pages, accepted for publication Gen. Relativ. Gra We find a simple exact model of radiating stellar collapse, with a shear-free and non-accelerating interior matched to a Vaidya exterior. The heat flux is subject to causal thermodynamics, leading to self-consistent determination of the temperature $T$. We solve for $T$ exactly when the mean collision time $\tau_{c}$ is constant, and perturbatively in a more realistic case of variable $\tau_{c}$. Causal thermodynamics predicts temperature behaviour that can differ significantly from the predictions of non-causal theory. In particular, the causal theory gives a higher central temperature and greater temperature gradient.Comment: Latex [ioplppt style] 9 pages; to appear Class. Quantum Gra
{"url":"https://core.ac.uk/search/?q=author%3A(DI%20PRISCO%20M.%20R)","timestamp":"2024-11-05T12:31:32Z","content_type":"text/html","content_length":"187477","record_id":"<urn:uuid:4f783266-bdb6-4d32-b353-01020fe3eea7>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00468.warc.gz"}
Metric Combinatorics of Convex Polyhedra: Cut Loci and Nonoverlapping Unfoldings Let S be the boundary of a convex polytope of dimension d + 1, or more generally let S be a convex polyhedral pseudomanifold. We prove that S has a polyhedral nonoverlapping unfolding into Rd , so the metric space S is obtained from a closed (usually nonconvex) polyhedral ball in Rd by identifying pairs of boundary faces isometrically. Our existence proof exploits geodesic flow away from a source point v S, which is the exponential map to S from the tangent space at v. We characterize the cut locus (the closure of the set of points in S with more than one shortest path to v) as a polyhedral complex in terms of Voronoi diagrams on facets. Analyzing infinitesimal expansion of the wavefront consisting of points at constant distance from v on S produces an algorithmic method for constructing Voronoi diagrams in each facet, and hence the unfolding of S. The algorithm, for which we provide pseudocode, solves the discrete geodesic problem. Its main construction generalizes the source unfoldi...
{"url":"https://www.sciweavers.org/publications/metric-combinatorics-convex-polyhedra-cut-loci-and-nonoverlapping-unfoldings","timestamp":"2024-11-03T16:25:57Z","content_type":"application/xhtml+xml","content_length":"34722","record_id":"<urn:uuid:8d5264a9-954d-4ab2-a122-60af5f533614>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00205.warc.gz"}
4.6: Multiple Optical Devices Last updated Page ID \( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \) \( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\) ( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\AA}{\unicode[.8,0]{x212B}}\) \( \newcommand{\vectorA}[1]{\vec{#1}} % arrow\) \( \newcommand{\vectorAt}[1]{\vec{\text{#1}}} % arrow\) \( \newcommand{\vectorB}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vectorC}[1]{\textbf{#1}} \) \( \newcommand{\vectorD}[1]{\overrightarrow{#1}} \) \( \newcommand{\vectorDt}[1]{\overrightarrow{\text{#1}}} \) \( \newcommand{\vectE}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{\mathbf {#1}}}} \) \( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \) \(\newcommand{\avec}{\mathbf a}\) \(\newcommand{\bvec}{\mathbf b}\) \(\newcommand{\cvec}{\mathbf c}\) \(\newcommand{\dvec}{\mathbf d}\) \(\newcommand{\dtil}{\widetilde{\mathbf d}}\) \(\newcommand{\ evec}{\mathbf e}\) \(\newcommand{\fvec}{\mathbf f}\) \(\newcommand{\nvec}{\mathbf n}\) \(\newcommand{\pvec}{\mathbf p}\) \(\newcommand{\qvec}{\mathbf q}\) \(\newcommand{\svec}{\mathbf s}\) \(\ newcommand{\tvec}{\mathbf t}\) \(\newcommand{\uvec}{\mathbf u}\) \(\newcommand{\vvec}{\mathbf v}\) \(\newcommand{\wvec}{\mathbf w}\) \(\newcommand{\xvec}{\mathbf x}\) \(\newcommand{\yvec}{\mathbf y} \) \(\newcommand{\zvec}{\mathbf z}\) \(\newcommand{\rvec}{\mathbf r}\) \(\newcommand{\mvec}{\mathbf m}\) \(\newcommand{\zerovec}{\mathbf 0}\) \(\newcommand{\onevec}{\mathbf 1}\) \(\newcommand{\real} {\mathbb R}\) \(\newcommand{\twovec}[2]{\left[\begin{array}{r}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\ctwovec}[2]{\left[\begin{array}{c}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\threevec} [3]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\cthreevec}[3]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\fourvec}[4]{\left[\begin{array} {r}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\cfourvec}[4]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\fivevec}[5]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\cfivevec}[5]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\mattwo}[4]{\left[\begin{array}{rr}#1 \amp #2 \\ #3 \amp #4 \\ \end{array}\right]}\) \(\newcommand{\laspan}[1]{\text{Span}\{#1\}}\) \(\newcommand{\bcal}{\cal B}\) \(\newcommand{\ccal}{\cal C}\) \(\newcommand{\scal}{\cal S}\) \(\newcommand{\ wcal}{\cal W}\) \(\newcommand{\ecal}{\cal E}\) \(\newcommand{\coords}[2]{\left\{#1\right\}_{#2}}\) \(\newcommand{\gray}[1]{\color{gray}{#1}}\) \(\newcommand{\lgray}[1]{\color{lightgray}{#1}}\) \(\ newcommand{\rank}{\operatorname{rank}}\) \(\newcommand{\row}{\text{Row}}\) \(\newcommand{\col}{\text{Col}}\) \(\renewcommand{\row}{\text{Row}}\) \(\newcommand{\nul}{\text{Nul}}\) \(\newcommand{\var} {\text{Var}}\) \(\newcommand{\corr}{\text{corr}}\) \(\newcommand{\len}[1]{\left|#1\right|}\) \(\newcommand{\bbar}{\overline{\bvec}}\) \(\newcommand{\bhat}{\widehat{\bvec}}\) \(\newcommand{\bperp}{\ bvec^\perp}\) \(\newcommand{\xhat}{\widehat{\xvec}}\) \(\newcommand{\vhat}{\widehat{\vvec}}\) \(\newcommand{\uhat}{\widehat{\uvec}}\) \(\newcommand{\what}{\widehat{\wvec}}\) \(\newcommand{\Sighat}{\ widehat{\Sigma}}\) \(\newcommand{\lt}{<}\) \(\newcommand{\gt}{>}\) \(\newcommand{\amp}{&}\) \(\definecolor{fillinmathshade}{gray}{0.9}\) Image Becomes Object In the previous section we built a lens from two successive spherical refractions. The trick we used to get the lensmaker equation can be extended to many more applications. Indeed, anytime light is affected by more than one optical device on its way from the object to the observer, the image that results from the first optical device in the path is effectively the source for the second optical device. The location of this first image can then be used to compute an "object" distance for the second device. As usual, the best way to visualize the physical basis for this assumption is to think about what is happening to the light waves as it exits the original object, is affected by the first device (which can be a mirror, a lens, or even just a single surface interface between two media), and eventually converges to the image. Figure 4.6.1 – Image Becomes Object Clearly "image becomes object" does not mean that an item suddenly materializes in empty space! Rather, as the light continues past the convergence point, the waves that emerge are indistinguishable from light waves that leave an actual object, which means that the image can subsequently be used as an "object" for further adventures of that light, such as passing through a lens or bouncing off a It should be noted that the image-used-as-an-object has an important difference with an actual object. The light that leaves the tip of an actual object arrow propagates out in all directions. The light that leaves the image can only be light that took a path from the original object and was altered by the optical device. This limits the outgoing light of the image to a specific cone. In the figure above, the optical device is clearly a converging lens, and the cone of light that leaves the image represents the region where one is able to look at the object through the lens. If one does not look through the lens, of course this image cannot be seen. Example \(\PageIndex{1}\) A vessel, the bottom of which is a flat mirror, contains water that is \(30.0cm\) deep. Someone looks down into the container at their reflection, from a height of \(50.0cm\) from the surface of the water. Find how far the image they see is from their actual face. The index of refraction of water is 1.33. This problem includes an extra level of thought that has not yet been introduced in this text, but which will play an important role later, so you should not be discouraged if you found this especially challenging. To see how this works, one has to track what happens to the light, and what the "apparent" source of the light is every time that the light encounters a change (reflection or refraction). We will keep track of where images and objects are throughout the calculation, so there will be no need to track the signs of the values (we will use absolute values throughout). First, the light leaves the face of the person, on its way to the mirror. We know that the mirror will send back the light in a manner exactly symmetric to how it came into the mirror, but the apparent source of the light for the mirror is not positioned where the person's face is, because the water refracts the light before it gets to the mirror. We therefore need to compute the distance from the surface of the water that the face appears to be, according to the mirror. In essence, the position of the image created by the light passing from the air into the water becomes the position of the object for the mirror. The light is coming from air (\(n=1\) into the water, so according the the mirror the image of the face is farther away from the surface of the water than the actual face is: \[s'=\dfrac{1.33}{1}\left(50.0cm\right) = 66.5cm\nonumber \] The "object distance" to the mirror equals the apparent distance of the face to the surface of the water, plus the depth of the water: \[s=66.5cm+30.0cm = 96.5cm\nonumber \] Now after the light reflects, it behaves as if it is coming from the other side of the mirror, making the apparent source of light a distance of \(96.5cm+30.0cm = 126.5cm\) from the surface of the water. This light passes back through the surface of the water and is refracted into the air, which means that the apparent source of the light for the person observing it is closer to the water surface than the \(126.5cm\) computed above: \[s'=\dfrac{1}{1.33}\left(126.5cm\right) = 95.1cm\nonumber \] This is the distance from the surface of the water of the image seen by the person. Adding this to the distance of the face from the water gives our answer – the distance between the face and its image: \(145cm\). If the water had not been present, then because it is \(80.0cm\) from the mirror, the image of the face would have been on the opposite side of the mirror, \(160cm\) from the face. So the presence of the water has the effect of bringing the image closer. Example \(\PageIndex{2}\) A diverging lens is placed in front of a plane mirror as in the diagram to the right. The separation of the lens and the mirror exactly equals the magnitude of the lens’s focal length, which is \ (-1.32m\). An object is placed twice this distance (\(2.64m\)) on the other side of the lens (see the diagram below). 1. Find the position of the image seen by eye A (which is looking into the mirror). Express your answer as a distance measured from the mirror, and indicate which side of the mirror the image appears on (left or right). 2. Find the lateral magnification (relative to the original object) of the image seen by eye A. Indicate whether the image is upright or inverted. 3. Repeat (a) and (b) for the image seen by eye B (which is looking through the lens into the mirror). a. The object distance is positive, so it is \(–2\) times the focal length of the diverging, negative-focal-length lens. Plugging this into the lens equation gives the position of the first \[\dfrac{1}{-2f} + \dfrac{1}{s'} = \dfrac{1}{f} \;\;\;\Rightarrow\;\;\; s' = \frac{2}{3}f\nonumber\] The image distance is the same sign as the focal length, so it is negative, placing it on the left side of the lens. This image becomes the object for the next stage, which is reflection off the plane mirror. The distance of this new object to the plane mirror is the distance to the lens (remember it is to the left of the lens), plus the distance between the lens and the mirror, which is given to be the focal length. Therefore the distance of the first image to the mirror is: \[s = \frac{5}{3}f\nonumber\] The image formed by the plane mirror is exactly the same distance behind it as the object is in front of it, so the image viewed by eye A is \(\frac{5}{3}f\) behind the mirror. Plugging in for f gives the result: The image seen by eye A is \(2.2m\) to the right of the mirror. b. The only lateral magnification comes from the lens, because plane mirrors provide no lateral magnification (i.e. \(M = +1\). This is easily calculated: \[M=-\dfrac{s'}{s} = -\dfrac{\frac{2}{3}f}{-2f} = +\dfrac{1}{3}\nonumber\] The positive value indicates that it is upright, and plane mirrors don’t invert images, so the image seen by eye A is upright. c. We use the result for eye A, as the image for the previous refraction and reflection becomes the object for the second refraction. The distance of the reflected image from the lens is the distance it is behind the mirror plus the focal length of the lens (since that is the separation between the lens and the mirror). Note that the object distance is positive and the focal length negative, so we need to include the minus sign as before. Using this value and the lens equation gives the position of the image seen by eye B relative to the lens: \[\dfrac{1}{-\frac{8}{3}f} + \dfrac{1}{s'} = \dfrac{1}{f}\;\;\;\Rightarrow\;\;\; s' =\frac{8}{11}f \nonumber\] This is the distance from the lens, so the distance from the mirror is \(\frac{3}{11}f = 0.36m\) to the left of the mirror. The lateral magnification is the product of the three lateral magnifications (the second one being that of the plane mirror, which is just 1), so we just need to calculate the final lateral magnification: \[M_3 = -\dfrac{s'}{s} = -\dfrac{\frac{8}{11}f}{-\frac{8}{3}f} = +\dfrac{3}{11}\;\;\;\Rightarrow\;\;\; M=M_1 M_2 M_3 = \left(+\dfrac{1}{3}\right)\left(+1\right)\left(+\dfrac{3}{11}\right) = +\ The magnification is positive and smaller than 1, which means the final image is upright and diminished. Virtual Objects Another thing that came up in our derivation of the lensmaker equation was the possibility that the light could pass through a second surface before it was able to converge due to the first surface. The same can happen with two devices. This seems to cause a problem with the idea of the image the first optical device being an object for the next optical device, but mathematically we get around this by giving the object distance for the second device a negative value, in accordance with our sign conventions. While this mathematics does the trick and worked perfectly for the lensmaker equation, it is somehow not as satisfying as seeing how it works with a ray trace. While it is a bit tricky to do, this can be achieved as well. Here is the method: 1. Ignore the second device and use a couple of principal rays for the original object and the first optical device to locate the image of the first device. 2. Keep in mind that these are not the only two rays that we can sketch from the object. All of the rays that leave the object converge toward the first image, even if the light doesn't actually make it there. Of all these rays, select two that start at the first device and are principal rays for the second device. Note that one of these may be a principal ray of the first device (the one that emerges parallel to the optical axis), but others will not be. 3. Use two of these principal rays of the second device to sketch the rays that converge to the final image. The trick here is that the virtual object is not the origin of rays for the second device, but rather is the target of rays emerging from the first device before they reach the second device. Let's see an example of how this would work for two converging lenses, where the real image of the first converging lens lands behind the second. Figure 4.6.2 – Ray Trace with a Virtual Object Note that there is an infinite number of rays coming from the first lens, all of them heading for the first image. We just happen to choose two that conveniently happen to be principal rays for the second lens. Stacking Thin Lenses (Again) In Section 4.5 we used the lensmaker equation to deduce that multiple thin lenses placed back-to-back result in a single lens with a diopter strength that is the sum of the diopter strengths of the individual lenses (Equation 4.5.4). Here we will show this same equation follows from the first-image-becomes-second-object idea. We start with two lenses (for the sake of this discussion, we will use converging lenses, but this is not required) that are separated by a large distance \(l\). We'll define the "total focal length" \(f_{tot}\) as the distance from the first lens to the image of parallel rays after the second lens. Figure 4.6.3a – Deriving the Lens-Stacking Formula To get the parallel incoming rays, we follow the usual method of placing a point object on the axis very far away from the first lens (just a single point is all we need to get distances we are looking for, no arrow is necessary). According to the thin lens equation, this will result in the image point landing at the focal point of the first lens. Figure 4.6.3b – Deriving the Lens-Stacking Formula This image point of the first lens now becomes the object for the second lens, and we can compute the image distance for the second lens. Figure 4.6.3c – Deriving the Lens-Stacking Formula Now we need to use the thin lens equation and do a bit of math to write \(f_{tot}\) in terms of the other variables: \[\left. \begin{array}{l} \dfrac{1}{s_2} + \dfrac{1}{s_2'} = \dfrac{1}{f_2} \\ s_2 = L-f_1 \\ s_2' = f_{tot} - L \end{array} \right\}\;\;\; \dfrac{1}{L-f_1} + \dfrac{1}{f_{tot}-L} = \dfrac{1}{f_2}\] Now to get the stacked thin lens result, all we need to do is take the limit where the separation of the lenses goes to zero (\(L\rightarrow 0\)), and sure enough, Equation 4.5.4 is the result.
{"url":"https://phys.libretexts.org/Courses/University_of_California_Davis/UCD%3A_Physics_9B__Waves_Sound_Optics_Thermodynamics_and_Fluids/04%3A_Geometrical_Optics/4.06%3A_Multiple_Optical_Devices","timestamp":"2024-11-11T14:33:20Z","content_type":"text/html","content_length":"142888","record_id":"<urn:uuid:260ef09f-e98e-4de5-93e5-30a159dc621b>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00400.warc.gz"}
Comparison of ice cost between direct cooling block ice machine and brine block ice machine Comparison of ice cost between direct cooling block ice machine and brine block ice machine • Categories:Common problem • Author: • Origin: • Time of issue:2016-07-25 13:40 • Views: (Summary description)Water fee: 5 yuan per party, electricity fee: 0.8 yuan per kWh, equipment model: 20 tons per day The ice cost of direct cooling block ice machine is as follows: Water fee: the equipment makes 20 tons of ice a day, and the water fee per day is 20*5=100 yuan Electricity cost: The electricity consumption of direct cooling ice making is about 60 degrees, here is calculated at 65 degrees, the electricity cost: 20*65*0.8=1040 yuan Labor: labor is calculated at 25 yuan per hour, 2 people, each person works 4 hours a day, labor cost is: 2*4*25=200 yuan The cost of brine type block ice machine ice is as follows: Water fee: The equipment makes 20 tons of ice a day, and the water fee per day is 23*5=115 yuan (Salt water generally costs more water, 20 tons of ice and 23 tons of water are just estimates) Electricity cost: direct cooling ice making consumes about 80 degrees, here is calculated at 85 degrees, electricity cost: 20*85*0.8=1360 yuan Labor: labor is calculated at 25 yuan per hour, 2 people, each working 6 hours a day, labor cost: 2*6*25=300 yuan The cost of buying salt: the 20-ton ice maker runs for 8 months every year, the cost of buying salt is about 4000 yuan, and the cost of buying salt: 4000/240=16.7/day Note: The salt water concentration will decrease due to repeated ice production, and salt must be added to the salt pond to maintain the concentration. Generally, calcium chloride is purchased, and the market price is 1,000 yuan/ton. Ice mold replacement fee: 20 tons of ice mold and ice bucket holder, the total value is about 50,000 (protection replacement labor, freight, etc.), the equipment is calculated at 6 years, the ice mold replacement is: 8,300 yuan a year, every year Calculated based on 8 months of ice making, the daily cost is: 8300/240=34.5 yuan Note: The direct cooling type ice mold basically does not need to be replaced, and there is no need to buy salt, so the cost is lower. Therefore: the direct cooling block ice machine costs 20 tons of ice per day: 100+1040+200=1340 yuan The cost of making 20 tons of ice per day for the brine type block ice machine is: 115+1360+300+16.7+34.5=1826.2 If 20 tons of ice cubes are produced per day, the direct cooling block ice machine will save cost than the salt water block ice machine if it is calculated as 8 months of production per year: (1826.2-1340)*30*8=116688 yuan The direct cooling block ice machine saves per ton of ice than the salt water block ice machine: (1826.2-1340)/20=24.3 yuan Comparison of ice cost between direct cooling block ice machine and brine block ice machine (Summary description)Water fee: 5 yuan per party, electricity fee: 0.8 yuan per kWh, equipment model: 20 tons per day The ice cost of direct cooling block ice machine is as follows: Water fee: the equipment makes 20 tons of ice a day, and the water fee per day is 20*5=100 yuan Electricity cost: The electricity consumption of direct cooling ice making is about 60 degrees, here is calculated at 65 degrees, the electricity cost: 20*65*0.8=1040 yuan Labor: labor is calculated at 25 yuan per hour, 2 people, each person works 4 hours a day, labor cost is: 2*4*25=200 yuan The cost of brine type block ice machine ice is as follows: Water fee: The equipment makes 20 tons of ice a day, and the water fee per day is 23*5=115 yuan (Salt water generally costs more water, 20 tons of ice and 23 tons of water are just estimates) Electricity cost: direct cooling ice making consumes about 80 degrees, here is calculated at 85 degrees, electricity cost: 20*85*0.8=1360 yuan Labor: labor is calculated at 25 yuan per hour, 2 people, each working 6 hours a day, labor cost: 2*6*25=300 yuan The cost of buying salt: the 20-ton ice maker runs for 8 months every year, the cost of buying salt is about 4000 yuan, and the cost of buying salt: 4000/240=16.7/day Note: The salt water concentration will decrease due to repeated ice production, and salt must be added to the salt pond to maintain the concentration. Generally, calcium chloride is purchased, and the market price is 1,000 yuan/ton. Ice mold replacement fee: 20 tons of ice mold and ice bucket holder, the total value is about 50,000 (protection replacement labor, freight, etc.), the equipment is calculated at 6 years, the ice mold replacement is: 8,300 yuan a year, every year Calculated based on 8 months of ice making, the daily cost is: 8300/240=34.5 yuan Note: The direct cooling type ice mold basically does not need to be replaced, and there is no need to buy salt, so the cost is lower. Therefore: the direct cooling block ice machine costs 20 tons of ice per day: 100+1040+200=1340 yuan The cost of making 20 tons of ice per day for the brine type block ice machine is: 115+1360+300+16.7+34.5=1826.2 If 20 tons of ice cubes are produced per day, the direct cooling block ice machine will save cost than the salt water block ice machine if it is calculated as 8 months of production per year: (1826.2-1340)*30*8=116688 yuan The direct cooling block ice machine saves per ton of ice than the salt water block ice machine: (1826.2-1340)/20=24.3 yuan • Categories:Common problem • Author: • Origin: • Time of issue:2016-07-25 13:40 • Views: Water fee: 5 yuan per party, electricity fee: 0.8 yuan per kWh, equipment model: 20 tons per day The ice cost of direct cooling block ice machine is as follows: Water fee: the equipment makes 20 tons of ice a day, and the water fee per day is 20*5=100 yuan Electricity cost: The electricity consumption of direct cooling ice making is about 60 degrees, here is calculated at 65 degrees, the electricity cost: 20*65*0.8=1040 yuan Labor: labor is calculated at 25 yuan per hour, 2 people, each person works 4 hours a day, labor cost is: 2*4*25=200 yuan The cost of brine type block ice machine ice is as follows: Water fee: The equipment makes 20 tons of ice a day, and the water fee per day is 23*5=115 yuan (Salt water generally costs more water, 20 tons of ice and 23 tons of water are just estimates) Electricity cost: direct cooling ice making consumes about 80 degrees, here is calculated at 85 degrees, electricity cost: 20*85*0.8=1360 yuan Labor: labor is calculated at 25 yuan per hour, 2 people, each working 6 hours a day, labor cost: 2*6*25=300 yuan The cost of buying salt: the 20-ton ice maker runs for 8 months every year, the cost of buying salt is about 4000 yuan, and the cost of buying salt: 4000/240=16.7/day Note: The salt water concentration will decrease due to repeated ice production, and salt must be added to the salt pond to maintain the concentration. Generally, calcium chloride is purchased, and the market price is 1,000 yuan/ton. Ice mold replacement fee: 20 tons of ice mold and ice bucket holder, the total value is about 50,000 (protection replacement labor, freight, etc.), the equipment is calculated at 6 years, the ice mold replacement is: 8,300 yuan a year, every year Calculated based on 8 months of ice making, the daily cost is: 8300/240=34.5 yuan Note: The direct cooling type ice mold basically does not need to be replaced, and there is no need to buy salt, so the cost is lower. Therefore: the direct cooling block ice machine costs 20 tons of ice per day: 100+1040+200=1340 yuan The cost of making 20 tons of ice per day for the brine type block ice machine is: 115+1360+300+16.7+34.5=1826.2 If 20 tons of ice cubes are produced per day, the direct cooling block ice machine will save cost than the salt water block ice machine if it is calculated as 8 months of production per year: (1826.2-1340)*30*8=116688 yuan The direct cooling block ice machine saves per ton of ice than the salt water block ice machine: (1826.2-1340)/20=24.3 yuan Thank you for entering the official website of Guangdong Xueyuan Ice Making Equipment Co., Ltd. If you have any cooperation intentions or suggestions, you can contact us through the following methods, and we will reply to you as soon as possible, thank you! Time of issue:2021-01-05 11:13:43 Add:North Park Middle Road, Shishan Science, and Technology Industrial Park, Shishan Town, Nanhai District, Foshan City, Foshan Username used for comment: Time of issue:2021-01-05 11:13:53 © 2020 Guangdong Snowland Refrigeration Equipment Co., Ltd. 粤ICP备16071012号 Powered by:300.cn Time of issue:2021-01-05 09:54:32 Hello, welcome to Guangdong Snowland Refrigeration Equipment Co., Ltd.
{"url":"http://en.51snowland.com/news/13.html","timestamp":"2024-11-09T15:54:31Z","content_type":"text/html","content_length":"97795","record_id":"<urn:uuid:ffaa2dc5-405e-4cf8-8dbb-2c3f880eb100>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00541.warc.gz"}
• Computer Aided Reasoning Material for a 3rd / 4th year course (g53cfr, g54 cfr) at the university of Nottingham 2010 by Thorsten Altenkirch • Type Theory in Rosario Material for an Agda course in Rosario, Argentina in 2011 by Thorsten Altenkirch • Software System Design and Implementation, undergrad(?) course at the University of New South Wales by Manuel Chakravarty. • Tüübiteooria / Type Theory, graduate course at the University of Tartu by Varmo Vene and James Chapman. • Advanced Topics in Programming Languages: Dependent Type Systems, course at the University of Pennsylvania by Stephanie Weirich. • Categorical Logic, course at the University of Cambridge by Samuel Staton. • Dependently typed functional languages, master level course at EAFIT University by Andrés Sicard-Ramírez. • Introduction to Dependently Typed Programming using Agda, research level course at the University of Edinburgh by Conor McBride. • Agda, introductory course for master students at ELTE Eötvös Collegium in Budapest by Péter Diviánszky and Ambrus Kaposi. • Types for Programs and Proofs, course at Chalmers University of Technology. • Dependently typed metaprogramming (in Agda), Summer (2013) course at the University of Cambridge by Conor McBride. • Computer-Checked Programs and Proofs (COMP 360-1), Dan Licata, Wesleyan, Fall 2013. • Advanced Functional Programming Fall 2013 (CS410), Conor McBride, Strathclyde, notes from 2015, videos from 2017. • Inductive and inductive-recursive definitions in Intuitionistic Type Theory, lectures by Peter Dybjer at the Oregon Programming Languages Summer School 2015. • Introduction to Univalent Foundations of Mathematics with Agda , MGS 2019 Martín Hötzel Escardó • Higher-Dimensional Type Theory (CSCI 8980), courses on homotopy type theory and cubical type theory, Favonia, the University of Minnesota, Spring 2020 • Correct-by-construction Programming in Agda, a course at the EUTYPES Summer School ‘19 in Ohrid. • Lectures on Agda, a course by Peter Selinger at Dalhousie University, Winter 2021. • HoTTEST Summer School 2022, online lectures by assorted instructors. • Advanced Programming Paradigms, Postgraduate course jointly offered by the Universities of Applied Sciences in Switzerland, by Daniel Kröni and Farhad Mehta.
{"url":"https://agda.readthedocs.io/en/v2.7.0/getting-started/tutorial-list.html","timestamp":"2024-11-02T04:53:49Z","content_type":"text/html","content_length":"18399","record_id":"<urn:uuid:7cd9bf88-f2d5-46b2-b63a-74e296182155>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00519.warc.gz"}
C02 Macroscopic Modeling of Damage Evolution in Forming Processes The main scientific objectives of C02 involve the development and verification of regularized, macroscopic, ductile damage models accounting for anisotropic material response, rate and temperature dependencies, as well as microstructural effects induced by recrystallization and phase transformation phenomena. C02 thus has a bridging function between characterization (project area B) and process technology (project area A), with the newly developed models contributing to the damage-controlled design of forming processes. During the 2nd funding period (FP) various phenomena have been incorporated into newly developed material models that enable the simulation of advanced forming processes and enhance the prediction of material properties along the entire process chain. This includes, e.g., anisotropic plastic flow (distortional hardening) in finite elastoplasticity, which is observed to significantly influence the material response, specifically damage evolution during, e.g., shear loading. Such advanced model based prediction is most relevant for control and design of suitable load paths in metal forming, as investigated by project area A (e.g., A02, A05, A06). Before extending this distortional hardening model towards the consideration of ductile damage, a large strain gradient-enhanced anisotropic ductile damage model has been established. This fictitious (undamaged) configuration based model accounts for load path dependencies in plastic flow and damage evolution, as observed for different combinations of, e.g., tensile and shear loading sequences, and is suitable for implementation into the finite element software Abaqus. The latter is achievable since regularization of micromorphic type for plasticity and (tensorial) damage is included via two additional scalar field equations of heat equation format. A suitable Abaqus framework has been established by S01 – the Abaqus implementation of the anisotropic ductile damage model constitutes ongoing work (S01, C02). This shall make the anisotropic model available to other projects for the simulation and successful prediction of deformation induced anisotropic damage evolution in advanced metal forming including load path changes (A02, A05, A06). Thermomechanical coupling has been investigated for steel 16MnCr5 and aluminum AW6016 (also as preliminary work for 3rd FP). Mechanical and thermal parameters are fitted – for 16MnCr5 over a wide temperature range (0 to >1200 ^oC). Different strain rates have been considered for 16MnCr5 with a key challenge being the appropriate incorporation of temperature dependencies in strain rate effects. As thermal softening and related localization may occur at (locally) high strain rates due to plastic heating and insufficient heat conduction, a micromorphic regularization approach of temperature contributions was implemented. The material’s microstructure is influenced by thermomechanical processes, e.g. hot working and annealing (A02–A07). To predict its evolution and related recrystallization induced softening influencing the effective thermomechanical behavior, a physics-based large strain thermo-elastoplasticity recrystallization model was implemented. This micromechanically motivated model unifies effects of static and dynamic recovery and recrystallization, grain growth and grain refinement. It allows to predict the response of unified recrystallization for sequential loading conditions during hot working and annealing. Moreover, it enables the modeling of (initial) microstructure dependent response, specifically damage, in subsequent cold forming. Parameters identified for the respective models are typically assumed constant for a given material, i.e. are assumed identical in and across different batches. In practice, initial material properties are associated with uncertainties due to inherent variabilities, such as variability of raw materials or production processes. Based on large strain elastoplasticity, a computationally efficient approach has been developed in collaboration with C06 (as preliminary work for 3rd FP) to quantify effects of uncertainty of homogeneously distributed material parameters on damage initiation related quantities. The model employs a numerically efficient variance-based global sensitivity analysis based on a Gaussian process regression surrogate model combined with a Bayesian active learning approach. Its application reveals variability in parameters related to nonlinear hardening response most significantly influencing damage initiation indicators. This emphasizes the potential of damage control through optimization of nonlinear hardening behavior as influenced by, e.g., heat treatment. Heat treatment is most relevant for microstructural design, resulting material properties, and damage evolution (A02-A05, A08, A10). Towards the integrative investigation of entire process chains in the 3rd funding period, modeling of microstructural effects induced by hot working and annealing will be extended by the modeling and simulation of phase transformations (e.g., during intercritical annealing and quenching) which, subsequent to recrystallization, is a key process within the production of, e.g., DP800 steel. Damage induced by forming is intrinsically influenced by the (initial) microstructures resulting from preceding heat treatments (A02, A05). Gradient-enhanced ductile damage models shall be extended to predict the influence of (initial) microstructure resulting from phase transformations on subsequent damage evolution (implementation in cooperation with S01). Reduction of pre-existing damage can result from both, dynamic processes, such as shrinkage up to closure of voids under mechanical loads, and static processes, such as additional healing of voids under thermal input as observed in recrystallization and phase transformation processes (A04, A05, characterization by B02–B04). New damage models will include both, static and dynamic damage reduction mechanisms, enhancing the prediction of material properties along entire process chains. Uncertainty quantification of the influence of parameters on, e.g., damage evolution is a key challenge and will be investigated (with C06 based on joint preliminary work) along representative load paths. Local material properties resulting from the respective microstructure of a material may be inhomogeneously distributed. In consequence, quantification of uncertainty of related microstructural properties and parameters will be established considering their inhomogeneous distributions. This analysis contributes to the reliable simulation based prediction of process-induced properties. Further materials, such as aluminum alloys (A02, A05, A10), austenitic stainless steel (A02) and hot forming steel (A05), shall be considered in addition to DP800 and 16MnCrS5 to transfer knowledge gained and modeling frameworks established to wider applicability. Report and current state of research Gradient-enhanced ductile damage models Continuum damage mechanics models are typically categorized into micromechanically motivated Gurson-type approaches, where damage evolution affects solely plastic flow through void volume fraction, and phenomenological Lemaitre-type models, where elastic material properties are additionally deteriorated by damage evolution. In C02, large strain Lemaitre-type models are developed to predict the influence of damage on material properties at macroscale (e.g., stiffness degradation). In the context of finite element simulations, local damage models may result in mesh-dependent responses together with loss of ellipticity of the governing field equations in the post-critical regime. Remedies to such mesh-dependence are the incorporation of non-local (integral) damage variables, additional gradient contributions, [Pee96], or the incorporation of viscous effects. In C02, gradient-enhanced damage models are established along the approach in [For09]. The gradient-enhanced isotropic ductile damage model developed in C02 has successfully been applied to forming processes such as air bending, [Spr20a], shape optimization, [Guh23], prediction of electrical resistance, [Güz23], and was also calibrated to DP800, [Spr20b*]. An implementation based on the framework in [Ost19] and, more recently, further improved within S01 enables the application of the gradient-enhanced ductile damage model to mesh-independent process simulation including advanced Abaqus-included features such as contact modeling. An example by analogy with forward rod extrusion processes is shown in Fig. 1a. Fig. 1: Grad.-enhanced ductile damage model applied to process simulation (cf. forward rod extrusion) - force-displ. re-lations for different shoulder opening angles α and distribution of damage variable d for α=60° (a). Experimental (□, △, ○) vs. simulated (lines) stress-strain data of 16MnCr5 for tensile loading at different rates and temperatures, [Opp22*] (b). Plastic anisotropy may strongly influence material response under, e.g., multiaxial, cyclic and shear loading, see [Kwe13] with focus on ductile damage. This anisotropy evolves due to microstructural features and different related models have been established, e.g. based on structural tensors [Kai20]. In C02, a large strain and stress-driven thermodynamically consistent distortional hardening model has been developed, [Mey21*], which guarantees yield surface convexity and includes isotropic and kinematic hardening. Experimental results for pearlitic steel are considered to assess the model’s performance revealing a significantly better fit to experimental data than previous models. Incorporation of anisotropic plasticity fundamentally contributes to the control and predictive design of load paths – and their changes – in advanced forming processes. Prior to the combination of this hardening model – including distortional, proportional and kinematic hardening – with a ductile damage model, a large strain anisotropic ductile damage model has been developed in C02, [Spra23*]. This is based on a fictitious (undamaged) configuration and a tensorial damage metric. The model facilitates its implementation into Abaqus so that other projects, particularly of area A, can use the model for advanced metal forming simulations by analogy with the gradient-enhanced isotropic ductile damage C02 model. The originally planned route for developing a gradient-enhanced anisotropic ductile damage model, e.g. of type in [Lan20], turned out to be unsuitable for implementation into Abaqus (without user elements). This concerns the tensor-valued damage field variable within a micromorphic framework. One potential alternative would have been to introduce scalar-valued field equations related to invariants of a (symmetric second order) damage tensor, cf. [Hol22]. The new C02 model, [Spr23*], further reduces computational costs by introducing only one additional scalar-valued field equation for regularizing anisotropic tensorial damage. The same concept of a scalar damage hardening variable embedded into a micromorphic approach has been applied to anisotropic brittle damage at small strains in [Fas19]. Basis for the (future) Abaqus implementation of the new C02 model, [Spr23*], is the novel Abaqus framework established in S01, [Sob24*]. This allows incorporation of several field equations of heat equation format in addition to balance of linear momentum. Focus of the new large strain gradient-enhanced anisotropic ductile damage model, [Spr23*], was on the introduction of suitable (effective) damage driving forces and regularization of damage and plasticity. Combination of the novel model with microcrack-closure-reopening effects, [Ekh03], and kinematic hardening, [Men05], constitutes future work. The proposed fictitious configuration based model includes a damage metric b[d] entering the elastic Helmholtz energy Ψ[e](F[, ]F[p, ][d](F[d])) with F[d] the local incompatible damage mapping. This damage metric contributes to related invariants, i.e. I[i] = I : [b[d •] E[e]]^i, and is introduced as take interpretation as scalar-valued damage variables, but referred to principal damage values motivated by the load bearing area in the respective principal directions of [d]. Evolution of plasticity and damage are included via a two-surface model, i.e. separate initiation functions for plasticity and damage evolution, namely The underlying effective driving forces , [d ]and the influence function f[dp] induce strong coupling between both phenomena. Effective Mandel-type stresses driving plastic flow include additional dependencies on the damage state, and damage threshold function is additionally weighted by an influence function depending on accumulated equivalent plastic strain α[p]. Contributions β[p] and β[d] represent nonlinear plastic and damage hardening stresses. Associated evolution equations together with Karush-Kuhn-Tucker conditions are solved monolithically at local level based on a Fischer-Burmeister algorithm – in contrast to [Guh20]. The novel model is able to predict strong load path dependencies of plastic flow and damage which may result in significantly different stress and damage states for identically deformed configurations generated along different load paths – see Fig. 2a for different tensile and shear loading sequences. This enables control of damage states, respectively prediction of improved load paths, which is of key interest for TRR 188. Moreover, a suitable set of material parameters was identified for DP800 based on data for uniaxial tension, Fig. 2: Gradient-enhanced anisotropic ductile damage, [Spr23*]. Different combinations of tensile and shear loading sequences, as represented by coefficients of the deformation gradient, result in different Kirchhoff stresses and damage values (a); plate with hole – no sufficient regularization obtained for gradient-enhanced damage combined with local plasticity (b) but for gradient-enhanced damaged combined with gradient-enhanced plasticity (c). For regularization, the micromorphic framework was applied to the plasticity and damage related scalar hardening stresses β[p] and β[d] resulting in only two additional scalar field variables to be solved for. The analysis in [Spr23*] demonstrated that regularization based on these scalars is sufficient to obtain mesh-independent finite element simulation results. Regularization of the damage contribution alone, respectively of the plasticity contribution alone, is in general insufficient to regularize representative boundary value problems. Instead, regularization has to be applied to both, damage and plasticity contributions; see also [Fri23]. Fig. 2b,c show a related classic example of a plate with hole under tension for initially isotropic material and deformation induced anisotropic damage evolution. Due to symmetry, one eighth of the specimen is discretized. The respective left and right halves in Fig. 2b,c show deformed finite element meshes and contour plots of maximum damage value and accumulated equivalent plastic strain simulated based on different activation of regularization contributions. As a result, gradient-enhanced damage in combination with local plasticity does not sufficiently regularize the boundary value problem as seen by localization of the deformed mesh in Fig. 2b. To the contrary, gradient-enhancement applied to both, damage and plasticity contributions results in sufficient regularization, see Fig. 2c. Thermomechanical coupling, recrystallization and phase transformation During manufacturing processes, the underlying materials may undergo significant changes in temperature –either prescribed, e.g. during heat treatment, or induced by deformation. Such thermal paths at elevated temperatures influence the material response, with temperature dependent yield limits as well as microstructure evolution induced by recrystallization and phase transformations being typical related phenomena. Models have been developed in C02, also as preliminary work for the 3rd FP, that capture these phenomena and which can be combined with ductile damage models (in the 3rd A thermo-viscoplasticy model was established in C02, [Opp22*], which is applicable to ductile thermo-viscoplastic response of various materials at wide temperature ranges. As an example, its parameters are particularly verified using data for 16MnCr5. The model incorporates temperature dependent parameters of thermal, thermo-elastic, and thermo-viscoplastic type in a thermodynamically consistent manner. It covers non-linear hardening, thermal softening, heat dissipation and generation due to structural cooling and plastic deformation. The choice of functions for the temperature-dependencies, including Perzyna-type viscoplastic flow, renders the model applicable over wide temperature ranges – for 16MnCr5 it has been verified for temperature ranges from 0 to >1200 ^oC. The model is embedded into a thermomechanically fully coupled finite element framework so that localization, e.g. due to thermal softening under elevated mechanical loading rates, can be predicted. In case of insufficient regularization by heat conduction, the model can be combined with a micromorphic approach to regularize thermal softening effects, [Muc23]. A particular model feature is its ability to appropriately capture the influence of temperature on viscous response. This is illustrated in Fig. 1b by the comparison of experimental and simulated stress-strain relations for uniaxial tension under different loading rates and temperature levels. Its combination with gradient-enhanced damage modeling, by analogy with [Opp19] and [Fel22], constitutes future research as well as its implementation into Abaqus. While model verification in [Opp22*] is based on experimental data generated for homogeneous strain states, model parameters for thermoplastic material response are identified on the basis of full field data in [Ros21*]. Aluminum alloy AW6016 was elaborated, as also relevant for the 3rd FP, with the focus on small changes in temperature induced by deformation under tensile loading. Full field digital image correlation data and full field thermography data were used for the purpose of parameter identification in the framework of the Finite-Element-Model-Updating (FEMU) method. The thermo-elastoplasticity model demonstrates identifiability of the model parameters for the experimental data considered. A key aspect of the model [Ros21*] is the application of an additional non-standard contribution to the Helmholtz energy in the form of Therein, the additional model parameter υ controls the amount of dissipation resulting from plastic deformation and contributing to the heat source. In contrast to other well-established approaches, such as the introduction of a Taylor-Quinney factor, the proposed formulation aims at thermodynamically consistent scaling of dissipation. For the experiments considered in [Ros21*], the model shows excellent agreement between measured and simulated temperature fields. The parameter identification framework can be transferred to other materials and suitable experiments. Further applications of the FEMU approach for time dependent material response in combination with gradient-enhanced damage are elaborated in [Sch20*]. Suitable starting values for parameter identification within FEMU can be generated via the neural network (NN) framework proposed in [Sch23*] which considers homogeneous deformation states. The NN was trained by different model parameter sets and for different combinations of mechanical loading paths, making it directly applicable to experimental data at different temperature levels. A physics-based micromechanically motivated large strain thermo-elastoplasticity recrystallization model unifying effects of static and dynamic recovery, static and dynamic recrystallization, grain growth and grain refinement – as present in processes such as hot forming and heat treatment – was implemented, [Böd23b*], to predict microstructure evolution and related recrystallization dependent softening affecting the effective thermomechanical behavior. The hypo-elasticity based recrystallization model in [Cho19] was transferred to a hyper-elasticity based thermo-elastoplasticity model to overcome limitations inherent to hypo-elasticity based approaches. The model and its implementation are a key basis for developments in the 3rd FP and shall be combined with modeling of (initial) microstructure dependent damage evolution in (subsequent) cold forming. An alternative back-stress related approach is investigated in [Moh22]. In its present form, the C02 model includes temperature dependent Hencky-type hyper-elasticity combined with non-associative temperature dependent hardening minus recovery for variable α[p] in Armstrong-Frederick format, with absolute temperature θ, plastic multiplier λ[p] and hardening stress β[p]. The recovery activation functions R[d,s] are of Arrhenius type. Physically motivated evolution equations are introduced for the increase and decrease of recrystallized volume fraction X as well as of average grain size d[g]. The proposed model results in a thermodynamical consistency condition that constrains the evolution of X and d[g] and gives rise to the thermodynamic relation between these. The model’s capability to predict the material response for thermomechanical sequential loading is demonstrated in Fig. 3a,b. This simulation involves a first hot working step, enforcing homogeneous states under uniaxial tension, followed by unloading and holding at various temperatures in a second step, Fig. 3a. The results show a sigmoidal evolution of recrystallized volume fraction and grain size reduction followed by grain size growth amongst other phenomena. The model was embedded in a thermomechanically fully coupled finite element formulation. Fig. 3b shows related simulation results where, due to inhomogeneous temperature distribution and static recrystallization, the yield limit near the boundary is reduced in contrast to inner regions of the specimen. It enables the prediction of (initial) microstructure dependent material response, specifically damage, in (subsequent) cold forming. In view of the key importance of phase transformation phenomena occurring during, e.g., intercritical annealing (as considered in A04) and quenching (A05), preliminary work on the formulation of suitable related models has been implemented which, moreover, constitutes ongoing research work, particularly with regard to its finite element implementation into Abaqus. At this stage, the formation of martensite by non-diffusive phase transformation is covered by a purely temperature dependent Koistinen-Marburger model, i.e. with martensite and austenite volume fractions ξ[M] and ξ[A], martensitic start temperature M[S], and transformation rate k. Additional diffusive phase transformation, such as formation of bainite, is modeled with an isothermal Johnson-Mehl-Avrami-Kolmogorov relation, [deO13]. An exemplary cooling path (TTT diagram) and related simulation based prediction of phase volume fraction evolutions is highlighted in Fig. 3c. Further developments shall account for respective mechanical contributions within a homogenization, respectively mixture approach, see [Bar21] and with focus on plasticity related contributions see [And22]. This enables the prediction of (initial) microstructure dependent material response in (subsequent) cold forming. Fig. 3: Finite thermoplasticity with static and dynamic recovery, static and dynamic recrystallization, grain growth and grain refinement (a-b) [Böd23b*] – response (recrystallized volume fraction, grain size) under uniaxial tension, unloading and holding at different temperatures (a) and FE simulation of static recrystallization (b); exemplary cooling path (TTT diagram, martensite finish temperature M[f]) and simulation of phase volume fractions for austenite, martensite, bainite (c). Uncertainty quantification Consideration of uncertainties was originally planned for the 3rd FP. Due to the shift from C05 to C06 within the 3rd FP and related changes in scientific methods, joint preliminary work (C02, C06) focused on uncertainty quantification, [Böd23a*]. This research particularly elaborates on the effect of uncertainty of homogeneously distributed parameters on extremal values of interest, such as stress triaxiality and Lode angle in plastically deformed devices, since these are accepted as key indicators for damage initiation in metal forming. The effects of material parameter uncertainty on stress triaxiality and Lode angle are quantified by means of a numerically efficient variance-based global sensitivity analysis framework together with a Gaussian process regression surrogate model and is combined with Bayesian active learning. This sensitivity analysis, in form of Sobol’ sensitivity indices S[i], is based on expected value E[Θ] and variance V[Θ], i.e. wherein θ corresponds to a set of material parameters realized by random variables Θ with Θ[-i] corresponding to Θ without Θ[i]. Moreover, y = 𝑔(θ) is a maximum value of interest, such as maximum stress triaxiality within the considered domain. Fig. 4 shows results of the established formulation by the analysis of a notched specimen under tension. Due to symmetry and homogeneously distributed material parameters, one eighth of the specimen is considered within the underlying finite element simulations. Fig. 4a shows the distribution of stress triaxiality at a particular loading stage (prior to necking), whereas Fig. 4b shows the evolution of stress triaxiality at two representative positions for different material parameter sets. As a key result of the uncertainty quantification, main and total Sobol’ sensitivity indices S[i] and S[T,i] for maximum stress triaxiality as established damage initiation indicator show particularly high sensitivity for parameters related to nonlinear hardening, Fig. 4c. The results underline the potential of damage control through optimization of the material’s nonlinear hardening behavior influenced by, e.g., heat treatment. In particular, to ensure accurate predictions of damage, special emphasis must be placed on the fitting of material parameters describing the nonlinearity of the hardening response. This preliminary work contributes to reliable simulation based prediction of process-induced properties and is most useful for its extension towards, e.g., more advanced material models and the consideration of inhomogeneous distributions of model parameters. Fig. 4: Notched specimen and stress triaxiality distribution at a load state prior to necking generated by FE simulation of one eighth of the specimen (a); stress triaxiality evolut. at points P1 and P2 for different mat. parameter sets (b); main and total Sobol’ sensitivity indices S[i] and S[T,i] of the underlying parameters associated with maximum stress triaxiality (c). During the 2nd FP, gradient-enhanced ductile damage models were extended – in particular to anisotropic ductile damage suitable for implementation into Abaqus and including gradient-enhancement of plasticity related contributions. This enabled the successful use and application of the models for mesh-independent forming process simulations by analogy with the previously established and verified gradient-enhanced isotropic damage model successfully applied to, e.g., air bending, extrusion processes, and prediction of electrical resistance in a formed specimen. This anisotropic formulation had to follow a different derivation and implementation (suitable for Abaqus) than originally planned. While further model extensions, such as anisotropic distortional hardening, thermal softening and rate-dependent effects over wide temperature ranges, have been established in the 2nd FP, their combination with gradient-enhanced ductile damage still constitutes future research. Instead, in a collaborative work together with new project C06 and as preliminary work for the 3rd FP, a computationally efficient framework for uncertainty quantification was established. In addition to preliminary work on phase transformations, a finite thermoplasticity model for static and dynamic recovery and recrystallization, grain growth and grain refinement was developed and implemented for thermomechanically fully coupled finite element simulations. This enables the prediction of recrystallization dependent softening effects, microstructure evolution and related effects on (subsequent) cold forming processes. Further challenges in the area of macroscopic modeling of damage in forming processes concern the transition of established models to the simulation of other materials. This may require further model extensions in addition to verification. Additional phenomena such as thermomechanically induced reduction of (pre-existing) damage is not comprehensively included in the current state of research, which similarly applies to more advanced uncertainty quantification analysis. Interactions of recrystallization and phase transformation phenomena, which trigger particular microstructures of the respective metals, with damage evolution are, at this stage, neither fully understood nor sufficiently predictable by advanced modeling and simulation. Project- and subject-related list of publications [Alf04] K.S. Alfredsson, U. Stigh (2004) Continuum damage mechanics revised – A principle for mechani-cal and thermal equivalence. Int. J. Solids Struct., 41:4025-4045, DOI:10.1016/ [And22] Andersson, B., Ahlström, J., Ekh, M., Josefson, B.L. (2022) Homogenization based macroscopic model of phase transformations and cyclic plasticity in pearlitic steel. J. Therm. Stress., 45 (6):470-492, DOI:10.1080/01495739.2022.2056557 [Ban10] Banabic, D. (2010) Sheet Metal Forming Processes. Springer, DOI:10.1007/978-3-540-88113-1 [Bar21] Bartel, T., Geuken, G.-L., Menzel, A. (2021) A thermodynamically consistent modelling framework for strongly time-dependent bainitic phase transitions. Int. J. Solids Struct., 232:111172, [Böd23a*] Böddecker, M., Faes, M.G.R., Menzel, A., Valdebenito, M.A. (2023) Effect of uncertainty of ma-terial parameters on stress triaxiality and Lode angle in finite elasto-plasticity – a variance-based global sensitivity analysis. Adv. Ind. Manuf. Eng., 7:100128, DOI:10.1016/j.aime.2023.100128 [Böd23b*] Böddecker, M., Menzel, A. (2023) A large strain thermoplasticity model including recovery, re-crystallisation and grain size effects. Proc. Appl. Math. Mech., 23(4):e202300282, DOI:10.1002/ Cho, H.E., Hammi, Y., Bowman, A.L., Karato, S., Baumgardner, J.R., Horstemeyer, M.F. (2019) A unified static and dynamic recrystallization Internal State Variable (ISV) constitutive model [Cho19] coupled with grain size evolution for metals and mineral aggregates. Int. J. Plast., 112:123-157, DOI:10.1016/j.ijplas.2018.08.009 [deO13] de Oliveira, W.P., Savi, M.A., Pacheco, P.M.C.L. (2013) Finite element method applied to the quenching of steel cylinders using a multi-phase constitutive model. Arch. Appl. Mech., 83:1013-1037, DOI:10.1007/s00419-013-0733-x [Ekh03] Ekh, M., Menzel, A., Runesson, R., Steinmann, P. (2003) Anisotropic damage with the MCR effect coupled to plasticity. Int. J. Eng. Sci., 41(13-14):1535-1551, DOI:10.1016/S0020-7225(03) [Fas19] Fassin, M., Eggersmann, R., Wulfinghoff, S., Reese, S. (2019) Gradient-extended anisotropic brittle damage modeling using a second order damage tensor – theory, implementation and nu-merical examples. Int. J. Solids Struct., 167:93-126, DOI:10.1016/j.ijsolstr.2019.02.009 [Fel22] Felder, S., Kopic-Osmanovic, N., Holthusen, H., Brepols, T., Reese, S. (2022) Thermo-mechanically coupled gradient-extended damage-plasticity modeling of metallic materials at finite strains. Int. J. Plast., 148:103142, DOI:10.1016/j.ijplas.2021.103142 [For09] Forest, S. (2009) Micromorphic approach for gradient elasticity, viscoplasticity, and damage. J. Eng. Mech., 135(3):117-131, DOI:10.1061/(ASCE)0733-9399(2009)135:3(117) Friedlein, J., Mergheim, J., Steinmann, P. (2023) Efficient gradient enhancements for plasticity with ductile damage in the logarithmic strain space. Euro. J. Mech. A/Solids, 99:104946, [Fri23] DOI:10.1016/j.euromechsol.2023.104946 [Fae21] Faes, M.G., Daub, M., Marelli, S., Patelli, E., Beer, M. (2021) Engineering analysis with probability boxes: A review on computational methods. Struct. Saf., 93:102092, DOI:10.1016/ [Guh20] Guhr, F., Sprave, L., Barthold, F.-J., Menzel, A. (2020) Computational shape optimisation for a gradient-enhanced continuum damage model. Comp. Mech., 65:1105-1124, DOI:10.1007/ [Guh23] Guhr, F., Barthold, F.-J. (2023) Variational sensitivity analysis and shape optimisation applied to a non-local ductile damage model. Comp. Mech., DOI:10.1007/s00466-023-002377-w [Güz23] Güzel, D., Kaiser, T., Lücker, L., Baak, N., Walther, F., Menzel, A. (2023) Characterisation of dam-age by means of electrical measurements: numerical predictions. Proc. Appl. Math. Mech., 23(2):e202300013, DOI:10.1002/pamm.202300013 [Hol22] Holthusen, H., Brepols, T., Reese, S., Simon, J.-W. (2022) A two-surface gradient-extended aniso-tropic damage model using a second order damage tensor coupled to additive plasticity in the logarithmic strain space. J. Mech. Phys. Solids, 163:104833, DOI:10.1016/j.jmps.2022.104833 [Kai20] Kaiser, T., Lu, J., Menzel, A., Papadopoulos, P. A covariant formulation of finite plasticity with plasticity-induced evolution of anisotropy: modeling, algorithmics, simulation, and comparison to experiments. Int. J. Solids Struct., 185-186:116-142, DOI:10.1016/j.ijsolstr.2019.08.005 [Kou08] Kouznetsova, V., Geers, M.G.D. (2008). A multi-scale model of martensitic transformation plastici-ty. Mech. Mat., 40:641-657, DOI:10.1016/j.mechmat.2008.02.004 [Kwe13] Kweon, S. (2013) Investigation of shear damage considering the evolution of anisotropy. J. Mech. Phys. Solids, 61(12):2605-2624, DOI:10.1016/j.jmps.2013.08.006 [Lan20] Langenfeld, K., Mosler, J. (2020) A micromorphic approach for grad.-enhanced anisotropic ductile damage. Comp. Methods Appl. Mech. Engrg., 360:112717, DOI:10.1016/j.cma.2019.112717 [Men05] Menzel, A., Ekh, M., Runesson, K., Steinmann, P. (2005) A framework for multiplicative elastoplas-ticity with kinematic hardening coupled to anisotropic damage. Int. J. Plast., 21 (3):397-434, DOI:10.1016/j.ijplas.2003.12.006 [Mey21*] Meyer, K.A., Menzel, A. (2021) A distortional hardening model for finite plasticity. Int. J. Solids Struct., 232:111055, DOI:10.1016/j.ijsolstr.2021.111055 [Moh22] Mohammadpour, A., Geers, M.G.D., Kouznetsova, V.G. (2022) Multi‑scale modeling of the thermo‑mec. behavior of cast iron. Multiscale Sci. Eng., 4:119-136, DOI:10.1007/s42493-022-00081-0 [Muc23] Mucha, M., Rose, L., Wicislo, B., Menzel, A., Pamin, J. (2023) Experiments and numerical simula-tion of Lueders bands and Portevin-Le Chatelier effect in aluminium alloy AW5083. Arch. Mech., 75(3):301-336, DOI:10.24423/aom.4204 [Nol20] Noll, I., Bartel, T., Menzel, A. (2020) A computational phase transformation model for selective laser melting processes. Comp. Mech., 66:1321-1342, DOI:10.1007/s00466-020-01903-4 [Opp19] Oppermann, P., Denzer, R., Menzel, A. (2019) Finite-strain thermo-viscoplasticity for case-hardening steels over a wide temperature range. Proc. Appl. Math. Mech., 19:e201900237, [Opp22*] Oppermann, P., Denzer, R, Menzel, A. (2022) A thermo-viscoplasticity model for metals over wide temperature ranges – application to case hardening steel. Comp. Mech., 69:541-56, DOI:10.1007 [Ost19] Ostwald, R., Kuhl, E., Menzel, A. (2019) On the implementation of finite deformation gradient-enhanced damage models. Comp. Mech., 64(3):847-877, DOI:10.1007/s00466-019-01684-5 [Pee96] Peerlings, R.H.J., de Borst, R., Brekelmans, W.A.M., de Vree, J.H.P., Spee, I. (1996) Some ob-servations on localisation in non-local and gradient damage models, Euro. J. Mech. A/Solids, 15:937-953, https://research.tue.nl/en/publications/2b303112-d927-488a-ad1b-46669e5e28bb [Ros21*] Rose, L., Menzel, A. (2021) Identification of thermal material parameters for thermo-mechanically coupled material models. Meccanica, 56:393-416, DOI:10.1007/s11012-020-01267-2 [Sch20*] Schulte, R., Ostwald, R., Menzel, A. (2020) Gradient-enhanced modelling of damage for rate-dependent material behaviour – a parameter identification framework. Materials, 13(14):3156, [Sch23*] Schulte, R., Karca, C., Ostwald, R., Menzel, A. (2023) Machine learning-assisted parameter identi-fication for constitutive models based on concatenated loading path sequences. Euro. J. Mech. A/Solids, 98:104854, DOI:10.1016/j.euromechsol.2022.104854 [Sob24*] Sobisch, L., Kaiser, T., Furlan, T., Menzel, A. (2024) A user material approach for the solution of multi-field problems in Abaqus: Theoretical foundations, gradient-enhanced damage mechanics and thermo-mech. coupling. Finite Elem. Anal. Des., 232:104105, DOI:10.1016/j.finel.2023.104105 [Son22] Song, J., Wei, P., Valdebenito, M.A., Faes, M., Beer, M. (2022) Data-driven and active learning of variance-based sensitivity indices with Bayesian probabilistic integration. Mech. Syst. Signal Pro-cess., 163:108106, DOI:10.1016/j.ymssp.2021.108106 [Spr20a] Sprave, L., Schowtjak, A., Meya, R., Clausmeyer, T., Tekkaya, A.E., Menzel, A. (2020) On mesh dependencies in finite-element-based damage prediction: Application to sheet metal bending. Prod. Eng., 14:123-134, DOI:10.1007/s11740-019-00937-9 [Spr20b*] Sprave, L., Menzel, A. (2020) A large strain gradient-enhanced ductile damage model – Finite element formulation, experiment and parameter identification. Acta Mech., 231(12):5159-5192, [Spr23*] Sprave, L., Menzel, A. (2023) A large strain anisotropic ductile damage model – Effective driving forces and gradient-enhancement of damage vs. plasticity. Comp. Methods Appl. Mech. Engrg., 416:116284, DOI:10.1016/j.cma.2023.116284 [Voy11] Voyiadjis, G.Z., Shojaei, A., Li, G., Kattan, P.I. (2011) A theory of anisotropic healing and damage mechanics of materials. Proc. R. Soc. A, 468:163-183, DOI:10.1098/rspa.2011.0326 [Yad12] Yadegari, S.Turteltaub, S., Suiker, A.S.J. (2012) Coupled thermomechanical analysis of transfor-mation-induced plasticity in multiphase steels. Mech. Mat., 53:1-14, DOI:10.1016/
{"url":"https://trr188.de/index.php/en/research-results/project-area-c/c02-macroscopic-modeling-of-damage-evolution-in-forming-processes","timestamp":"2024-11-06T13:45:34Z","content_type":"text/html","content_length":"136147","record_id":"<urn:uuid:bf98e7c3-3ad6-41cf-aa2c-2cf1f6c44fc0>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00055.warc.gz"}
VPEDIGREE directive • Genstat Knowledge Base 2024 Generates an inverse relationship matrix for use when fitting animal or plant breeding models by REML. SEX = string token Possible sex categories of parents (fixed, either); default fixe UNKNOWN = scalar Value to be treated as unknown INDIVIDUALS = factors Individuals on which data has been measured MALEPARENTS = factors Male parents of the progeny FEMALEPARENTS = factors Female parents of the progeny INVERSE = pointer Inverse relationship matrix in sparse matrix form POPULATION = variates Full list of identifiers generated from the individuals and parents VPEDIGREE is used to generate a sparse inverse relationship matrix for use when fitting animal (or plant) breeding models by REML. The algorithm requires three parallel factors as input. The numerical levels of these factors must give identifiers for the individuals from which data are available (INDIVIDUALS) and the identifiers for the male and female parents for each individual (MALEPARENTS and FEMALEPARENTS). Note that an individual may appear as both progeny and a parent (for example, when data has been taken from several generations) and conversely, that if an identifier appears in more than one list then it is assumed to refer to a single individual. Also, the algorithm does not take account of labels, so where textual labels are used the labels vectors of the three factors should be identical in order to generate matching levels vectors and thus avoid errors. A complete list of all individuals in the three factors is compiled and can be saved using the POPULATION option, and on output, the three factors will be redefined with this list as their levels vector. The inverse relationship matrix that is generated is held in a special sparse matrix form (that is, only non-zero values are stored), using a pointer. This is usable in the VSTRUCTURE directive but not, currently, elsewhere in Genstat. The second element of the pointer is a variate storing the non-zero values of the inverse matrix in lower-triangular order. The first element of the pointer is an integer index vector. This vector is not a standard Genstat data structure, and so cannot be used except by VSTRUCTURE. By default, it is assumed that an individual can act as either a male or female parent but not both. Option SEX=either can be used to specify that individuals can act as both male and female parents. This may be useful, for example, in plant breeding analyses. Missing values in any of the factors will be treated as coding for unknown individuals. Option UNKNOWN allows you to specify an additional scalar value used to represent unknown individuals. Options: SEX, UNKNOWN. Parameters: INDIVIDUALS, MALEPARENTS, FEMALEPARENTS, INVERSE, POPULATION. VPEDIGREE ignores any restrictions on the factors. See also Directives: REML, VCOMPONENTS, VSTRUCTURE, VRESIDUAL, VSTATUS. Procedure: VFPEDIGREE. Commands for: REML analysis of linear mixed models. " Example REML-6: genetic analysis example (Harvey, W.R. (1977). Users' guide to LSML76)." SPLOAD '%GENDIR%/Examples/harvey.gsh' VPEDIGREE [UNKNOWN=0] INDIVIDUALS=Calf; MALEPARENTS=Sire; FEMALEPARENTS=Dam;\ INVERSE=inv; POPULATION=pop VCOMPONENTS [FIXED=Line] RANDOM=Calf VSTRUCTURE [TERMS=Calf] MODEL=fixed; INVERSE=inv REML [PRINT=model,components,effects,means,vcovariance,Waldtests]\ Y=Y3; SAVE=harvey " Calculate heritability " VKEEP [VESTIMATE=vestim] CALCULATE h2 = vestim$[1]/(1+vestim$[1]) PRINT 22; HEADING='Heritability'
{"url":"https://genstat.kb.vsni.co.uk/knowledge-base/vpedigre/","timestamp":"2024-11-11T20:52:01Z","content_type":"text/html","content_length":"41431","record_id":"<urn:uuid:bb5578fb-70e4-4495-a024-8fd9bf954548>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00470.warc.gz"}
Asset Bubbles and the “What’s 2/3 of the Average” Game | The Incidental Economist I’ve received several e-mails and some comments about the “what’s 2/3 of the average” game, which I analyzed earlier this week. (If you haven’t read that analysis you might find this post hard to follow.) Some of them suggested that players are not rational if they do not select zero. That’s not quite right. I explained why in my analysis. Below I explain again and show how this game is a stylized model of an asset bubble. What makes the game interesting is precisely that all players can be rational and still not select zero. Why? Because it takes more than rationality on the part of players for zero to be the winning answer. Rationality is doing what is in one’s best interest. It would not be in one’s best interest to select zero if one has reason to believe that a number larger than zero will be the winning answer. In that case one will regret one’s choice of zero. Indeed, zero will not be the winning answer unless something much stronger than rationality holds, namely common knowledge of rationality. As explained in the analysis of the game, common knowledge of rationality occurs when everyone knows everyone knows everyone knows…(an infinite number of these)…everyone knows everyone is rational. Rationality by itself is not enough for the winning answer to be zero. Something like this game, and the difference between rationality and common knowledge of rationality, arises in asset markets in the form of bubbles. It can be rational to participate in a market bubble. One can make a lot of money. One can also lose one’s shirt. I’m not suggesting it is wise to speculate in a market bubble, but it can be rational to do so. To make the connection between asset bubbles and the “what’s 2/3 of the average” game explicit, imagine your guess of a number between 0 and 100 is the maximum price at which you’d be willing to buy a share of stock. Call your guess G (for “guess”). That’s the most you think the stock is worth for a purely speculative purchase. The actual market price is 2/3 of the average of all guesses. Call this value P (for “price”). What’s the value of the stock? Will you participate in this market? For example, your guess could be G=$10 and 2/3 of the average of all guesses could be P=$20. In that case you could make a profit if you could buy at any value below $20. Your guess G=$10 is the most you’d be willing to pay. If you could buy at $10 and sell at the market price of $20, you’d make a profit of $10 per share. It would be rational for you to do so. What if no one will sell to you at $10 but someone will sell to you at $15. Should you buy? Yes you should. But your guess of $10 was the maximum you’d pay. You won’t pay $15 or you’d have guessed $15 (or more). So you’re out of the market. You lost an opportunity. Your guess was too low! In fact, it’d be rational for you to pay up to $19.99 since you can turn around and sell at $20. You can see there is an incentive to bid (guess G) higher than is suggested by appeal to common knowledge of rationality. You want to participate in this market (this game) to make money. You can’t make money by sitting on the sidelines. But that is exactly where you’ll be for sure if you guess G=$0. This is still the “what’s 2/3 of the average” game. We’ve just interpreted the guesses and the winning value in a specific way. So we know the “right” price of the stock based on an argument of common knowledge of rationality is $0. But we also know that P (2/3 of the average of guesses) will not be $0. In fact it is likely to be close to $20 (*). Everyone who is able to buy the stock at a price below P can make a profit and they’re rational to do so. It is not irrational to set G above zero. In fact it can be a very smart thing to do (in this game). Therefore, a speculative bubble for a worthless stock can develop for which the price is far above the “right” price. Many market participants are behaving rationally. The bubble exists because an assumption of common knowledge of rationality does not hold. But an asset bubble is not a one-shot game. Players buy and sell multiple times. Eventually additional iterations of assumptions of rationality emerge. The price begins to fall. The bubble bursts, the price goes to zero, and everybody knows it. Moreover, it is common knowledge. (*) One has to be a bit careful. The version of the “what’s 2/3 of the average” game presented here is different than the one presented previously. In particular the payoffs are different. In the version presented before there was a winner: the player(s) that guessed closest to 2/3 of the average of the guesses. In the version in this post many players can profit (by differing amounts). Though someone may make the most money (per share) I haven’t really defined a “winner.” Payoffs (incentives) can change strategy. It is therefore possible that 2/3 of the average will be different in the two versions of the game. Strictly speaking it is not correct to assume that a number near 20 will be 2/3 of the average in this version even if it is for the previous version. Nevertheless, I’d bet a fortune that 2/3 of the average will not be zero in either game played by a population not familiar with it. After that population plays many times the answer will likely tend toward zero. How many iterations will it take to converge? I don’t know.
{"url":"https://theincidentaleconomist.com/wordpress/asset-bubbles-and-the-whats-23-of-the-average-game/","timestamp":"2024-11-11T23:23:20Z","content_type":"text/html","content_length":"69118","record_id":"<urn:uuid:b6ccaf7e-837c-4157-bb2e-9c7e3b5ad868>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00727.warc.gz"}
New Orbs 08-05-2017, 12:17 AM (This post was last modified: 08-05-2017, 12:30 AM by Gandor.) #1: Ascension Orb. Ascension points plus reset timer reduction. 1* 2 ascension points, 1 hour reduction. 2* 8 ascension points, 2 hour reduction. 3* 18 ascension points, 3 hour reduction. 4* 32 ascension points, 4 hour reduction. 5* 50 ascension points, 5 hour reduction. 6* 72 ascension points, 6 hour reduction. #2: Beast Orb. Heck if I know what it does, but it was teased! #3: Cost Reduction Orb. Reduces AP cost of various things including: Standard upgrades, wood upgrades, stone upgrades, dynamite, AP during pet arena, AP when fast tracking pet arena, pet food from shop, etc. 1* 1% AP cost reduction. 2* 4% AP cost reduction. 3* 9% AP cost reduction. 4* 16% AP cost reduction. 5* 25% AP cost reduction. 6* 36% AP cost reduction. #4: Expedition Orb. Increases rewards of expeditions. 1* 1% extra expedition reward. 2* 4% extra expedition reward. 3* 9% extra expedition reward. 4* 16% extra expedition reward. 5* 25% extra expedition reward. 6* 36% extra expedition reward. #5: Pet Rage Orb. Extends range of Final Blow pet map ability activation. (Normally 25% of max HP remaining) 1* 26% max enemy HP remaining to activate. 2* 28% max enemy HP remaining to activate. 3* 31% max enemy HP remaining to activate. 4* 35% max enemy HP remaining to activate. 5* 40% max enemy HP remaining to activate. 6* 46% max enemy HP remaining to activate. #6: Nullifier Orb. Chance to nullify damage in ship arena (AI battles only). 1* 1% chance to nullify damage. 2* 2% chance to nullify damage. 3* 3% chance to nullify damage. 4* 4% chance to nullify damage. 5* 5% chance to nullify damage. 6* 6% chance to nullify damage. #7: Card Booster Orb. Increases Card Drop rate, Card Drop size, and Card damage. 1* 2% increased drop rate, 3% drop size, 2% damage. 2* 8% increased drop rate, 12% drop size, 5% damage. 3* 18% increased drop rate, 27% drop size, 9% damage. 4* 32% increased drop rate, 48% drop size, 14% damage. 5* 50% increased drop rate, 75% drop size, 20% damage. 6* 72% increased drop rate, 108% drop size, 27% damage. #8: Inferno Grind Orb. Reduces max HP of mob based on Inferno tier but maintains selected Inferno tier rewards. Uses average of all party members orb levels to calculate final reduction. 1* Mob HP reduced 2 inferno tiers. 2* Mob HP reduced 8 inferno tiers. 3* Mob HP reduced 18 inferno tiers. 4* Mob HP reduced 32 inferno tiers. 5* Mob HP reduced 50 inferno tiers. 6* Mob HP reduced 72 inferno tiers. #9: Bastion Orb. Increase challenge timer. (Series and regular.) Average orb levels of all challenge party members are used to calculate final timer length bonus. 1* Add 1 second to challenge timer. 2* Add 2 seconds to challenge timer. 3* Add 3 seconds to challenge timer. 4* Add 4 seconds to challenge timer. 5* Add 5 seconds to challenge timer. 6* Add 6 seconds to challenge timer. #10: Treasure Orb. Drastically improve treasure mob rewards. Both gold AND xp. 1* 20% better rewards. 2* 80% better rewards. 3* 180% better rewards. 4* 320% better rewards. 5* 500% better rewards. 6* 720% better rewards. 08-05-2017, 01:07 AM All these orbs are so good!!! Ray, add em! 08-05-2017, 01:34 AM Definitely some interesting ideas here. I hadn't even really thought much beyond the beast orb, but there's no reason an arrow couldn't be added for a whole new set. 08-05-2017, 01:34 AM #1: Ascension Orb i like #3: Cost Reduction Orb i like but could be broken af #4: Expedition Orb. good if it doesnt work like max rewards and being a normal multipler instead #5: Pet Rage Orb. i like #6: Nullifier Orb. fuck this shit #7: Card Booster Orb cool idea i like especially the carddmg and dropchance increase #8: Inferno Grind Orb #9: Bastion Orb. only cool if it really gives time for series and if ray finally decides to remove all unnecessary series levels to decrease time-investment #10: Treasure Orb would prefer a rework of treasures above more gold and to fulfil gand0rs request: lmao 08-06-2017, 07:07 AM (This post was last modified: 08-06-2017, 07:09 AM by ExWhiteWolf. Edit Reason: better wording for expedition ) #1: Ascension Orb Some sort of ascension/lvl multiplier for more ascension points would be cool. Or even just a % increase so the only useful part isn't just the time reduction. #3: Cost Reduction Orb This would probably be pretty broken. #4: Expedition Orb. I would like if this is a normal multiplier to all rewards. Unless the expedition is a special crit expeditions. When those happen the crit expeditions should have a chance to get the multiplier. #5: Pet Rage Orb. yes just yes #6: Nullifier Orb. No i don't fkin need rng into ship arena. #7: Card Booster Orb Might as well we already gotten some other card boosts. #8: Inferno Grind Orb Don't feel this would be very impactful maybe a % too for balance reasons. #9: Bastion Orb. Ehhh I always felt it should be 30 seconds and then increase from there. There is also the struggle later that series doesn't mean much and the time investment is huge. #10: Treasure Orb I think ray was planning a rework for treasure anyway. 08-09-2017, 06:50 AM (08-06-2017, 07:07 AM)ExWhiteWolf Wrote: #6: Nullifier Orb. No i don't fkin need rng into ship arena. Aww but why not? Don't you want to grind your face into dust against the wall of RNG that could be?! But yes I actually agree this probably isn't the best idea, let's just make it a straight damage reduction, 2% per orb level. Hooray!
{"url":"https://iourpg.com/forum/showthread.php?tid=2458&pid=13611#pid13611","timestamp":"2024-11-04T17:43:48Z","content_type":"application/xhtml+xml","content_length":"44001","record_id":"<urn:uuid:e750851f-d686-4526-9ca5-da938951a499>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00211.warc.gz"}
irect and inverse proportion Variation - direct and inverse proportion Direct Proportion If two quantities exist such that the increase in one leads to an increase in the other, they are in direct proportion. E.g. The speed of a car and the distance covered during a certain time - the greater the speed , the greater the distance. The cost of five books is £20. Find the cost of 7 books. How many books can someone buy for £36? 5 books ----------> £20 1 book -----:-5----> £4 7 books ----X4------> £28 Now, reverse the calculation as follows: £20 ----------> 5books £1 -----:-20----> 5/20 book £36 ----X36------> 9 books Inverse Proportion If two quantities exist such that the increase in one leads to the decrease in the other, they are said to be in inverse proportion. The speed of a car is in inverse proportion to the time taken for a certain journey. If a car moves at 20 mph, it covers a distance in 6 hrs. How long will it take to cover the same distance at 30 mph? It the time taken for the same journey is 8 hrs, calculate the speed. 20 mph ----------> 6 hrs :-20 ----------> X 20 1 mph --------->120 hrs X 30 ----------> :- 30 30 mph ----------> 4 hrs Now, reverse the calculation as follows: 6 hrs ----------> 20 mph :-6 ----------> X 6 1 hour --------->120 mph X 8 ----------> :- 8 8 hrs ----------> 15 mph Direct Variation When one quantity increases, if another quantity increases with that, it is called direct variation. The length of a rubber band (l) and the force that stretches it (F), are in direct variation. l ∝ F l = kF, where k is a constant. Please move the slider and experiment with it. l ∝ F The length of a rubber band, when stretched by 20N is 6cm. Find the length, when the force is raised to 60N. Find the Force that keeps the length at 15cm. l ∝ F l = kF When l = 6 and F = 20, 6 = k x 20 k = 6/20 When F = 60, l = 6/20 x 60 = 18 F = 18 N. When l = 15, 15 = 6/20 x F F = 50N. The power of a certain device (P) is directly proportional to the square of the current through it. The power is 60W, when the current is 2Amps. Find the power, when the current is 3Amps. Find the current when the power is 90W. P ∝ I^2 P = k I^2 When I = 2 and P = 60, 60 = K X 4 k = 15 When I = 3, P = 15 x 9 P = 135W. When P = 90, 90 = 15 * I^2 I = 2.3A. Inverse Variation When one quantity increases, if another quantity decreases with that, it is called inverse variation. The volume of a fixed mass of gas is inversely proportional to the pressure. V ∝ 1/P V = k1/P Please move the slider and experiment with it. P ∝ 1 / V The volume of a gas container is 3cm^3 when pressure is 12Pa. Find the volume, when the pressure is 36N. At what pressure, will the volume be 9cm^3? When P = 12 and V = 3, 3 = k x 1/12 k = 36 When P = 36, V = 36 x 1/36 = 1 V = 1cm^3 When v = 9, 9 = 36 x 1/P P = 4Pa. The force between two magnets is inversely proportional to the distance between them. When the distance is 3cm, the force is 12N. Find the force, when the distance is 2cm. Calculate the distance, when the force is 432N as well. F ∝ 1/d^2 F = k 1/d^2 When d = 3 and F = 12, 12 = k x 1/9 k = 108 When d = 2, F = 108 x 1/4 F = 27N. When F = 432, 432 = 108 x 1/d^2 d^2 = 1/4 d = 1/2cm. Now, in order to complement what you have just learnt, work out the following questions: 1. The distance d through which a feather falls from rest is proportional to the square of the time taken t. if the feather falls 15m in 3 seconds, how far will it fall in 9 seconds? How long will it take to fall 45m? 2. The energy E stored in a square-plated capacitor varies as the square of the length of the side x. When the length is 3cm, the energy stored is 81 Joules. What is the energy stored when the length is 5cm? What is the length when the stored energy is 243Joules? 3. The length L of an embryo of a certain mammal is proportional to the square root of the number of hours left before its birth. If the embryo is 12 cm long before 2 hours, how long will it be after 8 hours? How long will it take to grow to a length of 18cm? 4. p is directly proportional to q. Fill in the banks. 5. L is directly proportional to (M - 2). L is 36 when M is 4. Find L when M = 10. Find M when L = 108. 6. The kinetic energy of an object is proportional to the square of its speed. The kinetic energy is 64J when the speed is 4m/s. Find the kinetic energy when the speed is 20m/s. At which speed will the Kinetic energy be 900J? 7. The frictional force against the motion of a car is proportional to the square of the speed of the car. If this force is 16000N at 10 m/s, what is the force at 30 m/s? At what speed is the force equal to 900N? 8. The force of attraction between any two stars is inversely proportional to the square of the distance between them. The force between two, when they are 4 light years apart is 24x10^64N.Find the force when the distance is 3 light years. If the force is 48x10^64N, find the distance between them too. 9. In the following table, p is inversely proportional to q. Fill in the blanks. 10. y is inversely proportional to (x - 3). y = 4 when x = 5. Find y when x = 18. Find x when y = 144.
{"url":"https://www.vivaxsolutions.com/maths/gcsevariation.aspx","timestamp":"2024-11-06T04:44:40Z","content_type":"text/html","content_length":"24648","record_id":"<urn:uuid:1ede73aa-7b6e-4b4e-a129-38e8d7ce49bf>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00232.warc.gz"}
GRMNOMIAL procedure • Genstat Knowledge Base 2024 Generates multinomial pseudo-random numbers (D.B. Baird). NVALUES = scalar Number of values to generate SEED = scalar Seed to generate the random numbers; default 0 continues an existing sequence or initializes the sequence automatically if no random numbers have been generated in this job PROBABILITIES = variates or tables Probabilities for the categories NUMBERS = factors Saves the random numbers COUNTS = tables Saves counts of the numbers generated in each category GRMNOMIAL generates pseudo-random numbers from a multinomial distribution. The probabilities for the categories are specified by the PROBABILITIES option, in either a variate or a table. The numbers can be saved, in a factor, using the NUMBERS parameter. The NVALUES option specifies the number of values to be generated. If this is not set, the length of the NUMBERS factor is used or, if that has not been defined, a single value is generated. The COUNTS parameter can save a table with counts of the numbers generated in each category. If COUNTS has not already been defined as a table with a suitable classifying factor, it is defined as follows. Firstly, if NUMBERS has been set, COUNTS is defined as a table with NUMBERS as the classifying factor. Otherwise, if PROBABILITIES has supplied a table rather than a variate, COUNTS is defined as a table classified by the same classifying factor as PROBABILITIES. Finally, the fall-back is to define COUNTS as a table with an unnamed classifying factor. The SEED option can be set to initialize the random-number generator. The default of zero continues an existing sequence, or initializes the sequence automatically if no random numbers have been generated in this job. Options: NVALUES, SEED. Parameters: PROBABILITIES, NUMBERS, COUNTS. The pseudo-random numbers are generated using the GRUNIFORM function. Any restrictions are ignored. Directive: CALCULATE. Procedures: GRANDOM, GRCSR, GREJECTIONSAMPLE, GRLABEL, GRMULTINORMAL, GRTHIN, GRTORSHIFT, SAMPLE, SVSAMPLE. Functions: GRBETA, GRBINOMIAL, GRCHISQUARE, GRF, GRGAMMA, GRHYPERGEOMETRIC, GRLOGNORMAL, GRNORMAL, GRPOISSON, GRSAMPLE, GRSELECT, GRT, GRUNIFORM. Commands for: Calculations and manipulation. CAPTION 'GRMNOMIAL example',\ !t('Generate 200 values from a multinomial distribution',\ 'with probabilities 0.4, 0.3, 0.2 and 0.1.'); STYLE=meta,plain VARIATE [VALUES=0.4,0.3,0.2,0.1] Probabilities FACTOR [LABELS=!t(%40,%30,%20,%10)] Numbers GRMNOMIAL [NVALUES=200; SEED=259452] Probabilities; NUMBERS=Numbers;\ PRINT Counts
{"url":"https://genstat.kb.vsni.co.uk/knowledge-base/grmnomia/","timestamp":"2024-11-14T11:40:17Z","content_type":"text/html","content_length":"42073","record_id":"<urn:uuid:2540353d-d909-48b1-b900-611c8432af81>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00378.warc.gz"}
A large goose lands in a lake and bobs up and down for a short time. A fisherman notices that the first wave crest created by the goose reac - DocumenTVA large goose lands in a lake and bobs up and down for a short time. A fisherman notices that the first wave crest created by the goose reac A large goose lands in a lake and bobs up and down for a short time. A fisherman notices that the first wave crest created by the goose reac A large goose lands in a lake and bobs up and down for a short time. A fisherman notices that the first wave crest created by the goose reaches the shore 8.3 s after the goose first contacts the water. The distance between two wave crests is 90 cm, and in 2.0 s he sees four wave crests hit the shore. How can the fisherman use these observations to determine how far from the shore the goose in progress 0 Physics 3 years 2021-08-11T14:01:47+00:00 2021-08-11T14:01:47+00:00 1 Answers 28 views 0
{"url":"https://documen.tv/question/a-large-goose-lands-in-a-lake-and-bobs-up-and-down-for-a-short-time-a-fisherman-notices-that-the-15318656-38/","timestamp":"2024-11-04T04:54:53Z","content_type":"text/html","content_length":"81105","record_id":"<urn:uuid:c297a636-03a4-4628-9f91-6265b16c0c43>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00669.warc.gz"}
MISSION: Southwest Research and Information Center is a multi-cultural organization working to promote the health of people and communities, protect natural resources, ensure citizen participation, and secure environmental and social justice now and for future generations Nukes Globally and in Eleven Localities On November 29, 2006, the Department of Energy (DOE) announced that Global Nuclear Energy Partnership (GNEP) siting grants totaling up to $16 million (with $4 million in reserve) would be awarded in 11 localities – six DOE sites and five non-DOE sites. The actual awards are being negotiated and the grants will be made in early 2007. The grantees will then have 90 days to complete a detailed study of their site. The information from the site studies will be used by DOE in its GNEP environmental impact statement (EIS). Even if the awards are granted, the future of the GNEP local sites, the EIS process, and congressional support for GNEP are highly uncertain. While the Bush administration remains strongly in favor of GNEP, there are not a lot of other public supporters. And some of the localities that are now considered part of GNEP say they had little or no information and have not been able to effectively participate in the process. Public opposition is building at many of the sites, as well as to the GNEP siting process. For the past 20 years, various communities have been targeted or have “volunteered” as the nation’s commercial spent fuel storage location. None of the sites have gone forward, even though one, the Private Fuel Storage Site in Utah, was licensed by the Nuclear Regulatory Commission (NRC) on February 21, 2006. But continuing local and state opposition has also killed that site. The GNEP site would not only store virtually all of the nation’s commercial spent fuel, perhaps for a 100 years or more. But it could also be a “Consolidated Fuel Treatment Center” or reprocessing plant to extract uranium and transuranic elements from the spent fuel. And the site would also likely be the site for the Advanced Burner Reactor (ABR), a new reactor that would use as fuel some of the elements extracted from the spent fuel during reprocessing. The reprocessing plant and ABR would use commercial-scale technologies that do not currently exist, and would certainly cost more than $100 billion. Whether future Congresses would appropriate such funding and whether the technologies would work makes those two facilities highly speculative. Thus, if a local site is chosen, it would most likely be a long-term spent fuel storage site, as the Yucca Mountain, Nevada repository site remains years behind schedule. In January, the new Congress may look at GNEP and will decide what level of funding it will receive for the rest of Fiscal Year 2007. In February, the Bush proposed budget for 2008 will be submitted to Congress, and will likely include substantial increased funding. For Fiscal Year 2006, Congress provided $79.2 million for the “Advanced Fuel Cycle Initiative” (GNEP), and funded a “Integrated spent fuel recycling” program not included in the Bush Budget request. Congress appropriated $50 million for the program, including $20 million for the local siting grants (with a maximum of $5 million per site). The $30 million not for the siting grants was for DOE “to develop a spent nuclear fuel recycling plan” and for the EIS process. In 2006, Congress did not pass a DOE Budget for Fiscal Year 2007; instead a “Continuing Resolution” will fund DOE, providing continuing funding at approximately the level in FY2006, less the $50 million for the spent fuel program. In passing its version of the FY2007 DOE Budget, the House was highly critical of some aspects of GNEP. The House Appropriations Committee originated the idea of the local siting studies and spent fuel program. On May 19, 2006, the Committee reported the Energy and Water Development Appropriations Bill, 2007. The Committee stated that it has serious reservations about the GNEP as proposed by the Administration. The overriding concern is simply that the Department of Energy has failed to provide sufficient detailed information to enable Congress to understand fully all aspects of this initiative, including the cost, schedule, technology development plan, and waste streams from GNEP.…the GNEP proposal differs in several significant aspects from what the conferees directed last year, and the GNEP proposal falls short in a number of critical areas. There is no mention of the requirements that these facilities be integrated and co-located at a single site, nor (as is detailed below) is there any mention of the need for interim storage as part of an integrated recycling complex. Interim Storage.—In the Committee’s view, any such integrated spent fuel recycling facility must be capable of accumulating sufficient volumes of spent fuel to provide efficient operation of the facility. A first test of any site’s willingness to receive into the interim storage spent fuel in dry casks that provide safe storage of the spent fuel for 50 to 100 years or longer. In this Committee’s view, if any site refuses to provide interim storage as needed to support the operation of an integrated recycling facility, at whatever scale, then that site should be eliminated from all further consideration under GNEP. Seven are local entities. All but one entity from the DOE sites are existing local economic development groups (at Hanford, Idaho, Oak Ridge, Paducah, and SRS). The Piketon Initiative for Nuclear Independence, LLC was created to apply for GNEP funding. One of the private sites also has a newly formed limited liability corporation to request GNEP siting funds (Eddy Lea Energy Alliance). Two major international corporations – General Electric (GE) and Energy Solutions – are designated for the other four siting grants. GE’s Morris, IL site is near the Dresden Nuclear Plants, and was constructed as a reprocessing plant in the early 1970s. The Morris Plant never operated, because reprocessing did not function economically or in an environmental safe way at another site at West Valley, NY during that time and because federal policy turned against reprocessing, primarily because of concerns about proliferation of nuclear materials. Energy Solutions is a rapidly expanding nuclear corporation, which owns and operates the Envirocare low-level waste disposal site in Utah, and says it’s the nation’s “largest transporter of radioactive material – more than 300 shipments each month.” Companies it acquired have had contracts at DOE sites, including Fernald, Hanford, Idaho, Los Alamos, Oak Ridge, Paducah, Portsmouth, and SRS. It also says that it is has existing reprocessing technology (the Thermal Oxide Reprocessing (THORP) in Sellafield, Britain). The company is very favorably inclined to GNEP and has a “What is GNEP” button on its home page. However, the three Energy Solutions sites are different – two (Atomic City, ID and Roswell (Triassic Park), NM) are private sites with no particular known characteristics favorable to GNEP, and they haven’t previously been identified for any nuclear projects. On the other hand, the Barnwell, SC site has the Allied General Nuclear Services (AGNS) facility, which was constructed beginning in 1970, as a reprocessing plant. The AGNS facility, like the GE-Morris site, was constructed but never used because the federal government stopped supporting reprocessing. Each of the six DOE sites have been in existence for more than 50 years. Three (Hanford, Idaho, SRS) carried out reprocessing for plutonium and uranium for nuclear weapons, and each site has substantial amounts of contamination from that reprocessing, cleanup of which will continue for decades. Three (Oak Ridge, Paducah, Piketon) are uranium enrichment facilities and have substantial amounts of waste and contamination that has not been cleaned up. The five private sites, include two (Barnwell, SC and Morris, IL) where reprocessing plants were constructed more than 30 years ago. One (Atomic City, ID) seems to have applied primarily because of its proximity to Idaho National Lab. The two New Mexico sites have no nuclear facilities or experience. Gandy-Marley has only one facility – the Triassic Park Waste Disposal Facility (about 40 miles east of Roswell) – a New Mexico-permitted hazardous waste facility (radioactive wastes are prohibited). It received its operating permit in 2002, but has never operated or received any waste. The Eddy Lea Energy Alliance has an option for about two years to purchase the location that will be used for the siting study. Because DOE has refused to release the grantee’s proposals, even in truncated form, the specific locations being proposed are not always clearly identified publicly. In response to a Freedom of Information Act request for the Paducah application, DOE denied the request based on the application being under the “Trade Secret” exemption. The DOE Funding Opportunity Announcement required: • the location to be at least 300 contiguous acres for one facility and at least 500 contiguous acres for both facilities. • the site must allow facilities to be above the 100-year flood plain, • an electric transmission line that can provide at least 13 kilovolts must be available within 10 miles of the site, • population density out to 20 miles cannot exceed 500 persons per square mile • if applicable, zoning must allow heavy industrial use • road access capable of 80,000-pound loads must be within five miles • no significant seismic events can have occurred at the site • reliable water supplies must be available. Non-DOE sites and DOE sites were to be evaluated separately. The three criteria for Non-DOE sites were: • relevance and extent of the Applicant’s capabilities and experience in conducting nuclear siting studies and experience of key personnel in performing those studies (40 points) • suitability and availability of the site data to ensure that the study could be completed within 90 days of the award (50 points) • applicant’s capabilities to identify stakeholder support or concerns (10 points) Criteria for the DOE site were: • Applicants capability to provide the regulatory and permitting requirements for the GNEP facility (80 points) • Applicant’s capability to identify stakeholder support or concerns (20 points) In its November 29 announcement, DOE did not specifically state that each site met all of the requirements. Since Triassic Park is 480-acres, it does not appear to meet the minimum size requirement for both facilities. If the awards are granted, each of the studies could be substantially different. There is much available information about the DOE sites regarding the kinds of environmental information needed for an EIS. In contrast, there is little existing information about the two New Mexico and Atomic City, ID locations. So the study should have to provide much new detail about the site and its characteristics. The Morris, IL and Barnwell, SC sites were analyzed 30 years ago, but some updating will be necessary. At some sites, there is likely to be considerable public controversy about the study and attempts may be made to try to include information on community opposition as part of the study or in some other form. DOE would then have to determine whether it will include all 11 sites in the EIS or whether it will limit the sites or add any other DOE sites. On January 4, 2007, DOE announced in the Federal Register that it would prepare a programmatic EIS (PEIS) for GNEP, which would include scoping meetings at the 11 sites and in Washington, DC between February 13 and March 19, 2007. The DOE announcement also added two additional DOE sites – Argonne National Laboratory near Chicago, and the Los Alamos National Lab in New Mexico – for an “advanced fuel cycle research facility.” Four other DOE sites – Hanford, INL, Oak Ridge, and SRS – would also be considered for the research facility. DOE expects to have a draft PEIS for public comment in 2007 and a final PEIS in 2008. After the final PEIS is issued, DOE would select one or more sites for the interim storage/reprocessing and ABR and a site for the research facility. For the Bush administration, GNEP is part of its support for nuclear power worldwide. The Yucca Mountain, Nevada repository site is also an essential element, as there is no disposal place for the more than 53,000 tons of spent fuel at the nation’s 103 nuclear reactors. The Nuclear Waste Policy Act of 1982 stated that a repository would begin accepting that spent fuel in 1998. The nuclear utilities have filed many lawsuits, pointing out that the government has not met that commitment and asking for billions of dollars in damages. As in 2006, there will likely be legislative efforts in 2007 to address Yucca Mountain, in which the administration wants new legislation to streamline many existing requirements. Some in Congress would like some limits on the present unlimited amounts of liability that the government faces for not accepting spent fuel, so having an interim storage site may appear to be an alternative to meet the legal timeframe more quickly and to have the federal government taken ownership and possession of the commercial spent fuel. Any such legislation will be controversial, and Congressional action on nuclear waste over the past 25 years has been infrequent and not very successful. For example, the “Screw Nevada” legislation in 1987 did not speed up operation of the repository, as it was supposed to do, but rather fueled opposition in Nevada, as the political nature of the siting decision by Congress was apparent. THE ELEVEN SITES PRIVATELY OWNED GRANTEE Atomic City, ID EnergySolutions, LLC Barnwell, SC EnergySolutions, LLC Roswell, NM EnergySolutions, LLC Hobbs, NM Eddy Lea Energy Alliance Morris, IL General Electric DOE-OWNED GRANTEE Hanford, WA Tri-City Industrial Development Council/Columbia Basin Consulting Group Idaho National Lab Regional Development Alliance, Inc. Oak Ridge, TN Community Reuse Organization of East Tennessee Paducah, KY Paducah Uranium Plant Asset Utilization, Inc. Portsmouth, OH Piketon Initiative for Nuclear Independence, LLC Savannah River Site Economic Development Partnership of Aiken and Edgefield Counties Regarding what to do about the current situation in which spent fuel remains in long-term storage at reactors and will remain there in some large quantities for at least several decades and, given the fact that Yucca Mountain is considered to be a flawed, unusable site, many citizen groups have supported “Principles for Safeguarding Nuclear Waste at Reactors.” Those Principles advocate various safety enhancements at power plants, both in the spent fuel pools, and with “hardened on-site storage” (HOSS). HOSS storage will be near the surface and would not be permanent storage. Such facilities would be at the reactor site or as close as possible. While the spent fuel would remain at the power plants, it would be better contained and much better able to withstand accidents or terrorist attacks. The nuclear waste conundrum continues, as does the national politics that has yet to come up with a successful policy. Geologic disposal at Yucca Mountain is, if anything, farther off in the future. Just like Congress picked Yucca Mountain in 1987 for political reasons, in 2007, Senate Majority Leader Harry Reid from Nevada is a strong opponent of Yucca Mountain, making its political viability even less strong. Interim storage sites, whether promoted by Congress, DOE, or private utilities have never succeeded. What to do about commercial spent fuel will again likely be considered in legislation. Whether GNEP, and its commercial interim storage component, directly becomes central to that debate remains to be seen. Certainly some of the 11 GNEP sites may be promoted for commercial spent fuel storage, though that idea will create a lot of opposition at many of those sites. – Don Hancock
{"url":"http://www.sric.org/voices/2006/v7n4/gnep_v7n4.php","timestamp":"2024-11-13T06:23:10Z","content_type":"application/xhtml+xml","content_length":"31319","record_id":"<urn:uuid:0c560814-5c23-4b39-a090-f19f7ef2d555>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00739.warc.gz"}
Place Value 4th Grade Alabama Course of Study Standards: 6 Using models and quantitative reasoning, explain that in a multi-digit whole number, a digit in any place represents ten times what it represents in the place to its right. Arizona Academic Standards: 4.NBT.A.1 Apply concepts of place value, multiplication, and division to understand that in a multi-digit whole number, a digit in one place represents ten times what it represents in the place to its right. Common Core State Standards: Math.4.NBT.1 or 4.NBT.A.1 Recognize that in a multi-digit whole number, a digit in one place represents ten times what it represents in the place to its right. For example, recognize that 700 ÷ 70 = 10 by applying concepts of place value and division. Georgia Standards of Excellence (GSE): 4.NR.1.2 Recognize and show that a digit in one place has a value ten times greater than what it represents in the place to its right and extend this understanding to determine the value of a digit when it is shifted to the left or right, based on the relationship between multiplication and division. North Carolina - Standard Course of Study: 4.NBT.1 Explain that in a multi-digit whole number, a digit in one place represents 10 times as much as it represents in the place to its right, up to 100,000. New York State Next Generation Learning Standards: 4.NBT.1 Recognize that in a multi-digit whole number, a digit in one place represents ten times what it represents in the place to its right. e.g., Recognize that 70 × 10 = 700 (and, therefore, 700 ÷ 10 = 70) by applying concepts of place value, multiplication, and division. Ohio's Learning Standards: 4.NBT.1 Recognize that in a multi-digit whole number, a digit in one place represents ten times what it represents in the place to its right by applying concepts of place value, multiplication, or division. Tennessee Academic Standards: 4.NBT.A.1 Recognize that in a multi-digit whole number (less than or equal to 1,000,000), a digit in one place represents 10 times as much as it represents in the place to its right. For example, recognize that 7 in 700 is 10 times bigger than the 7 in 70 because 700 ÷ 70 = 10 and 70 × 10 = 700. Wisconsin Academic Standards: 4.NBT.A.1 Recognize that in a multi-digit whole number, a digit in one place represents ten times what it represents in the place to its right. For example, recognize that 700 ÷ 70 = 10 by applying concepts of place value and division. Alabama Course of Study Standards: 7,8 Read and write multi-digit whole numbers using standard form, word form, and expanded form. Use place value understanding to compare two multi-digit numbers using >, =, and < symbols. Common Core State Standards: Math.4.NBT.2 or 4.NBT.A.2 Read and write multi-digit whole numbers using base-ten numerals, number names, and expanded form. Compare two multi-digit numbers based on meanings of the digits in each place, using >, =, and < symbols to record the results of comparisons. Georgia Standards of Excellence (GSE): 4.NR.1.1 Read and write multi-digit whole numbers to the hundred-thousands place using base-ten numerals and expanded form. North Carolina - Standard Course of Study: 4.NBT.2 Read and write multi-digit whole numbers up to and including 100,000 using numerals, number names, and expanded form. New York State Next Generation Learning Standards: 4.NBT.2 1. Read and write multi-digit whole numbers using base-ten numerals, number names, and expanded form. e.g., 50,327 = 50,000 + 300 + 20 + 7 2. Compare two multi-digit numbers based on meanings of the digits in each place, using >, =, and < symbols to record the results of comparisons. Ohio's Learning Standards: 4.NBT.2 Read and write multi-digit whole numbers using standard form, word form, and expanded form. Compare two multi-digit numbers based on meanings of the digits in each place, using >, =, and < symbols to record the results of comparisons. Grade 4 expectations in this domain are limited to whole numbers less than or equal to 1,000,000. Tennessee Academic Standards: 4.NBT.A.2 Read and write multi-digit whole numbers (less than or equal to 1,000,000) using standard form, word form, and expanded form (e.g. the expanded form of 4256 is written as 4 × 1000 + 2 × 100 + 5 × 10 + 6 × 1). Compare two multidigit numbers based on meanings of the digits in each place and use the symbols >, =, and < to show the relationship. Wisconsin Academic Standards: 4.NBT.A.2 Read and write multi-digit whole numbers using base-ten numerals, number names, and expanded form. Compare two multi-digit numbers based on meanings of the digits in each place and describe the result of the comparison using words and symbols ( >, =, and < ). Pennsylvania Core Standards: CC.2.1.4.B.1 Apply place-value concepts to show an understanding of multi-digit whole numbers. Pennsylvania Core Standards: M04.A-T.1.1.1 Demonstrate an understanding that in a multi-digit whole number (through 1,000,000), a digit in one place represents ten times what it represents in the place to its right. Pennsylvania Core Standards: M04.A-T.1.1.2 Read and write whole numbers in expanded, standard, and word form through 1,000,000. Florida - Benchmarks for Excellent Student Thinking: MA.4.NSO.1.1 Express how the value of a digit in a multi-digit whole number changes if the digit moves one place to the left or right. Florida - Benchmarks for Excellent Student Thinking: MA.4.NSO.1.2 Read and write multi-digit whole numbers from 0 to 1,000,000 using standard form, expanded form and word form. Georgia Standards of Excellence (GSE): 4.NR.1.2 Recognize and show that a digit in one place has a value ten times greater than what it represents in the place to its right and extend this understanding to determine the value of a digit when it is shifted to the left or right, based on the relationship between multiplication and division. Arkansas Academic Standards: 4.NPV.1 Recognize that a digit in a given place represents ten times what it represents in the place to its right.
{"url":"https://www.learningfarm.com/web/practicePassThrough.cfm?TopicID=468","timestamp":"2024-11-07T02:38:02Z","content_type":"application/xhtml+xml","content_length":"49812","record_id":"<urn:uuid:6610066e-b181-4db7-8e99-ebf03095a65d>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00621.warc.gz"}
Evaluation of uncertainties Evaluation of uncertainties The evaluation of uncertainties is a fundamental part of the measurement analysis in metrology. The analysis of dynamic measurements typically involves methods from signal processing, such as digital filtering, the discrete Fourier transform (DFT), or simple tasks like interpolation. For most of these tasks, methods are readily available, for instance, as part of scipy.signal. This module of PyDynamic provides the corresponding methods for the evaluation of uncertainties. The package consists of the following modules: Uncertainty evaluation for convolutions This module assists in uncertainty propagation for the convolution operation The convolution operation is a common operation in signal and data processing. Convolving signals is mathematically similar to a filter application. This module contains the following function: • convolve_unc(): Convolution with uncertainty propagation based on FIR-filter PyDynamic.uncertainty.propagate_convolution.convolve_unc(x1, U1, x2, U2, mode='full')[source] Discrete convolution of two signals with uncertainty propagation This function supports the convolution modes of numpy.convolve() and scipy.ndimage.convolve1d(). The option to provide the uncertainties as 1D-arrays of standard uncertainties is given for convenience only. It does not result in any performance benefits, as they are internally just converted into a diagonal covariance matrix. Moreover, the output will always be a full covariance matrix (and will almost always have off-diagonal entries in practical scenarios). ☆ x1 (np.ndarray, (N,)) – first input signal ☆ U1 (np.ndarray, (N,) or (N, N)) – ○ 1D-array: standard uncertainties associated with x1 (corresponding to uncorrelated entries of x1) ○ 2D-array: full 2D-covariance matrix associated with x1 ○ None: corresponds to a fully certain signal x1, results in more efficient calculation (compared to using np.zeros(…)) ☆ x2 (np.ndarray, (M,)) – second input signal ☆ U2 (np.ndarray, (M,) or (M, M)) – ○ 1D-array: standard uncertainties associated with x2 (corresponding to uncorrelated entries of x2) ○ 2D-array: full 2D-covariance matrix associated with x2 ○ None: corresponds to a fully certain signal x2, results in more efficient calculation (compared to using np.zeros(…)) ☆ mode (str, optional) – ○ full: len(y) == N+M-1 (default) ○ valid: len(y) == max(M, N) - min(M, N) + 1 ○ same: len(y) == max(M, N) (value+covariance are padded with zeros) ○ nearest: len(y) == N (value+covariance are padded with by stationary assumption) ○ reflect: len(y) == N ○ mirror: len(y) == N ☆ conv (np.ndarray) – convoluted output signal ☆ Uconv (np.ndarray) – full 2D-covariance matrix of y Uncertainty evaluation for the DFT Functions for the propagation of uncertainties in the application of the DFT The PyDynamic.uncertainty.propagate_DFT module implements functions for the propagation of uncertainties in the application of the DFT, inverse DFT, deconvolution and multiplication in the frequency domain, transformation from amplitude and phase to real and imaginary parts and vice versa. The corresponding scientific publications is S. Eichstädt und V. Wilkens GUM2DFT — a software tool for uncertainty evaluation of transient signals in the frequency domain. Measurement Science and Technology, 27(5), 055001, 2016. [DOI: This module contains the following functions: PyDynamic.uncertainty.propagate_DFT.AmpPhase2DFT(A: ndarray, P: ndarray, UAP: ndarray, keep_sparse: bool = False) Tuple[ndarray, ndarray][source] Transformation from magnitude and phase to real and imaginary parts Calculate the vector F=[real,imag] and propagate the covariance matrix UAP associated with [A, P] ☆ A (np.ndarray of shape (N,)) – vector of magnitude values ☆ P (np.ndarray of shape (N,)) – vector of phase values (in radians) ☆ UAP (np.ndarray of shape (2N,2N) or of shape (2N,)) – covariance matrix associated with (A,P) or vector of squared standard uncertainties [u^2(A),u^2(P)] ☆ keep_sparse (bool, optional) – whether to transform sparse matrix to numpy array or not ☆ F (np.ndarray of shape (2N,)) – vector of real and imaginary parts of DFT result ☆ UF (np.ndarray of shape (2N,2N)) – covariance matrix associated with F ValueError – If dimensions of A, P and UAP do not match. PyDynamic.uncertainty.propagate_DFT.AmpPhase2Time(A: ndarray, P: ndarray, UAP: ndarray) Tuple[ndarray, ndarray][source] Transformation from amplitude and phase to time domain GUM propagation of covariance matrix UAP associated with DFT amplitude A and phase P to the result of the inverse DFT. Uncertainty UAP is assumed to be given for amplitude and phase with blocks: UAP = [[u(A,A), u(A,P)],[u(P,A),u(P,P)]] ☆ A (np.ndarray of shape (N, )) – vector of amplitude values ☆ P (np.ndarray of shape (N, )) – vector of phase values (in rad) ☆ UAP (np.ndarray of shape (2N, 2N)) – covariance matrix associated with [A,P] ☆ x (np.ndarray of shape (N, )) – vector of time domain values ☆ Ux (np.ndarray of shape (2N, 2N)) – covariance matrix associated with x ValueError – If dimension of UAP is not even. PyDynamic.uncertainty.propagate_DFT.DFT2AmpPhase(F: ndarray, UF: ndarray, keep_sparse: bool = False, tol: float = 1.0, return_type: str = 'separate') Tuple[ndarray, ndarray] | Tuple[ndarray, ndarray, Transformation from real and imaginary parts to magnitude and phase Calculate the matrix U_AP = [[U1,U2],[U2^T,U3]] associated with magnitude and phase of the vector F=[real,imag] with associated covariance matrix U_F=[[URR,URI],[URI^T,UII]] ☆ F (np.ndarray of shape (2M,)) – vector of real and imaginary parts of a DFT result ☆ UF (np.ndarray of shape (2M,2M)) – covariance matrix associated with F ☆ keep_sparse (bool, optional) – if true then UAP will be sparse if UF is one-dimensional ☆ tol (float, optional) – lower bound for A/uF below which a warning will be issued concerning unreliable results ☆ return_type (str, optional) – If “separate” then magnitude and phase are returned as separate arrays A and P. Otherwise the list [A, P] is returned ☆ separate (If return_type ==) ☆ A (np.ndarray) – vector of magnitude values ☆ P (np.ndarray) – vector of phase values in radians, in the range [-pi, pi], but only present if return_type = 'separate' ☆ UAP (np.ndarray) – covariance matrix associated with (A,P) ☆ Otherwise ☆ AP (np.ndarray) – vector of magnitude and phase values ☆ UAP (np.ndarray) – covariance matrix associated with AP PyDynamic.uncertainty.propagate_DFT.DFT_deconv(H: ndarray, Y: ndarray, UH: ndarray, UY: ndarray) Tuple[ndarray, Tuple[ndarray, ndarray, ndarray] | ndarray][source] Deconvolution in the frequency domain GUM propagation of uncertainties for the deconvolution X = Y/H with Y and H being the Fourier transform of the measured signal and of the system’s impulse response, respectively. This function returns the covariance matrix as a tuple of blocks if too large for complete storage in memory. ☆ H (np.ndarray of shape (2M,)) – real and imaginary parts of frequency response values (M an even integer) ☆ Y (np.ndarray of shape (2M,)) – real and imaginary parts of DFT values ☆ UH (np.ndarray of shape (2M,2M) or (2M,)) – full covariance or diagonal of the covariance matrix associated with H ☆ UY (np.ndarray of shape (2M,2M) or (2M,)) – full covariance or diagonal of the covariance matrix associated with Y ☆ X (np.ndarray of shape (2M,)) – real and imaginary parts of DFT values of deconv result ☆ UX (np.ndarray of shape (2M,2M) or 3-tuple of np.ndarray of shape (M,M)) – Covariance matrix associated with real and imaginary part of X. If the matrix fully assembled does not fit the memory, we return the auto-covariance for the real parts URRX and the imaginary parts UIIX and the covariance between the real and imaginary parts URIX as separate np.ndarrays arranged as follows: (URRX, URIX, UIIX) ValueError – If dimensions of H, Y, UY and UH do not match accordingly. PyDynamic.uncertainty.propagate_DFT.DFT_multiply(Y: ndarray, F: ndarray, UY: ndarray, UF: ndarray | None = None) Tuple[ndarray, ndarray][source] Multiplication in the frequency domain GUM uncertainty propagation for multiplication in the frequency domain, where the second factor F may have an associated uncertainty. This method can be used, for instance, for the application of a low-pass filter in the frequency domain or the application of deconvolution as a multiplication with an inverse of known uncertainty. ☆ Y (np.ndarray of shape (2M,)) – real and imaginary parts of the first factor ☆ F (np.ndarray of shape (2M,)) – real and imaginary parts of the second factor ☆ UY (np.ndarray either of shape (2M,) or of shape (2M,2M)) – covariance matrix or squared uncertainty associated with Y ☆ UF (np.ndarray of shape (2M,2M), optional) – covariance matrix associated with F ☆ YF (np.ndarray of shape (2M,)) – the product of Y and F ☆ UYF (np.ndarray of shape (2M,2M)) – the uncertainty associated with YF ValueError – If dimensions of Y and F do not match. PyDynamic.uncertainty.propagate_DFT.DFT_transferfunction(X, Y, UX, UY)[source] Calculation of the transfer function H = Y/X in the frequency domain Calculate the transfer function with X being the Fourier transform of the system’s input signal and Y that of the output signal. ☆ X (np.ndarray) – real and imaginary parts of the system’s input signal ☆ Y (np.ndarray) – real and imaginary parts of the system’s output signal ☆ UX (np.ndarray) – covariance matrix associated with X ☆ UY (np.ndarray) – covariance matrix associated with Y ☆ H (np.ndarray) – real and imaginary parts of the system’s frequency response ☆ UH (np.ndarray) – covariance matrix associated with H ☆ This function only calls DFT_deconv. PyDynamic.uncertainty.propagate_DFT.GUM_DFT(x: ndarray, Ux: ndarray | float, N: int | None = None, window: ndarray | None = None, CxCos: ndarray | None = None, CxSin: ndarray | None = None, returnC: bool = False, mask: ndarray | None = None) Tuple[ndarray, Tuple[ndarray, ndarray, ndarray] | ndarray] | Tuple[ndarray, Tuple[ndarray, ndarray, ndarray] | ndarray, Dict[str, ndarray]][source] Calculation of the DFT with propagation of uncertainty Calculation of the DFT of the time domain signal x and propagation of the squared uncertainty Ux associated with the time domain sequence x to the real and imaginary parts of the DFT of x. ☆ x (np.ndarray of shape (M,)) – vector of time domain signal values ☆ Ux (np.ndarray of shape (M,) or of shape (M,M) or float) – covariance matrix associated with x, or vector of squared standard uncertainties, or noise variance as float ☆ N (int, optional) – length of time domain signal for DFT; N>=len(x) ☆ window (np.ndarray of shape (M,), optional) – vector of the time domain window values ☆ CxCos (np.ndarray, optional) – cosine part of sensitivity matrix ☆ CxSin (np.ndarray, optional) – sine part of sensitivity matrix ☆ returnC (bool, optional) – if True, return sensitivity matrix blocks, if False (default) do not return them ☆ mask (ndarray of dtype bool, optional) – calculate DFT values and uncertainties only at those frequencies where mask is True ☆ F (np.ndarray) – vector of complex valued DFT values or of its real and imaginary parts ☆ UF (np.ndarray) – covariance matrix associated with real and imaginary part of F ☆ CxCos and CxSin (Dict) – Keys are “CxCos”, “CxSin” and values the respective sensitivity matrix entries ValueError – If N < len(x) PyDynamic.uncertainty.propagate_DFT.GUM_DFTfreq(N, dt=1)[source] Return the Discrete Fourier Transform sample frequencies ☆ N (int) – window length ☆ dt (float) – sample spacing (inverse of sampling rate) f – Array of length n//2 + 1 containing the sample frequencies Return type: See also PyDynamic.uncertainty.propagate_DFT.GUM_iDFT(F: ndarray, UF: ndarray, Nx: int | None = None, Cc: ndarray | None = None, Cs: ndarray | None = None, returnC: bool = False) Tuple[ndarray, ndarray] | Tuple[ndarray, ndarray, Dict[str, ndarray]][source] Propagation of squared uncertainties UF associated with the DFT values F GUM propagation of the squared uncertainties UF associated with the DFT values F through the inverse DFT. The matrix UF is assumed to be for real and imaginary part with blocks: UF = [[u(R,R), u(R,I)],[u(I,R),u(I,I)]] and real and imaginary part obtained from calling rfft (DFT for real-valued signal) ☆ F (np.ndarray of shape (2M,)) – vector of real and imaginary parts of a DFT result ☆ UF (np.ndarray of shape (2M,2M)) – covariance matrix associated with real and imaginary parts of F ☆ Nx (int, optional) – length of iDFT result ☆ Cc (np.ndarray, optional) – cosine part of sensitivities (without scaling factor 1/N) ☆ Cs (np.ndarray, optional) – sine part of sensitivities (without scaling factor 1/N) ☆ returnC (bool, optional) – If True, return sensitivity matrix blocks (without scaling factor 1/N), if False do not return them ☆ x (np.ndarray) – vector of time domain signal values ☆ Ux (np.ndarray) – covariance matrix associated with x ☆ Cc and Cs (Dict) – Keys are “Cc”, “Cs” and values the respective sensitivity matrix entries PyDynamic.uncertainty.propagate_DFT.Time2AmpPhase(x: ndarray, Ux: ndarray | float) Tuple[ndarray, ndarray, ndarray][source] Transformation from time domain to amplitude and phase via DFT ☆ x (np.ndarray of shape (N,)) – time domain signal ☆ Ux (np.ndarray of shape (N,) or of shape (N,N) or float) – covariance matrix associated with x, or vector of squared standard uncertainties, or noise variance as float ☆ A (np.ndarray) – amplitude values ☆ P (np.ndarray) – phase values ☆ UAP (np.ndarray) – covariance matrix associated with [A,P] PyDynamic.uncertainty.propagate_DFT.Time2AmpPhase_multi(x, Ux, selector=None)[source] Transformation from time domain to amplitude and phase Perform transformation for a set of M signals of the same type. ☆ x (np.ndarray of shape (M, nx)) – M time domain signals of length nx ☆ Ux (np.ndarray of shape (M,)) – squared standard deviations representing noise variances of the signals x ☆ selector (np.ndarray of shape (L,), optional) – indices of amplitude and phase values that should be returned; default is 0:N-1 ☆ A (np.ndarray of shape (M,N)) – amplitude values ☆ P (np.ndarray of shape (M,N)) – phase values ☆ UAP (np.ndarray of shape (M, 3N)) – diagonals of the covariance matrices: [diag(UAA), diag(UAP), diag(UPP)] Uncertainty evaluation for the DWT This module assists in uncertainty propagation for the discrete wavelet transform The PyDynamic.uncertainty.propagate_DWT module implements methods for the propagation of uncertainties in the application of the discrete wavelet transform (DWT). This modules contains the following functions: Uncertainty evaluation for digital filtering This module contains functions for the propagation of uncertainties through the application of a digital filter using the GUM approach. This modules contains the following functions: • FIRuncFilter(): Uncertainty propagation for signal y and uncertain FIR filter theta • IIRuncFilter(): Uncertainty propagation for the signal x and the uncertain IIR filter (b,a) • IIR_get_initial_state(): Get a valid internal state for IIRuncFilter() that assumes a stationary signal before the first value. The Elster-Link paper for FIR filters assumes that the autocovariance is known and that noise is stationary! PyDynamic.uncertainty.propagate_filter.FIRuncFilter(y, sigma_noise, theta, Utheta=None, shift=0, blow=None, kind='corr', return_full_covariance=False)[source] Uncertainty propagation for signal y and uncertain FIR filter theta A preceding FIR low-pass filter with coefficients blow can be provided optionally. This method keeps the signature of PyDynamic.uncertainty.FIRuncFilter, but internally works differently and can return a full covariance matrix. Also, sigma_noise can be a full covariance matrix. ☆ y (np.ndarray) – filter input signal ☆ sigma_noise (float or np.ndarray) – ○ float: standard deviation of white noise in y ○ 1D-array: interpretation depends on kind ○ 2D-array: full covariance of input ☆ theta (np.ndarray) – FIR filter coefficients ☆ Utheta (np.ndarray, optional) – ○ 1D-array: coefficient-wise standard uncertainties of filter ○ 2D-array: covariance matrix associated with theta if the filter is fully certain, use Utheta = None (default) to make use of more efficient calculations. see also the comparison given in <examplesDigital ☆ shift (int, optional) – time delay of filter output signal (in samples) (defaults to 0) ☆ blow (np.ndarray, optional) – optional FIR low-pass filter ☆ kind (string, optional) – only meaningful in combination with sigma_noise a 1D numpy array ○ ”diag”: point-wise standard uncertainties of non-stationary white noise ○ ”corr”: single sided autocovariance of stationary colored noise (default) ☆ return_full_covariance (bool, optional) – whether or not to return a full covariance of the output, defaults to False ☆ x (np.ndarray) – FIR filter output signal ☆ Ux (np.ndarray) – ○ return_full_covariance == False : point-wise standard uncertainties associated with x (default) ○ return_full_covariance == True : covariance matrix containing uncertainties associated with x PyDynamic.uncertainty.propagate_filter.IIR_get_initial_state(b, a, Uab=None, x0=1.0, U0=1.0, Ux=None)[source] Calculate the internal state for the IIRuncFilter-function corresponding to stationary non-zero input signal. ☆ b (np.ndarray) – filter numerator coefficients ☆ a (np.ndarray) – filter denominator coefficients ☆ Uab (np.ndarray, optional (default: None)) – covariance matrix for (a[1:],b) ☆ x0 (float, optional (default: 1.0)) – stationary input value ☆ U0 (float, optional (default: 1.0)) – stationary input uncertainty ☆ Ux (np.ndarray, optional (default: None)) – single sided autocovariance of stationary (colored/correlated) noise (needed in the kind=”corr” case of IIRuncFilter()) internal_state – dictionary of state Return type: PyDynamic.uncertainty.propagate_filter.IIRuncFilter(x, Ux, b, a, Uab=None, state=None, kind='corr')[source] Uncertainty propagation for the signal x and the uncertain IIR filter (b,a) ☆ x (np.ndarray) – filter input signal ☆ Ux (float or np.ndarray) – float: standard deviation of white noise in x (requires kind=”diag”) 1D-array: interpretation depends on kind ☆ b (np.ndarray) – filter numerator coefficients ☆ a (np.ndarray) – filter denominator coefficients ☆ Uab (np.ndarray, optional (default: None)) – covariance matrix for (a[1:],b) ☆ state (dict, optional (default: None)) – An internal state (z, dz, P, cache) to start from - e.g. from a previous run of IIRuncFilter. ○ If not given, (z, dz, P) are calculated such that the signal was constant before the given range ○ If given, the input parameters (b, a, Uab) are ignored to avoid repetitive rebuild of the internal system description (instead, the cache is used). However a valid new state (i.e. with new b, a, Uab) can always be generated by using IIR_get_initial_state(). ☆ kind (string, optional (default: "corr")) – defines the interpretation of Ux, if Ux is a 1D-array “diag”: point-wise standard uncertainties of non-stationary white noise “corr”: single sided autocovariance of stationary (colored/correlated) noise (default) ☆ y (np.ndarray) – filter output signal ☆ Uy (np.ndarray) – uncertainty associated with y ☆ state (dict) – dictionary of updated internal state In case of a == [1.0] (FIR filter), the results of IIRuncFilter() and FIRuncFilter() might differ! This is because IIRuncFilter propagates uncertainty according to the (first-order Taylor series of the) GUM, whereas FIRuncFilter takes full variance information into account (which leads to an additional term). This is documented in the description of formula (33) of [Elster2008] . The difference can be visualized by running PyDynamic/examples/digital_filtering/validate_FIR_IIR_MC.py Monte Carlo methods for digital filtering “Monte Carlo methods for the propagation of uncertainties for digital filtering The propagation of uncertainties via the FIR and IIR formulae alone does not enable the derivation of credible intervals, because the underlying distribution remains unknown. The GUM-S2 Monte Carlo method provides a reference method for the calculation of uncertainties for such cases. This module contains the following functions: Uncertainty evaluation for interpolation This module assists in uncertainty propagation for 1-dimensional interpolation The PyDynamic.uncertainty.interpolate module implements methods for the propagation of uncertainties in the application of standard interpolation methods as provided by scipy.interpolate.interp1d. This module contains the following functions: • interp1d_unc(): Interpolate arbitrary time series considering the associated uncertainties • make_equidistant(): Interpolate a 1-D function equidistantly considering associated uncertainties PyDynamic.uncertainty.interpolate.interp1d_unc(x_new: ndarray, x: ndarray, y: ndarray, uy: ndarray, kind: str | None = 'linear', copy=True, bounds_error: bool | None = None, fill_value: float | Tuple [float, float] | str | None = nan, fill_unc: float | Tuple[float, float] | str | None = nan, assume_sorted: bool | None = True, returnC: bool | None = False) Tuple[ndarray, ndarray, ndarray] | Tuple[ndarray, ndarray, ndarray, ndarray][source] Interpolate a 1-D function considering the associated uncertainties x and y are arrays of values used to approximate some function \(f \colon y = f(x)\). Note that calling interp1d_unc() with NaNs present in input values results in undefined behaviour. An equal number of each of the original x and y values and associated uncertainties is required. ☆ x_new ((M,) array_like) – A 1-D array of real values to evaluate the interpolant at. x_new can be sorted in any order. ☆ x ((N,) array_like) – A 1-D array of real values. ☆ y ((N,) array_like) – A 1-D array of real values. The length of y must be equal to the length of x. ☆ uy ((N,) array_like) – A 1-D array of real values representing the standard uncertainties associated with y. ☆ kind (str, optional) – Specifies the kind of interpolation for y as a string (‘previous’, ‘next’, ‘nearest’, ‘linear’ or ‘cubic’). Default is ‘linear’. ☆ copy (bool, optional) – If True, the method makes internal copies of x and y. If False, references to x and y are used. The default is to copy. ☆ bounds_error (bool, optional) – If True, a ValueError is raised any time interpolation is attempted on a value outside of the range of x (where extrapolation is necessary). If False, out of bounds values are assigned fill_value. By default, an error is raised unless fill_value="extrapolate". ☆ fill_value (array-like or (array-like, array_like) or “extrapolate”, optional) – ○ if a ndarray (or float), this value will be used to fill in for requested points outside of the data range. If not provided, then the default is NaN. The array-like must broadcast properly to the dimensions of the non-interpolation axes. ○ If a two-element tuple, then the first element is used as a fill value for x_new < t[0] and the second element is used for x_new > t[-1]. Anything that is not a 2-element tuple (e.g., list or ndarray, regardless of shape) is taken to be a single array-like argument meant to be used for both bounds as below, above = fill_value, fill_value. ○ If “extrapolate”, then points outside the data range will be set to the first or last element of the values. ○ If cubic-interpolation, C2-continuity at the transition to the extrapolation-range is not guaranteed. This behavior might change in future implementations, see issue #210 for details. Both parameters fill_value and fill_unc should be provided to ensure desired behaviour in the extrapolation range. ☆ fill_unc (array-like or (array-like, array_like) or “extrapolate”, optional) – Usage and behaviour as described in fill_value but for the uncertainties. Both parameters fill_value and fill_unc should be provided to ensure desired behaviour in the extrapolation range. ☆ assume_sorted (bool, optional) – If False, values of x can be in any order and they are sorted first. If True, x has to be an array of monotonically increasing values. ☆ returnC (bool, optional) – If True, return sensitivity coefficients for later use. This is only available for interpolation kind ‘linear’ and for fill_unc=”extrapolate” at the moment. If False sensitivity coefficients are not returned and internal computation is slightly more efficient. ☆ x_new ((M,) array_like) – values at which the interpolant is evaluated ☆ y_new ((M,) array_like) – interpolated values ☆ uy_new ((M,) array_like) – interpolated associated standard uncertainties ☆ C ((M,N) array_like) – sensitivity matrix \(C\), which is used to compute the uncertainties \(U_{y_{new}} = C \cdot \operatorname{diag}(u_y^2) \cdot C^T\), only returned if returnC is False, which is the default behaviour. PyDynamic.uncertainty.interpolate.make_equidistant(x: ndarray, y: ndarray, uy: ndarray, dx: float | None = 0.05, kind: str | None = 'linear') Tuple[ndarray, ndarray, ndarray][source] Interpolate a 1-D function equidistantly considering associated uncertainties Interpolate function values equidistantly and propagate uncertainties accordingly. x and y are arrays of values used to approximate some function \(f \colon y = f(x)\). Note that calling interp1d_unc() with NaNs present in input values results in undefined behaviour. An equal number of each of the original x and y values and associated uncertainties is required. ☆ x ((N,) array_like) – A 1-D array of real values. ☆ y ((N,) array_like) – A 1-D array of real values. The length of y must be equal to the length of x. ☆ uy ((N,) array_like) – A 1-D array of real values representing the standard uncertainties associated with y. ☆ dx (float, optional) – desired interval length (defaults to 5e-2) ☆ kind (str, optional) – Specifies the kind of interpolation for y as a string (‘previous’, ‘next’, ‘nearest’, ‘linear’ or ‘cubic’). Default is ‘linear’. ☆ x_new ((M,) array_like) – values at which the interpolant is evaluated ☆ y_new ((M,) array_like) – interpolated values ☆ uy_new ((M,) array_like) – interpolated associated standard uncertainties Uncertainty evaluation for multiplication This module assists in uncertainty propagation for multiplication tasks The multiplication of signals is a common operation in signal and data processing. This module contains the following functions: PyDynamic.uncertainty.propagate_multiplication.hadamar_product(x1: ndarray, U1: ndarray | None, x2: ndarray, U2: ndarray | None, real_valued: bool = False)[source] Hadamar product of two uncorrelated signals with uncertainty propagation This is also known as elementwise multiplication. By default, both input signals are assumed to represent a complex signal, where the real and imaginary part are concatenated into a single vector: [Re(x), Im(x)] ☆ x1 (np.ndarray, (2N,) or (N,)) – first input signal ☆ U1 (np.ndarray, (2N, 2N), (N, N) or (2N,), (N,)) – ○ 1D-array: standard uncertainties associated with x1 (corresponding to uncorrelated entries of x1) ○ 2D-array: full 2D-covariance matrix associated with x1 ○ None: corresponds to a fully certain signal x1, results in more efficient calculation (compared to using np.zeros(…)) ☆ x2 (np.ndarray, (2N,) or (N,)) – second input signal, same length as x1 ☆ U2 (np.ndarray, (2N, 2N), (N, N) or (2N,), (N,)) – ○ 2D-array: full 2D-covariance matrix associated with x2 ○ 1D-array: standard uncertainties associated with x2 (corresponding to uncorrelated entries of x2) ○ None: corresponds to a fully certain signal x2, results in more efficient calculation (compared to using np.zeros(…)) ☆ real_valued (bool, optional) – By default, both input signals are assumed to represent a complex signal, where the real and imaginary part are concatenated into a single vector [Re(x), Im (x)]. Alternatively, if both represent purely real signals, performance gains can be achieved by enabling this switch. ☆ prod (np.ndarray, (2N,) or (N,)) – multiplied output signal ☆ Uprod (np.ndarray, (2N, 2N) or (N, N)) – full 2D-covariance matrix of output prod PyDynamic.uncertainty.propagate_multiplication.window_application(A: ndarray, W: ndarray, cov_A: ndarray | None = None, cov_W: ndarray | None = None)[source] Application of a real window to a complex signal ☆ A (np.ndarray, (2N,)) – signal the window will be applied to ☆ W (np.ndarray, (N,)) – window ☆ cov_A (np.ndarray, (2N,2N) or (2N,)) – ○ 2D-array: full 2D-covariance matrix associated with A ○ 1D-array: standard uncertainties associated with A (corresponding to uncorrelated entries of A) ○ None: corresponds to a fully certain signal A, results in more efficient calculation (compared to using np.zeros(…)) ☆ cov_W (np.ndarray, (N, N) or (N,)) – ○ 2D-array: full 2D-covariance matrix associated with W ○ 1D-array: standard uncertainties associated with W (corresponding to uncorrelated entries of W) ○ None: corresponds to a fully certain signal W, results in more efficient calculation (compared to using np.zeros(…)) ☆ y (np.ndarray) – signal with window applied ☆ Uy (np.ndarray) – full 2D-covariance matrix of windowed signal
{"url":"https://pydynamic.readthedocs.io/en/main/PyDynamic.uncertainty.html","timestamp":"2024-11-02T22:10:08Z","content_type":"text/html","content_length":"240520","record_id":"<urn:uuid:6c5530b6-6b9d-4d28-9430-d27ec022bc30>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00584.warc.gz"}
Dear _____, First of all, I want to thank you for your continuing interest in what Vernon and I are trying to do. As I think you know, we have asked a substantial number of people, scientists, mathematicians, philosophers, and serious thinkers of various kinds, to review our work. The most important of these reviewers, from my point of view, are those with sufficient training in mathematics and physics to be able to understand the basic concepts underlying Transcendental Physics and TDVP. These thinkers, one would hope, would be very open-minded and objective. However, based on the comments and statements of those who have responded to date, open-mindedness is quite rare. In, my opinion, those who have commented fall into of one the following categories: 1.) Those who assume we don’t know what we are talking about. Their approach is to look for obvious errors and be done with us. 2.) Those who have unconventional theories of their own. Their approach is to determine to what extent our ideas might support theirs. When they find at some point that they do not, they quickly lose 3.) Those with heavily vested interest in the so-called Standard Model. As with group #1, they search for any error at all to discredit our ideas because, if we are right, the paradigm they have invested their lives in must be wrong, and their life’s work is threatened. Our experience shows that they will do anything they can to defend the current paradigm. 4.) Those who are truly open minded, as all scientists should be, and are willing to follow the logic of new ideas wherever it leads, even when the results conflict with their life’s work and/or their own belief system. So far, our experience is that such minds are very few and far between. I have found that discussions with people in the first three categories are largely a waste of time. The first two have no interest in anything outside their own ideas, or ideas with which they are familiar, and people in the third group have no interest in anything outside the box of the current materialistic paradigm. I think you can probably identify some individuals in the discussions of the past couple of years who fall in to these categories. Two or three, for example, who sought to find flaws in the math, started by criticizing references to, and use of, some simple, basic mathematical concepts, but failed to understand the significance of these concepts in the new paradigm. Next, they offered counter examples to certain mathematical proofs and conclusions. When I pointed out flaws in their reasoning, they at first grudgingly admitted a mistake or two, and then got upset. One even told me that the discussion was not about criticizing his ideas. He even posted that he had “refuted” certain of my arguments, when it was easy to show that he had not. I want to address your comments, but first, please permit me to recount a little more history to put the current discussions into proper perspective: After studying the works of Planck, Einstein, Minkowski, Lorentz, Schrödinger, Bohr, and Heisenberg, while I was an undergrad physics student, I was convinced as early as 1956, that a theory of everything, even if it was defined only as a theory that would unify the known forces of nature, was not possible without including the actions of consciousness in the equations. I wrote about this around 1957 - 1959 or so. I also recognized that new math was needed, and that I needed to know more about number theory. I earned a degree in mathematics in 1962, and started in a graduate program in theoretical physics. I also found the basis for the new math I needed in George Spencer Brown’s ‘Laws of Form’ in 1962; coincidentally about the same time I studied John Bell’s inequality theorem and learned of Alain Aspect’s experimental resolution of the EPR paradox. I developed the Calculus of Distinctions to deal with the interaction of consciousness and physical reality; and I first published some of the results of including consciousness in the equations in 1989 in “Infinite Continuity”. Prior to publication, I sent a copy of the manuscript to Stephen Hawking for review. He had a problem with the concept of three-dimensional time and rejected, out of hand the idea that consciousness had anything to do with reality at the quantum level. I expanded some of the ideas, and published them again in ‘Transcendental Physics’ in 1996. My ideas were accepted and heralded as the new paradigm uniting relativity and quantum physics by several, including experimental physicist Dr. Henry Swift and astrophysicist Dr. Philip Anderson. It was discussed for months on the Karl Jaspers Forum and other internet forums and in the journal “Science within Consciousness” in the period 1996 – 1998. One participant suggested that I should be careful about sharing my ideas so freely on the internet, because “the ideas clearly represent a new paradigm, and unscrupulous individuals will steal them and call them their own.” Another participant quipped: “If you have something truly new, no one will steal it. You’ll have to cram it down their throats!” Unfortunately, the second comment proved to be the more accurate. Most people it seems, even scientists, believe that if they don’t understand something immediately, it must be wrong. As Max Planck said, “Science advances one funeral at a time.” Now, on to your comments: I am thankful for your comments because they give me an opportunity to more fully explain the concepts underlying the mathematics of TDVP. I will copy from your email and then respond: You wrote: “Here´s some new feedback, specifically about theoretical physics (or at least its required methods) within your theories, but not about the *whole* TDVP theory (except physics, possibly extended to broader, non-materialist realms), and especially about QM (again) in relation to your (or Ed's) 9 dim. Spin model: When I first mentioned to you in some mails years ago theories with hidden variables in QM (David Bohm), and now when you have a 9D spin model with apparently more detailed variables, I point to the necessity of QM again, or at least consistency with experiments.” My response: You refer to ‘hidden variables’. This term, as used in QM, refers to attempts by some physicists, notably David Bohm, Eugene Wigner, and a few others, to explain the non-locality quantum entanglement of Bell’s inequality and the EPR paradox in terms of underlying, inaccessible variables which, if there, would validate Einstein’s opinion that QM as developed by Bohr and Heisenberg was incomplete, and eliminate, theoretically, the probabilistic uncertainty formalized by Heisenberg’s Uncertainty Principle. The additional variables of TRUE units in TDVP are in no way hidden variables of this sort. They are hidden only in the sense that they are not directly measurable. The magnitudes of TRUE units, including mass, energy and gimmel (the third form) for specific sub-atomic entities are all indirectly determinable, like many things in science. The amount of gimmel in a given particle does not affect the quantum uncertainty between the measurements of location and angular momentum any more than the amount of mass or energy in the particle does. Heisenberg’s relationship of probabilistic uncertainty is not affected by Gimmel, an indirectly measurable variable combining with units of mass and energy to make up the TRUE quantum units of all stable particles, and thus they are not ‘hidden’ variables in the sense proposed by Bohm, et al. You wrote: “I do not see how the use of integers only will improve consistency with the known QM framework and the experiments; there needs to be some randomization as well i -strongly-suspect. The reason is, like I posted, because of the (statistical nature of the) measurements. My response: Let me explain the importance of using integers in the analysis of the combination of elementary particles to form stable compound particles like protons and neutrons: Planck’s discovery that elementary particles are always integer multiples of a basic unit means that we are dealing with discrete, as opposed to continuous values in the statistical analysis of collider data. A discrete random variable has a very different probability distribution than a continuous random variable, and functions representing compound entities formed by the combination of integral numbers of basic units can only have integer values. This means we are dealing with equations with integer variables, known to mathematicians as Diophantine equations. I’ll elaborate on this more as I respond to your further comments. You wrote: In some way with some quantum logic Ed or your other math assistant might be able to add this to this spin-model, in order to generate the most common QM experiments, but it won't be easy. My response: This has already been done. And, while it wasn’t as difficult as you might imagine, it may be hard to grasp at first. This is actually what Dimensional Extrapolation with unitary extensions and integer solutions to the Conveyance Equations in TRUE units are all about. The 9-D spin model is an outcome of the mathematics, not the other way around. You wrote: “For the rest when describing physics at the Planck scale, just a 9D spin model is not enough of course, because it's not only *particle* spin, there also is QM involved with (mass-less) *photons*, and in fact with a whole of other particles (some with rapid decay); called the particle 'zoo'. In fact the existence of most of these particles seems to come out of the quantum field theories inherent in the Standard Model, but that's another -and slightly broader- subject.” My response: Actually, the mathematics of TDVP accurately encompasses all known QM processes. The derivation of TRUE units is fully consistent with, and actually dependent upon the statistical nature of the quantum measurements of collider data. TDVP is also consistent with the existence of mass-less photons and the entire ‘particle zoo’. All of the particles detected in the debris from high-energy collisions decay under ‘normal’ conditions into photons, electrons and up- and down-quarks, the four sub-atomic entities that make up the physical structures of our everyday world. The other particles are teased into existence by high-energy collisions here on Earth, or are naturally produced in extreme conditions of heat and pressure like those in stellar processes. This becomes clear in the 9-D integral model. TRUE units describe the total mass/energy/consciousness composition of sub-atomic particles under normal conditions. Only the particle combinations with the TRUE unit values that satisfy integer solutions of the Conveyance Equation for n = m = 3, dictated by Fermat’s Last Theorem achieve symmetric stability. You wrote: “Now I understand you or Ed have to start somewhere within the TDVP framework, i.e. with the 9D particle spin model, but then this would only be a (minor) start, and certainly not a description or foundation of some 'new physics' or new paradigm in physics at large.” My response: As you might gather from my explanations above, I strongly disagree with this statement. While our presentations, as you say, must ‘start somewhere’, as a new paradigm, TDVP does not start with randomly chosen concepts. As a new paradigm must, it starts with proven fundamental mathematical and physical basics and expands them with new mathematics, new definitions and a new, more comprehensive theoretical framework, just as relativity and quantum mechanics did.in the early part of the last century. Since 1935, there have been no paradigm-shifting new physics discoveries, only experimental and technological evidence of fields, particles and concepts that fit within the current materialistic paradigm. TDVP, on the other hand, with the inclusion of the action of consciousness, and detailed mathematical and dimensionometric applications of the Calculus of Distinctions, Dimensional Extrapolation and the Conveyance Equations, provides a new, expanded paradigm that not only encompasses and integrates known physics, chemistry, the life sciences, and verified paranormal phenomena, it provides answers and explanations for quantum and macro-scale phenomena not explained, or even explainable in the current paradigm. You wrote: Also i would like to come back on the subject of peer review (about stuff like this 9D spin model) in the area of -theoretical- *physics*. … I would be more interested with which modern physicists you and-or Ed are talking in detail about the mathematical theories within your TDVP context. My response: We certainly agree about the importance of colleague review, and we have been eagerly seeking it for years. I have approached people like Stephen Hawking, Menos Kafatos, Henry Stapp, and Roger Penrose, and a number of mathematicians to get these ideas peer reviewed, and Vernon has approached many others. But finding qualified reviewers who are willing to spend their precious time reviewing something outside the box is easier said than done. See my comments above regarding the four categories of reviewers. You continue: This also because imho inventing workable validated theories in mathematical physics usually should not be much determined only by 'creativity' (although this may apply for generating hypotheses) but by having an understanding of experimental physics as w ell. Without the experiments, imho you are most of the time -in fact almost always-only creating (highly) speculative 'theories' (in fact more hypotheses than theories); a well known fact in the philosophy of science btw, and mentioning things as new paradigms or being decades ahead of the others do not change such facts. NB it is not intended as (severe) criticism, but more as a suggestion how to continue the research and if also focus on the interconnections between your 'new physics' and the well-established older theories. My response: I certainly understand and appreciate this, as I’m sure Vernon does, but you apparently didn’t know that I’ve relied heavily on experimental physics in developing the mathematics. You continued: At least in your latest response on Ning you now seem to have acknowledged this, but then you also should admit that there is not much 'linkage' (yet?) between all the new ideas (including these TRUE units) with conventional 'fundamental' physics (apart from the periodic table, but that's not enough. Like i wrote earlier, in theoretical physics the quantum mechanics of the electron bands around the core (and the Pauli Exclusion Principle) are determining the periodic index; and also their reactivity with other elements. But that's another subject. While it looks like an 'Eureka' experience finding such a new way of setting up the periodic table, together with some possible new findings, imho again more validation and verification is to be done to make it a workable theory instead of just a (wild) hypothesis.” My response: Again, you seem to be unaware of much of what we’ve published. As my previous discussion shows, there are multiple linkages between TDVP and fundamental physics, quantum mechanics and relativity. In addition to explaining why quarks combine in triads, the unique value of the Cabibbo angle, explaining the Periodic Table consistent with electron shell theory and Pauli’s exclusion principle, -but in much more detail-, explaining why fermions have an intrinsic spin number of ½, why photons, electrons, protons, and neutrons have the physical characteristics they do, and the quantization of angular momentum, as well as explaining non-locality and quantum entanglement, we are finding more links and explanations almost daily. If this doesn’t make TDVP “a workable theory instead of just a (wild) hypothesis”, I’d like to know what would. You wrote: About the methods used in general in your physics theories as part of your Toe: maybe this '' C.O.D.' theory by Ed indeed is innovative new method which can be used in mathematical physics, to be recognized as such it should be studied more in detail by others i suppose, and only then later it might be acknowledged as such.” My response: Unless you’ve read the published articles associated with “Reality Begins with Consciousness” and/or the posts on my Transcendental Physics blog, located at www.ERCloseTPhysics.com, you wouldn’t know that over the years, I have discussed the mathematical basis of Transcendental Physics and TDVP with a substantial number of professional mathematicians and mathematical physicists; I have approached at least fifty or sixty. I have documentation of many of these discussions in the form of letters and emails. Of these, only a handful, I believe 5 or 6 actually took the time to look at my work in any detail. The problem, as with getting reviewers for TDVP, is few are willing to invest much time to review anything outside the mainstream paradigm, especially if it involves new math. One professional mathematician, Dr. Vladimir Brandin, has endorsed CoD and used it in his studies of intelligence. We published one paper together: Brandin V, Close ER: The calculus of dimensional distinctions: Elements of mathematical theory of intellect. Moscow: 2003. Details of CoD and applications have been published in journal articles. We can supply copies if you want to see them. I’d like to elaborate a bit more on the mathematical and dimensionometric bases of Transcendental Physics and TDVP for you by pointing out the difference between classical Aristotelian and Platonic reasoning as it pertains to mathematics and scientific research: Aristotelians see mathematics simply as a man-made tool, while Platonists see mathematics as symbolic reflections of an underlying reality. As such, the imperfect tools of human formulation can always be improved to more and more closely mimic and reveal the perfectly logical patterns of reality. I see the value of the Aristotelian point of view in technological and engineering applications, but I also subscribe to the Platonic point of view as valid in constructing a paradigm to illustrate the nature of reality. To understand TDVP and the use of the CoD, I think it will be helpful for you to know that there are three basic concepts behind the mathematics of TDVP: 1.) There is a direct correlation between the structures of number theory and the structures of the universe. This is why even the most abstract mathematical theorem finds application in the real world sooner or later. 2.) The experimental resolution (Aspect, et al) of the EPR paradox implies consciousness involvement and non-locality. These features are explainable within the framework of a universe with more dimensions than the four of the space-time of Minkowski and Einstein (3S – 1T). 3.) Max Planck discovered that we live in a quantized universe, but mathematical physics has not been properly adjusted to accommodate this fact. You may agree or disagree with basic concept #1. If you are primarily Aristotelian in your thinking, as many scientists are, you may see the correlation between the types of numbers and the measures of dimensions, as shown by Dimensional Extrapolation, as a remarkable coincidence. If you accept basic concept #1, you will agree that the correlations we’ve discovered are not coincidental. Basic concept #2 by itself does not imply any specific number of discrete dimensions, but when combined with #1, a maximum number of nine finite orthogonal dimensions are derived. Concerning basic concept #3, to adjust mathematical physics to a quantized universe, we must first recognize that the calculus of Newton and Leibniz does not apply at the quantum level because, for applications of differential and integral calculus to yield valid results, the domains of the variables involved must be continuous. In a quantized universe, distinctions of content, like mass, energy and consciousness, exist only in finite, discrete integer amounts, and, as we have shown in TDVP, the distinctions of extent, like space and time, are also limited to finite, discrete volumes due to the relativistic light-speed limitation on velocity. So the dimensional domains of a quantized universe are not continuous. No variable in a quantized universe can approach zero infinitesimally closely, as it must be able to do for Newtonian calculus to yield valid results at the quantum scale. In TDVP, the infinitesimals of Newtonian calculus are replaced by the minimal distinctions of TRUE units, and the Calculus of Distinctions is the mathematical system I’ve developed to extend calculus to the sub-quark level. To be clear, the CoD does not replace Newtonian calculus at the macro scale; it extends calculation to the quark scale, analogous to the way relativity does not replace Newtonian physics at normal scales of measurement, and only comes to play in the vicinity of extremely massive objects, or when relative velocities are near light-speed. In summary, contrary to the impressions you have portrayed in your email of Feb. 3, 2015, TDVP is, in fact a comprehensive new paradigm with new mathematics, allowing consciousness to be included in the equations of mathematical physics for the first time. This paradigm has explained a number of empirical observations not explained by the current paradigm. The new mathematics, rigorously defined in several published books and articles, reveals an existential nine-dimensional finite domain embedded in an infinite substrate which contains the blueprints of all of the stable forms that support life and consciousness in the universe. I have tried repeatedly since 1989 to obtain competent peer review, and Vernon and I have sought colleague review in every venue available to us over the past six years as we’ve continued to develop the new paradigm. If you have any questions or concerns, or require more detailed explanations, please let me know. Edward R. Close, PhD, PE, DISPE 1 comment: 1. The relevance of the class 10th (SSLC) mock question paper for Tamil Nadu SSLC students in preparing for the final test cannot be overstated. As a result, the Tamil Nadu board publishes the TN SSLC model question paper in PDF format online. TN 10th Question Paper These PDF files will assist students in comprehending exam patterns and mark distribution for each chapter. Students' confidence and time management abilities will improve when they complete the class 10th example question papers.
{"url":"https://www.erclosetphysics.com/2015/02/response-to-comments-by-reviewer.html","timestamp":"2024-11-11T10:27:25Z","content_type":"text/html","content_length":"243665","record_id":"<urn:uuid:39f70292-9125-448a-b389-84001ee74d61>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00788.warc.gz"}
Creating and Interpreting Graphs Learning Objectives • Explain how to construct a simple graph that shows the relationship between two variables It’s important to know the terminology of graphs in order to understand and manipulate them. Let’s begin with a visual representation of the terms (shown in Figure 1), and then we can discuss each one in greater detail. Throughout this course we will refer to the horizontal line at the base of the graph as the x-axis. We will refer to the vertical line on the left hand side of the graph as the y-axis. This is the standard convention for graphs. In economics, we commonly use graphs with price (p) represented on the y-axis, and quantity (q) represented on the x-axis. An intercept is where a line on a graph crosses (“intercepts”) the x-axis or the y-axis. Mathematically, the x-intercept is the value of x when y = 0. Similarly, the y-intercept is the value of y when x = 0. You can see the x-intercepts and y-intercepts on the graph above. The point where two lines on a graph cross is called an intersection point. The other important term to know is slope. The slope tells us how steep a line on a graph is as we move from one point on the line to another point on the line. Technically, slope is the change in the vertical axis divided by the change in the horizontal axis. The formula for calculating the slope is often referred to as the “rise over the run”—again, the change in the distance on the y-axis (rise) divided by the change in the x-axis (run). Now that you know the “parts” of a graph, let’s turn to the equation for a line: In any equation for a line, m is the slope and b is the y-intercept. Let’s use the same equation we used earlier, in the section on solving algebraic equations, y = 9 + 3x, which can also be written as: y = 3x + 9 In this equation for a line, the b term is 9 and the m term is 3. The table below shows the values of x and y for this equation. To construct the table, just plug in a series of different values for x, and then calculate the resulting values for y. for the x y Next we can place each of these points on a graph. We can start with 0 on the x-axis and plot a point at 9 on the y-axis. We can do the same with the other pairs of values and draw a line through all the points, as on the graph in Figure 2, below. This example illustrates how the b and m terms in an equation for a straight line determine the position of the line on a graph. As noted above, the b term is the y-intercept. The reason is that if x = 0, the b term will reveal where the line intercepts, or crosses, the y-axis. In this example, the line hits the vertical axis at 9. The m term in the equation for the line is the slope. Remember that slope is defined as rise over run; the slope of a line from one point to another is the change in the vertical axis divided by the change in the horizontal axis. In this example, each time the x term increases by 1 (the run), the y term rises by 3. Thus, the slope of this line is therefore 3/1 = 3. Specifying a y-intercept and a slope—that is, specifying b and m in the equation for a line—will identify a specific line. Although it is rare for real-world data points to arrange themselves as a perfectly straight line, it often turns out that a straight line can offer a reasonable approximation of actual data. Watch It Watch this video to take a closer look at graphs and how variables can be represented in graph form. NOTE: Around the two-minute mark, the narrator inadvertently says “indirect,” rather than “inverse.” This is corrected later in the video. the point on a graph where a line crosses the vertical axis or horizontal axis the change in the vertical axis divided by the change in the horizontal axis a quantity that can assume a range of values the horizontal line on a graph, commonly represents quantity (q) on graphs in economics the vertical line on a graph, commonly represents price (p) on graphs in economics
{"url":"https://courses.lumenlearning.com/wm-microeconomics/chapter/creating-and-interpreting-graphs/","timestamp":"2024-11-04T11:06:29Z","content_type":"text/html","content_length":"55798","record_id":"<urn:uuid:2306fb8f-962f-4c7e-8adf-264feb8aa5ce>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00301.warc.gz"}
The Effects of Racket Inertia Tensor on Elbow Loadings and Racket Behavior for Central and Eccentric Impacts #sportsscience #sportsmedicine #exercisescience Steven M. Nesbit, Michael Elzinga, Catherine Herchenroder, Monika Serrano. (2006) The Effects of Racket Inertia Tensor on Elbow Loadings and Racket Behavior for Central and Eccentric Impacts. Journal of Sports Science and Medicine (05), 304 - 317.
{"url":"https://jssm.org/jssm-05-304.xml%3EFulltext","timestamp":"2024-11-02T18:01:22Z","content_type":"text/html","content_length":"123743","record_id":"<urn:uuid:30f05605-4536-400b-aa5c-c0d6189bd374>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00411.warc.gz"}
Dask is a specification to encode a graph – specifically, a directed acyclic graph of tasks with data dependencies – using ordinary Python data structures, namely dicts, tuples, functions, and arbitrary Python values. A Dask graph is a dictionary mapping keys to computations: {'x': 1, 'y': 2, 'z': (add, 'x', 'y'), 'w': (sum, ['x', 'y', 'z']), 'v': [(sum, ['w', 'z']), 2]} A key is a str, bytes, int, float, or tuple thereof: A task is a tuple with a callable first element. Tasks represent atomic units of work meant to be run by a single worker. Example: We represent a task as a tuple such that the first element is a callable function (like add), and the succeeding elements are arguments for that function. An argument may be any valid computation. A computation may be one of the following: 1. Any key present in the Dask graph like 'x' 2. Any other value like 1, to be interpreted literally 3. A task like (inc, 'x') (see below) 4. A list of computations, like [1, 'x', (inc, 'x')] So all of the following are valid computations: (add, 1, 2) (add, 'x', 2) (add, (inc, 'x'), 2) (sum, [1, 2]) (sum, ['x', (inc, 'x')]) (np.dot, np.array([...]), np.array([...])) [(sum, ['x', 'y']), 'z'] To encode keyword arguments, we recommend the use of functools.partial or toolz.curry. What functions should expect¶ In cases like (add, 'x', 'y'), functions like add receive concrete values instead of keys. A Dask scheduler replaces keys (like 'x' and 'y') with their computed values (like 1, and 2) before calling the add function. Entry Point - The get function¶ The get function serves as entry point to computation for all schedulers. This function gets the value associated to the given key. That key may refer to stored data, as is the case with 'x', or to a task, as is the case with 'z'. In the latter case, get should perform all necessary computation to retrieve the computed value. >>> from dask.threaded import get >>> from operator import add >>> dsk = {'x': 1, ... 'y': 2, ... 'z': (add, 'x', 'y'), ... 'w': (sum, ['x', 'y', 'z'])} >>> get(dsk, 'x') >>> get(dsk, 'z') >>> get(dsk, 'w') Additionally, if given a list, get should simultaneously acquire values for multiple keys: >>> get(dsk, ['x', 'y', 'z']) [1, 2, 3] Because we accept lists of keys as keys, we support nested lists: >>> get(dsk, [['x', 'y'], ['z', 'w']]) [[1, 2], [3, 6]] Internally get can be arbitrarily complex, calling out to distributed computing, using caches, and so on. Why use tuples¶ With (add, 'x', 'y'), we wish to encode the result of calling add on the values corresponding to the keys 'x' and 'y'. We intend the following meaning: add('x', 'y') # after x and y have been replaced But this will err because Python executes the function immediately before we know values for 'x' and 'y'. We delay the execution by moving the opening parenthesis one term to the left, creating a tuple: Before: add( 'x', 'y') After: (add, 'x', 'y') This lets us store the desired computation as data that we can analyze using other Python code, rather than cause immediate execution. LISP users will identify this as an s-expression, or as a rudimentary form of quoting.
{"url":"https://dask.pydata.org/en/latest/spec.html","timestamp":"2024-11-07T10:22:09Z","content_type":"text/html","content_length":"43159","record_id":"<urn:uuid:ab56546a-a5c7-42e9-8263-fbb7db0438d9>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00887.warc.gz"}
Eleventh of the Eleventh [Transum: Scroll down the page to see the answer] Q. What is 324 x 11 ? A. Write down the first digit ... 3 .......Add the first and second digits ... 3 + 2 = 5 .......Add the second and third digits .. 2 + 4 = 6 .......Write down the last digit ........... 4 The answer is 3564. Try it yourself ... multiply 543 by 11 Do you know why it works? Does it work for all three digit numbers? Divide each of these numbers by eleven in your head. Convert each of your answers to a letter and make a word (A=1, B=2 etc). Also add up all of your answers and multiply the sum by eleven in your head. Notes for the teacher: 1. You can drag the numbers in the animation above and they will stop moving. Your pupils may prefer a stationary Starter rather than trying to keep track of the moving numbers. 2. It might be necessary to do the Eleven In Your Head activity first (the link is can also be found further down this page) before pupils are able to see the patterns and appreciate the place value 3. Doing this activity once with a class helps students develop strategies. It is only when they do this activity a second time that they will have the opportunity to practise those strategies. That is when the learning is consolidated. Click the button below to regenerate another version of this starter with a different word. Your access to the majority of the Transum resources continues to be free but you can help support the continued growth of the website by doing your Amazon shopping using the links on this page. Below is an Amazon link. As an Amazon Associate I earn a small amount from qualifying purchases which helps pay for the upkeep of this website. Educational Technology on Amazon Teacher, do your students have access to computers such as tablets, iPads or Laptops? This page was really designed for projection on a whiteboard but if you really want the students to have access to it here is a concise URL for a version of this page without the comments: However it would be better to assign one of the student interactive activities below. Here is the URL which will take them to a related student activity. Curriculum Reference
{"url":"https://transum.org/Software/sw/Starter_of_the_day/Starter_November11.asp","timestamp":"2024-11-14T23:33:20Z","content_type":"text/html","content_length":"29911","record_id":"<urn:uuid:4b80d970-6635-4708-beac-99144d5abce7>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00835.warc.gz"}
Continuous Uniform Distribution Probability Density Function (PDF) Related Calculators Below you will find complete descriptions and links to 9 different analytics calculators that are related to the continuous uniform distribution probability density function (pdf) calculator. The related calculators have been alphabetized and organized into categories in order to make things a bit easier to find.
{"url":"https://analyticscalculators.com/related.aspx?id=100","timestamp":"2024-11-15T04:34:18Z","content_type":"text/html","content_length":"31270","record_id":"<urn:uuid:463f5599-5833-47d0-9eb0-7e89a8f9ceac>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00700.warc.gz"}
Tutorial Understanding the Complexity of B Animation: Difference between revisions m (grammar and punctuation) (→Complexity of Animation: fixed grammatical errors, changed word order, changed "possible enablings of the operation" to "possible ways to enable the operation") Newer edit → ← Older edit Line 4: Line 4: We assume that you have grasped the way that ProB setups up the initial states of a B machine as We assume that you have grasped the way that ProB setups up the initial states of a B machine as outlined in [[Tutorial Setup Phases]]. outlined in [[Tutorial Setup Phases]]. In this lesson, we examine the complexity of animation of B models in general, how ProB solves this In this lesson, we examine the complexity of animation of B models in general, how ProB solves problem and what the ramification for users are. this problemand what the ramification for users are. Line 42: Line 42: We can see that 10 can be expressed as the sum 5+5 or 3+7. At least until 16, Goldbach's conjecture We can see that 10 can be expressed as the sum 5+5 or 3+7. At least until 16, Goldbach's is confirmed, as NotGoldbachNumber is disabled. conjecture is confirmed, as NotGoldbachNumber is disabled. Note, that this restriction (of enumerating only between MININT and MAXINT) means that ProB is in Note, that this restriction (of enumerating only between MININT and MAXINT) means that ProB is in general incomplete and sometimes unsound when using mathematical integers. We recommend using the general incomplete and sometimes unsound when using mathematical integers. We recommend using the implementable integers only (INT, NAT, NAT1). In future, we plan to integrate an interval analysis implementable integers only (INT, NAT, NAT1). In future, we plan to integrate an interval into ProB[DEL:, :DEL]so as to highlight predicates over mathematical integers which are potentially analysis into ProB so as to highlight predicates over mathematical integers which are potentially problematic. problematic. The mathematical integers are, however, not the only source of undecidability. Another source stems The mathematical integers are, however, not the only source of undecidability. Another source from the deferred sets. Indeed, the size of those sets can be unknown, and they may even be stems from the deferred sets. Indeed, the size of those sets can be unknown, and they may even be infinite. infinite. Line 49: Line 49: == Complexity of Animation == == Complexity of Animation == However, [DEL:having addressed :DEL]the undecidability issue still [DEL:leaves open :DEL]the However, the undecidability issuestill the problem of complexity. The animation task can be problem of complexity. The animation task can be arbitrarily complex even for finite sets and arbitrarily complex even for finite sets and implementable integers. implementable integers. Take for example the following predicate, which declares <tt>rr</tt> to be a binary relation over Take for example the following predicate, which declares <tt>rr</tt> to be a binary relation over -3..3: -3..3: <pre> <pre> Revision as of 12:30, 18 April 2011 We assume that you have grasped the way that ProB setups up the initial states of a B machine as outlined in Tutorial Setup Phases. In this lesson, we examine the complexity of animation of B models in general, how ProB solves this problem, and what the ramification for users are. In general, animation of a B model is undecidable. More precisely, it is undecidable to find out • whether a solution to the PROPERTIES can be found, • whether a valid INITIALISATION exists and • whether any given operation can be applied. For example, the following B machine encodes Goldbach's conjecture (that every even number greater than 2 is a Goldbach number, i.e., it can be expressed as the sum of two primes): MACHINE Goldbach prime(x) == x>1 & !y.(y:NATURAL & y>1 & y<x => x mod y /= 0) GoldbachNumber(x,p1,p2) = SELECT x:NATURAL & x mod 2 = 0 & x>2 & p1<x & p2<x & p1<=p2 & prime(p1) & prime(p2) & x = p1+p2 THEN NotGoldbachNumber(x) = SELECT x:NATURAL & x mod 2 = 0 & x>2 & !(p1,p2).(p1<x & p2<x & p1<=p2 & prime(p1) & prime(p2) => x /= p1+p2) THEN If the conjecture is true, then the operation NotGoldbachNumber is disabled; if the conjecture is false then it is enabled. How does ProB overcome undecidability? First, it will enumerate integer variables only between MININT and MAXINT (unless the machine itself fixes the value to be outside of that range). Hence, ProB will look for solutions to the parameter x of NotGoldbachNumber only until MAXINT. Hence, if we set MAXINT to 16 (adding a definiton SET_PREF_MAXINT == 16) we get the following picture after executing the INITIALISATION: We can see that 10 can be expressed as the sum 5+5 or 3+7. At least until 16, Goldbach's conjecture is confirmed, as NotGoldbachNumber is disabled. Note, that this restriction (of enumerating only between MININT and MAXINT) means that ProB is in general incomplete and sometimes unsound when using mathematical integers. We recommend using the implementable integers only (INT, NAT, NAT1). In future, we plan to integrate an interval analysis into ProB so as to highlight predicates over mathematical integers which are potentially The mathematical integers are, however, not the only source of undecidability. Another source stems from the deferred sets. Indeed, the size of those sets can be unknown, and they may even be infinite. To overcome this issue, ProB is required to fix the cardinality of every deferred set to a finite number before animation (see also Understanding the ProB Setup Phases on how you can control the cardinality of the deferred sets). Complexity of Animation However, after addressing the undecidability issue, there is still the problem of complexity. The animation task can be arbitrarily complex even for finite sets and implementable integers. Take for example the following predicate, which declares rr to be a binary relation over -3..3: rr : (-3..3) <-> (-3..3) This predicate could be part of a precondition of an operation. In order to find possible ways to enable the operation, the possible values for rr need to be examined. Even if ProB did enumerate 100,000 candidate solutions for rr per second, it would take over 178 years to check all solutions for rr. Indeed, there are 49 possible pairs of values between -3 and 3 and hence 2^49 = 562,949,953,421,312 binary relations over -3..3. The following table shows how the size of a deferred set S influences the number of subsets, partial functions, relations over S, and a total function over relations of S: Number of distinct solutions │card(S)│sub:POW(S)│ pf:S+->S │rl: S<->S │tclos: (S<->S) --> (S<->S) │ │1 │2 │2 │2 │4 │ │2 │4 │9 │16 │1.84e+19 │ │3 │8 │64 │512 │overflow │ │4 │16 │624 │65,536 │overflow │ │5 │32 │7,776 │33,554,432│overflow │ │6 │64 │117,649 │6.87e+10 │overflow │ │7 │128 │2,097,152 │5.63e+14 │overflow │ │8 │256 │43,046,721 │1.85e+19 │overflow │ │9 │512 │1,000,000,000│2.42e+24 │overflow │ │n │2^n │(n+1)^n │2^n*n │(2^n*n)^(2^n*n) │ tclos is actually the type of a function that does appear in some models (often Event-B models): namely, the transitive closure which gets a relation as argument and computes its transitive closure. As you can see, we already run into problems with a card(S)=2. (Note that the age of the universe is about 4e+17 seconds and 4e+26 nanoseconds.) For card(S)=3 the number of possible solutions (512^ 512) can no longer be represented as a standard floating point number. In order to make a B model amenable to animation, we have three options: • use small cardinalities for the deferred sets and avoid using overly complicated datastructures (e.g., relations over relations or total functions over relations) • ensure that ProB does not have to enumerate variables with complicated types, by providing concrete values (e.g., in the form of an equality such as rel = {1|->2, 2|->3, 3|->1}) • hope that the constraint solver of ProB will be able to find a solution. In the next part of the tutorial, we will explain in more detail how the constraint solver of ProB works, so that you will be in a better position to write specifications which can be successfully animated or validated.
{"url":"https://prob.hhu.de/w/index.php?title=Tutorial_Understanding_the_Complexity_of_B_Animation&diff=next&oldid=917","timestamp":"2024-11-07T01:20:39Z","content_type":"application/xhtml+xml","content_length":"26604","record_id":"<urn:uuid:35021d81-4ce7-49db-90ff-99b6f7b10e79>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00541.warc.gz"}
Value: The Third Factor of Investing A stock’s valuation is the final factor of the Fama-French three-factor model of investment returns. A stock’s valuation is measured on a continuum from “value” to “growth.” In broad strokes, value stocks are cheap and growth stocks are expensive. But there are compelling reasons why an investor might be willing to pay more for a growth stock than a value stock. Consider a local utility company whose stock is selling for $10 a share. The price has not changed much in the past 20 years. The company only services a specific geographic area that is not experiencing population growth. It has also had consistent earnings each year and paid the entire amount to shareholders at $1 per share. This company has a price per earnings (P/E) ratio of 10. If you were to buy a share of this company, it would take 10 years to get paid back in earnings. After a decade you would have been paid back the price of your purchase and still own your share of stock. This type of stock is considered a value stock. In contrast, consider a technology startup company that has shown meteoric growth in the past three years. Annual company earnings were $0.25. Then they doubled to $0.50, and now they are $1 per share. The business has just entered into several synergistic deals with other companies that should vastly increase its market penetration. Expectations are for the earnings to continue to double for the next three years. Investors might rightly decide that the growing technology company is worth more than the static regional utility. If earnings grow to $2, then $4 and then $8, this technology company might be worth $80 a share instead of just $10. If so, the current P/E ratio of this stock would be 80. This type of stock is considered a growth stock. The P/E ratio is one common measurement used to place stocks on the value to growth continuum. Low P/E ratios are value and high P/E ratios are growth. When computing P/E ratios, the price (P) is easy to compute. Price is the share price on the stock market exchange throughout the day. Calculating earnings, however, is a little more difficult. Some measurements use the past four quarters of earnings, which is often called the trailing P/E ratio. A projected or forward P/E uses the analyst consensus of the next four quarters’ worth of earnings. For companies whose earnings are in flux, these two numbers can be very different. Additionally, during hard economic times a company’s earnings can decrease significantly or even disappear entirely. When earnings go to zero or turn negative, the P/E ratio ceases to make sense. For these reasons a more common way to place a stock on the value-to-growth scale is the price-to-book ratio. Book value is the value shareholders would theoretically receive if a company was liquidated and all of its assets sold. Using book value ensures that you have some positive number and you won’t be dividing by zero. Many other metrics are used to analyze the value or growth attributes of a stock. These include price per cash flow, cash flow growth, price per sales, sales growth, book value growth and dividend yield. Each metric is simply trying to measure a future return on your investment. Value and growth stocks react differently to new information. If the local utility company has some additional capital projects one year and only manages to pay $0.80 in dividends, it might not even change the stock’s price. Value stocks are more resilient in flat or down markets. Not so with growth stocks. If the technology startup’s earnings grow from $1.00 to $1.80 the next year, the stock’s price might tumble 27% from $80 per share down to $58 a share. The company’s phenomenal track record of growth would be broken, and a more realistic valuation would be given to the company on account of lowered expectations. Investors would wonder if instead of doubling earnings from $1 to $2 to $4 to $8, the company might only experience an 80% growth each year. That would mean earnings would grow from $1 to $1.80 to $3.24 to $5.83. As a result the stock might immediately drop 27% from $80 per share down to $58 per share. Many investors find it baffling that a company could grow their earnings by 80% and the stock would drop 27% as a result. But everything is based on the expectation that the company would grow by 100% each year for the next three years. That expectation was already factored into the share price. On average, value beats growth. The analysis that Eugene Fama presented during our conference last fall analyzed returns of the U.S. markets from 1927 through 2010. Rather than the price-to-book ratio, he used the book-to-price ratio. Because price is the market price, this is commonly called the book-to-market (BtM) ratio. When the BtM is high, a stock is a value stock. And when it is low, a stock is a growth stock. Fama analyzed each decile from the lowest 10% of the BtM growth stocks through the highest 10% of the value stocks. The results were remarkable. The average return of the lowest BtM decile was 10.99%, and the highest decile was 17.32%. There was a continuum such that except for a few, each more valuable decile showed a higher average return. The average return for the top 30% value was 16.30%, a full 4.80% higher than the bottom 30% growth’s 11.50%. The neutral 40% in the middle averaged 12.91%. Most major indexes are split between the value half and the growth half. The S&P 500 has an S&P 500 value and an S&P 500 growth. Similarly, the Russell 1000 Large Cap and the Russell 2000 Small Cap have subindexes for the value and growth side of each. In my own analysis I measured the returns of the Russell 1000 Value against the Russell 1000 Growth from 1979 through the end of February 2012. During that period, large-cap value returned 13.27% versus growth’s 12.47%, a premium of 0.80%. The difference in small cap was much greater. The Russell 2000 Small Cap Value returned 15.07% versus Small Cap Growth’s 12.44% for a premium of 2.63%. In future articles we will explore how the value premium is higher for small cap than it is for large cap. The landscape of investment returns gets more complex with just the two dimensions of size and value. We begin to see that expected returns do not fit our simple linear formula. Rather they follow a curve. Adding size and value as factors in our capital asset pricing model shows us certain truths about the markets. Many people follow the Dow and invest like the S&P 500, but tilting small and value provides a better return on average. And while large-cap growth technology companies got all the headlines and captured everyone’s attention during the last half of the 1990s, there is quiet money to be made by growing rich slowly with value stocks. E(R[i]) = R[f] + ß[i](E(R[m]) – R[f]) + s[i]SMB[p] + v[i]HML[p] E(R[i]) is the expected return on an investment R[f] is the risk-free rate of interest (about 3.67%) For example, what you could get in short-term Treasuries E(R[m]) is the expected return of the market (about 11.71%) E(R[m]) – R[f]) is sometimes called the market premium (about 8.04%) ß[i] = Cov(R[i], R[m]) / Var(R[m]) The sensitivity of the expected excess asset returns to the expected excess market returns. S[i] is the size measurement of the investment SMB[p] is the Small Minus Big premium (about 3.92%) V[i] is the value measurement of the investment HBL[p] is the High Minus Low premium (about 5.04%)
{"url":"https://www.marottaonmoney.com/value-the-third-factor-of-investing/","timestamp":"2024-11-02T11:56:45Z","content_type":"text/html","content_length":"74194","record_id":"<urn:uuid:b384867f-149d-415e-9607-afb0d7ade56c>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00748.warc.gz"}
Lepenski Vir - leyhunters.co.uk Lyle Borst is a professor of physics at the University of New York at Buffalo. His findings relating to research carried out at ancient sites in Britain can be found in Megalithic Software (Part1): a book we highly recommend (see advertisement elsewhere in this issue). The article here has been assembled by your editor from work in progress material supplied by Professor Borst. The Danube River forms a whirlpool: Lepenski Vir, near Orsova. Large fish come up the Danube to eat the little fish, confused by the currents, and men have founded a colony to eat the big fish. The small community by the river is protected by hills and the stream, and no defence works have been identified. The community is small and the houses appear triangular! A careful statistical study was made, and this follows. Lepenski Vir is unique in conferring upon the inhabitants monopoly control of a durable, reliable food supply: the fish which come to the whirlpool to feed. The site is difficult of access and apparently defensible, and has no man-made fortifications. Its occupation, given by Srejovic, from c.5400BC (L.V.I-a) to 4600 BC (I-e) shows cultural continuity. Proto-Lepenski Vir appears to be a different culture and L.V.111 is identified with Starcevo. Quantitative sources are limited to numerical values and illustrations in the English translation of Srejovic's book: New Discoveries at Lepenski Vir. The scales of the drawings are assumed to be accurate and the direction of north is assumed to be true. Figure A shows a histogram of the sizes of structures. There is only one dimension which stands out: 4.6-4.7 metres. Less well defined clusters occur elsewhere. To determine the unit of length used, the design of the structure must be reconstructed. Figure B shows a large foundation plan of building #37 (L.V.I-c,d). Srejovic has properly identified as an equilateral triangle. In Western Europe, among sites perhaps as early as 3000 BC, the equilateral triangle is uniformly represented by two right triangles with sides 15,26 and hypotenuse 30 units. This approximation cannot be used at Lepenski Vir for it does not give a coherent design. The position of the altar, which we believe to be an omphalos, just beyond the hearth is at a distance of 1/4 the base. An isosceles triangle with altitude on fourth the base is usually formed from triangle 17,34,38. The two triangles are then not compatible, for the base of one should be 2 x 34 whereas the base of the other should be 2 x 15. They could be used only is separate units were used. In religious structures in England, such as Canterbury and Westminster, Charlemagne's Chapel, Aachen and in the Emperor's Mausoleums of Japan, there are examples so such pairs of isosceles triangles with a common base. In every case the unit is the same or doubled for each triangle (Canterbury:2(34,17,38); 2(17,17,24), Westminster:2 (12,12,17)2; (45,24,51), Aachen 2(12,5,13)2; 2(50,10,51). It is clear that triangles 2(7,4,8); 2(4,8,9)1/2 have the correct relation, for all dimensions become integral and the bases are equal. We therefore assume that the unit of length for the equilateral triangle is 1/8 the length of the base, and for the altar, 1/16. The length of the unit for building #37 is then 0.86 metres (as compared to 0.83m found in many sites in Europe and Asia) Figure B in addition shows an oval stone off axis beyond the altar. In other buildings this is matched by a symmetrically placed sculpture on the opposite side of the axis. The center the oval stone forms a 3,4,5 triangle with the altar and axis, and it is presumed that in other structures the isosceles triangle is complete 2(4,3,5) The unit of this triangle in building #37 of 0.19 metres is unique and is not found in any other neolithic or bronze age site. If the other structures of L.V.1 are based upon the same geometry, the unit is 1/8 of the width of the building. This gives for the peak of the histogram 4.65 divided8 = 0.58 metres as the unit. The Babylonian cubit has a length of 0.545 metres, and it is evident that although Babylon and Mesopotamia are not contemporary with Lepenski Vir, the cubit is anticipated. The unit of the 3,4,5 triangle is one third of this value, and the oval stone is one cubit from the axis. Little can be said about the other structures, for the unit varies from 0.225m (building #49) to 1.33m. (building #57) A statistical treatment of the dimensions of the hearths is not possible with available data. They are characterised as about one metre long with a length-to-width ratio of 3:1 or 4:1. Dimensions of rectangular altars given in Srejovic's book show concentrations around a length-to-width ratio of 1.2 (FigC). The width of the altars show less variation than the length. The average of fifteen altars from 14 to 17 cm is 15.3cm. The most frequently used unit in house dimensions is 0.58m, and one quarter of this is 14.5cm. slightly less than the average alter width of 15.3cm. Three altars all from L.V.1-c buildings (#45 and 37) show ratios of 1.75=7/4, suggesting two 4,7,8 triangles with a common hypotenuse. The unit then is between 3.5and 5 cm. Approximately 1/4 of the altar width. Since building dimensions use multiples of four it is not surprising to find 1/4 of this unit in the altar width and a further reduction of 1/4 x 1/4 = 1/16 as a useful unit in proportioning the alters. If a width of four units is accepted for the altars, the length to width ration should be 1.25, not far from that observed, with a unit of about 3.6cm. The position of the altar seems to be the focus of the building design. For this reason we believe this to be an omphalos. Figure D (a) shows the pattern of these positions for L.V.1-a. Buildings 36,54a, 58 have axes which intersect at a common point. The axis of #36 passes through the centre of 34. We have also drawn lines through alignments of three or more centres. L.V.1-b(b) shows one strong alignment of which five points are retained from L.V.1-a. The indicated azimuth is 168 degrees (as compared to 166 degrees for L.V.1-a). This is not likely to be a stellar alignment, and so, for the present is unidentified. A prominent fiducial mark on the horizon, however, may be involved. L.V.1-c) retains the 166-168 degree alignment, measured as 165 degrees. Three points are preserved from L.V.1-b but none from L.V.1-a. The alignments are similar to those found at El Mina, north of Mecca, Arabia, and alignments in England. In the case of Lepenski Vir the date is established whereas no exact date can be assigned to the other sites. Lepenski Vir 1-e (not illustrated) shows a very different pattern. The axes of the houses are nearly parallel and point north of east. Thirteen houses in the north half of the site have a range of 54-72 degrees with an average azimuth of 65 degrees. None of these is present in L.V.1-d(not illustrated). The clustering around 65 degrees is not strong, so a celestial alignment is not expected. Information on the first settlement is mostly by inference. Hearths from proto-Lepenski Vir have survived, but house size and form can only be suggested from the pattern of debris remaining. The hearths were well constructed of slabs 'usually 30x25x10cm have lengths of 0.8-1.0 metres and show a ration of 3:1 or 4:1 Again the width seems to show least variation (0.2-0.25m); and this width is not far from the unit of the 3,4,5 triangle of building #37 (0.19m). The form of the buildings was not triangular. Srejovic suggest a length of 3,4-4.0 metres, and an egg shaped form. Such an egg shaped form is common throughout Europe. The trilithos of Stonehenge are tangent to an inferred oval. Many oval sanctuaries, both pagan and Christian, have been found throughout Europe. The inner horseshoe of Sarmizugetusa has such a form. Predynastic graves in Egypt and Palestine of the same age result from the same geometry. We would expect the 3,4,5 triangle to be set out in half megalithic yards (0.83/2=0.415m), as at many other sites. The position of the hearth is suggested. The length of the oval would be 4.15m and the maximum width, 3.32m.. There may be the same variation in size found at the later site. The foregoing analysis strongly supports Srejovic's inference 'that the builders of Lepenski Vir possessed quite definite mathematical knowledge which they employed skillfully in measuring the terrain and fixing the proportions, shapes and dimensions of the houses' (164,P.51). The culture was using the cubit 2-3000 years before it appeared in Mesopotamia. The use of Pythagorean triangles is well supported. The non-random alignment of house altars again suggests a coherent plan not evident from a cursory inspection. The alignment with an azimuth 165-168 degrees in L.V.1-a,b,c is apparently significant. It cannot be a solar alignment. It may be an attempt to establish a N-S meridian, but the error is great. Its most probable explanation is that the alignment is upon a prominent feature on the We have noted a cluster of houses around a single point (L.V.1-a) The altars of houses 36 and 54a as well as the curved end of 58 are equidistant from this point at a distance of about seven metres. We suggest no explanation. We have noticed with interest, as has Srejovic, the frequent use of the integer four. This appears in the dimensions of houses and the position of the altar and sculptures, in the dimensions of the altars and hearths. This seems to be a very special number. Our geographical discussion has largely ignored chronology, for this does not come immediately from our analysis. We have assumed that most of our sites are third millennium BC or later.
{"url":"http://leyhunters.co.uk/archives/arch19.html","timestamp":"2024-11-01T22:10:32Z","content_type":"text/html","content_length":"14425","record_id":"<urn:uuid:e0f16d74-612d-4ce6-8ddf-08f67b3569fb>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00002.warc.gz"}
Convert binary to decimal I have just worked on a project where I had to create a binary to decimal converter. For the same project, I developed other binary, hex, and octal conversion tools. For my machine learning project, I developed a binary to decimal converter. My client insisted to develop such to enhance security for the project. Honestly, considering recent security threats, it is really important to enhance security for an app. When I came to know about developing this binary to decimal converter, I had no idea who I would do that because I had very little knowledge about this number conversion system. I spent one full day learning about it. For that reason, I thought, sharing code for such conversion can be beneficial for the developer who may work on such a project in the future. About binary and decimal numbers: Binary numbers are written in just zeros and ones. They are the 2-bit numbers and all the electronic devices and computer processors can understand binary numbers only. For the last several years, binary numbers have been used by almost all devices. Decimal numbers are the base-10 numbers. It contains 0 to 9 numbers. After binary numbers, decimals are the most used numbering system right now. Most machine learning, artificial intelligence products, and IoT products use this numbering system. Why did I need to develop a binary to the decimal converter? As decimal numbers are represented in 0 to 9 and binary contain zeros and ones. Decimal numbers can easily be read and understood by humans while it is pretty difficult to read the convert binary numbers. For that reason, my client wanted me to develop such a tool. Javascript code to convert binary to decimal: function bintodec() { var input = document.getElementById("text_value").value; var cc = new BigNumber(input, 2); document.getElementById("ans").innerHTML = cc; And done! Copy and paste the above code to convert binary to decimal. In this code, I used the Bigumber.js javascript library. If you are dealing with such big numbers, then you need to use Bignumber.js otherwise you might get an error while getting the answer. So, this is the technique I used to convert binary to decimal. If you have used any other technique for such conversion, then do let me know by dropping a comment. I would love to hear from you. Top comments (0) For further actions, you may consider blocking this person and/or reporting abuse
{"url":"https://dev.to/mallaya_mathur/convert-binary-to-decimal-5cc8","timestamp":"2024-11-14T15:27:00Z","content_type":"text/html","content_length":"64606","record_id":"<urn:uuid:091bd2fa-77f3-409a-9d86-5519babafa9e>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00072.warc.gz"}
7.2: Right Triangle Trigonometry (2024) Page ID \( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}}}\) \( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{#1}}} \) \( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\) ( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\AA}{\unicode[.8,0]{x212B}}\) \( \newcommand{\vectorA}[1]{\vec{#1}} % arrow\) \( \newcommand{\vectorAt}[1]{\vec{\text{#1}}} % arrow\) \( \newcommand{\vectorB}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}}}\) \( \newcommand{\vectorC}[1]{\textbf{#1}}\) \( \newcommand{\vectorD}[1]{\overrightarrow{#1}}\) \( \newcommand{\vectorDt}[1]{\overrightarrow{\text{#1}}}\) \( \newcommand{\vectE}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{\mathbf {#1}}}} \) \( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}}}\) \( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{#1}}} \) \(\newcommand{\avec}{\mathbf a}\) \(\newcommand{\bvec}{\mathbf b}\) \(\newcommand{\cvec}{\mathbf c}\) \(\newcommand{\dvec}{\mathbf d}\) \(\newcommand{\dtil}{\widetilde{\mathbf d}}\) \(\newcommand{\ evec}{\mathbf e}\) \(\newcommand{\fvec}{\mathbf f}\) \(\newcommand{\nvec}{\mathbf n}\) \(\newcommand{\pvec}{\mathbf p}\) \(\newcommand{\qvec}{\mathbf q}\) \(\newcommand{\svec}{\mathbf s}\) \(\ newcommand{\tvec}{\mathbf t}\) \(\newcommand{\uvec}{\mathbf u}\) \(\newcommand{\vvec}{\mathbf v}\) \(\newcommand{\wvec}{\mathbf w}\) \(\newcommand{\xvec}{\mathbf x}\) \(\newcommand{\yvec}{\mathbf y} \) \(\newcommand{\zvec}{\mathbf z}\) \(\newcommand{\rvec}{\mathbf r}\) \(\newcommand{\mvec}{\mathbf m}\) \(\newcommand{\zerovec}{\mathbf 0}\) \(\newcommand{\onevec}{\mathbf 1}\) \(\newcommand{\real} {\mathbb R}\) \(\newcommand{\twovec}[2]{\left[\begin{array}{r}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\ctwovec}[2]{\left[\begin{array}{c}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\threevec} [3]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\cthreevec}[3]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\fourvec}[4]{\left[\begin{array} {r}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\cfourvec}[4]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\fivevec}[5]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\cfivevec}[5]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\mattwo}[4]{\left[\begin{array}{rr}#1 \amp #2 \\ #3 \amp #4 \\ \end{array}\right]}\) \(\newcommand{\laspan}[1]{\text{Span}\{#1\}}\) \(\newcommand{\bcal}{\cal B}\) \(\newcommand{\ccal}{\cal C}\) \(\newcommand{\scal}{\cal S}\) \(\newcommand{\ wcal}{\cal W}\) \(\newcommand{\ecal}{\cal E}\) \(\newcommand{\coords}[2]{\left\{#1\right\}_{#2}}\) \(\newcommand{\gray}[1]{\color{gray}{#1}}\) \(\newcommand{\lgray}[1]{\color{lightgray}{#1}}\) \(\ newcommand{\rank}{\operatorname{rank}}\) \(\newcommand{\row}{\text{Row}}\) \(\newcommand{\col}{\text{Col}}\) \(\renewcommand{\row}{\text{Row}}\) \(\newcommand{\nul}{\text{Nul}}\) \(\newcommand{\var} {\text{Var}}\) \(\newcommand{\corr}{\text{corr}}\) \(\newcommand{\len}[1]{\left|#1\right|}\) \(\newcommand{\bbar}{\overline{\bvec}}\) \(\newcommand{\bhat}{\widehat{\bvec}}\) \(\newcommand{\bperp}{\ bvec^\perp}\) \(\newcommand{\xhat}{\widehat{\xvec}}\) \(\newcommand{\vhat}{\widehat{\vvec}}\) \(\newcommand{\uhat}{\widehat{\uvec}}\) \(\newcommand{\what}{\widehat{\wvec}}\) \(\newcommand{\Sighat}{\ widehat{\Sigma}}\) \(\newcommand{\lt}{<}\) \(\newcommand{\gt}{>}\) \(\newcommand{\amp}{&}\) \(\definecolor{fillinmathshade}{gray}{0.9}\) Learning Objectives • Use right triangles to evaluate trigonometric functions. • Find function values for 30°(\(\dfrac{\pi}{6}\)),45°(\(\dfrac{\pi}{4}\)),and 60°(\(\dfrac{\pi}{3}\)). • Use equal cofunctions of complementary angles. • Use the definitions of trigonometric functions of any angle. • Use right-triangle trigonometry to solve applied problems. Mt. Everest, which straddles the border between China and Nepal, is the tallest mountain in the world. Measuring its height is no easy task and, in fact, the actual measurement has been a source of controversy for hundreds of years. The measurement process involves the use of triangles and a branch of mathematics known as trigonometry. In this section, we will define a new group of functions known as trigonometric functions, and find out how they can be used to measure heights, such as those of the tallest mountains. We have previously defined the sine and cosine of an angle in terms of the coordinates of a point on the unit circle intersected by the terminal side of the angle: \[ \begin{align*} \cos t &= x \\ \sin t &=y \end{align*} \] In this section, we will see another way to define trigonometric functions using properties of right triangles. Using Right Triangles to Evaluate Trigonometric Functions In earlier sections, we used a unit circle to define the trigonometric functions. In this section, we will extend those definitions so that we can apply them to right triangles. The value of the sine or cosine function of \(t\) is its value at \(t\) radians. First, we need to create our right triangle. Figure \(\PageIndex{1}\) shows a point on a unit circle of radius 1. If we drop a vertical line segment from the point \((x,y)\) to the x-axis, we have a right triangle whose vertical side has length \(y\) and whose horizontal side has length \(x\). We can use this right triangle to redefine sine, cosine, and the other trigonometric functions as ratios of the sides of a right triangle. We know \[ \cos t= \frac{x}{1}=x \] Likewise, we know \[ \sin t= \frac{y}{1}=y \] These ratios still apply to the sides of a right triangle when no unit circle is involved and when the triangle is not in standard position and is not being graphed using \((x,y)\) coordinates. To be able to use these ratios freely, we will give the sides more general names: Instead of \(x\),we will call the side between the given angle and the right angle the adjacent side to angle \(t\). (Adjacent means “next to.”) Instead of \(y\),we will call the side most distant from the given angle the opposite side from angle \(t\). And instead of \(1\),we will call the side of a right triangle opposite the right angle the hypotenuse. These sides are labeled in Figure \(\PageIndex{2}\). Understanding Right Triangle Relationships Given a right triangle with an acute angle of \(t\), \[ \sin (t) &= \dfrac{\text{opposite}}{\text{hypotenuse}} \label{sindef}\\ \cos (t) &= \dfrac{\text{adjacent}}{\text{hypotenuse}} \label{cosdef}\\ \tan (t) &= \dfrac{\text{opposite}}{\text{adjacent}} A common mnemonic for remembering these relationships is SohCahToa, formed from the first letters of “Sine is opposite over hypotenuse, Cosine is adjacent over hypotenuse, Tangent is opposite over a how to: Given the side lengths of a right triangle and one of the acute angles, find the sine, cosine, and tangent of that angle 1. Find the sine as the ratio of the opposite side to the hypotenuse. 2. Find the cosine as the ratio of the adjacent side to the hypotenuse. 3. Find the tangent is the ratio of the opposite side to the adjacent side. Example \(\PageIndex{1}\): Evaluating a Trigonometric Function of a Right Triangle Given the triangle shown in Figure \(\PageIndex{3}\), find the value of \(\cos α\). The side adjacent to the angle is 15, and the hypotenuse of the triangle is 17, so via Equation \ref{cosdef}: \[\begin{align*} \cos (α) &= \dfrac{\text{adjacent}}{\text{hypotenuse}} \\[4pt] &= \dfrac{15}{17} \end{align*}\] Exercise \(\PageIndex{1}\) Given the triangle shown in Figure \(\PageIndex{4}\), find the value of \(\sin t\). Relating Angles and Their Functions When working with right triangles, the same rules apply regardless of the orientation of the triangle. In fact, we can evaluate the six trigonometric functions of either of the two acute angles in the triangle in Figure \(\PageIndex{5}\). The side opposite one acute angle is the side adjacent to the other acute angle, and vice versa. We will be asked to find all six trigonometric functions for a given angle in a triangle. Our strategy is to find the sine, cosine, and tangent of the angles first. Then, we can find the other trigonometric functions easily because we know that the reciprocal of sine is cosecant, the reciprocal of cosine is secant, and the reciprocal of tangent is cotangent. how to: Given the side lengths of a right triangle, evaluate the six trigonometric functions of one of the acute angles 1. If needed, draw the right triangle and label the angle provided. 2. Identify the angle, the adjacent side, the side opposite the angle, and the hypotenuse of the right triangle. 3. Find the required function: □ sine as the ratio of the opposite side to the hypotenuse □ cosine as the ratio of the adjacent side to the hypotenuse □ tangent as the ratio of the opposite side to the adjacent side □ secant as the ratio of the hypotenuse to the adjacent side □ cosecant as the ratio of the hypotenuse to the opposite side □ cotangent as the ratio of the adjacent side to the opposite side Example \(\PageIndex{2}\): Evaluating Trigonometric Functions of Angles Not in Standard Position Using the triangle shown in Figure \(\PageIndex{6}\), evaluate \( \sin α, \cos α, \tan α, \sec α, \csc α,\) and \( \cot α\). \[ \begin{align*} \sin α &= \dfrac{\text{opposite } α}{\text{hypotenuse}} = \dfrac{4}{5} \\ \cos α &= \dfrac{\text{adjacent to }α}{\text{hypotenuse}}=\dfrac{3}{5} \\ \tan α &= \dfrac{\text{opposite } α}{\text{adjacent to }α}=\dfrac{4}{3} \\ \sec α &= \dfrac{\text{hypotenuse}}{\text{adjacent to }α}= \dfrac{5}{3} \\ \csc α &= \dfrac{\text{hypotenuse}}{\text{opposite }α}=\dfrac{5}{4} \\ \cot α &= \ dfrac{\text{adjacent to }α}{\text{opposite }α}=\dfrac{3}{4} \end{align*}\] Exercise \(\PageIndex{2}\) Using the triangle shown in Figure \(\PageIndex{7}\), evaluate \( \sin t, \cos t,\tan t, \sec t, \csc t,\) and \(\cot t\). \[\begin{align*} \sin t &= \frac{33}{65}, \cos t= \frac{56}{65},\tan t= \frac{33}{56}, \\ \\ \sec t &= \frac{65}{56},\csc t= \frac{65}{33},\cot t= \frac{56}{33} \end{align*}\] Finding Trigonometric Functions of Special Angles Using Side Lengths We have already discussed the trigonometric functions as they relate to the special angles on the unit circle. Now, we can use those relationships to evaluate triangles that contain those special angles. We do this because when we evaluate the special angles in trigonometric functions, they have relatively friendly values, values that contain either no or just one square root in the ratio. Therefore, these are the angles often used in math and science problems. We will use multiples of \(30°, 60°,\) and \(45°\), however, remember that when dealing with right triangles, we are limited to angles between \(0° \text{ and } 90°\). Suppose we have a \(30°,60°,90°\) triangle, which can also be described as a \(\frac{π}{6}, \frac{π}{3},\frac{π}{2}\) triangle. The sides have lengths in the relation \(s,\sqrt{3}s,2s.\) The sides of a \(45°,45°,90° \)triangle, which can also be described as a \(\frac{π}{4},\frac{π}{4},\frac{π}{2}\) triangle, have lengths in the relation \(s,s,\sqrt{2}s.\) These relations are shown in Figure \(\ We can then use the ratios of the side lengths to evaluate trigonometric functions of special angles. Given trigonometric functions of a special angle, evaluate using side lengths. 1. Use the side lengths shown in Figure \(\PageIndex{8}\) for the special angle you wish to evaluate. 2. Use the ratio of side lengths appropriate to the function you wish to evaluate. Example \(\PageIndex{3}\): Evaluating Trigonometric Functions of Special Angles Using Side Lengths Find the exact value of the trigonometric functions of \(\frac{π}{3}\), using side lengths. \[\begin{align*} \sin (\dfrac{π}{3}) &= \dfrac{\text{opp}}{\text{hyp}}=\dfrac{\sqrt{3}s}{2s}=\dfrac{\sqrt{3}}{2} \\ \cos (\dfrac{π}{3}) &= \dfrac{\text{adj}}{\text{hyp}}=\dfrac{s}{2s}=\dfrac{1}{2} \\ \tan (\dfrac{π}{3}) &= \dfrac{\text{opp}}{\text{adj}} =\dfrac{\sqrt{3}s}{s}=\sqrt{3} \\ \sec (\dfrac{π}{3}) &= \dfrac{\text{hyp}}{\text{adj}} = \dfrac{2s}{s}=2 \\ \csc (\dfrac{π}{3}) &= \dfrac{\text {hyp}}{\text{opp}} =\dfrac{2s}{\sqrt{3}s}=\dfrac{2}{\sqrt{3}}=\dfrac{2\sqrt{3}}{3} \\ \cot (\dfrac{π}{3}) &= \dfrac{\text{adj}}{\text{opp}}=\dfrac{s}{\sqrt{3}s}=\dfrac{1}{\sqrt{3}}=\dfrac{\sqrt{3}} {3} \end{align*}\] Exercise \(\PageIndex{3}\) Find the exact value of the trigonometric functions of \(\frac{π}{4}\) using side lengths. \( \sin (\frac{π}{4})=\frac{\sqrt{2}}{2}, \cos (\frac{π}{4})=\frac{\sqrt{2}}{2}, \tan (\frac{π}{4})=1,\) \( \sec (\frac{π}{4})=\sqrt{2}, \csc (\frac{π}{4})=\sqrt{2}, \cot (\frac{π}{4}) =1 \) Using Equal Cofunction of Complements If we look more closely at the relationship between the sine and cosine of the special angles relative to the unit circle, we will notice a pattern. In a right triangle with angles of \(\frac{π}{6}\) and \(\frac{π}{3}\), we see that the sine of \(\frac{π}{3}\), namely \(\frac{\sqrt{3}}{2}\), is also the cosine of \(\frac{π}{6}\), while the sine of \(\frac{π}{6}\), namely \(\frac{1}{2},\) is also the cosine of \(\frac{π}{3}\) (Figure \(\PageIndex{9}\)). \[\begin{align*} \sin \frac{π}{3} &= \cos \frac{π}{6}=\frac{\sqrt{3}s}{2s}=\frac{\sqrt{3}}{2} \\ \sin \frac{π}{6} &= \cos \frac{π}{3}=\frac{s}{2s}=\frac{1}{2} \end{align*}\] This result should not be surprising because, as we see from Figure \(\PageIndex{9}\), the side opposite the angle of \(\frac{π}{3}\) is also the side adjacent to \(\frac{π}{6}\), so \(\sin (\frac{π} {3})\) and \(\cos (\frac{π}{6})\) are exactly the same ratio of the same two sides, \(\sqrt{3} s\) and \(2s.\) Similarly, \( \cos (\frac{π}{3})\) and \( \sin (\frac{π}{6})\) are also the same ratio using the same two sides, \(s\) and \(2s\). The interrelationship between the sines and cosines of \(\frac{π}{6}\) and \(\frac{π}{3}\) also holds for the two acute angles in any right triangle, since in every case, the ratio of the same two sides would constitute the sine of one angle and the cosine of the other. Since the three angles of a triangle add to π, and the right angle is \(\frac{π}{2}\), the remaining two angles must also add up to \(\frac{π}{2}\). That means that a right triangle can be formed with any two angles that add to \(\frac{π}{2}\)—in other words, any two complementary angles. So we may state a cofunction identity: If any two angles are complementary, the sine of one is the cosine of the other, and vice versa. This identity is illustrated in Figure \(\PageIndex{10}\). Using this identity, we can state without calculating, for instance, that the sine of \(\frac{π}{12}\) equals the cosine of \(\frac{5π}{12}\), and that the sine of \(\frac{5π}{12}\) equals the cosine of \(\frac{π}{12}\). We can also state that if, for a certain angle \(t, \cos t= \frac{5}{13},\) then \( \sin (\frac{π}{2}−t)=\frac{5}{13}\) as well. The cofunction identities in radians are listed in Table \(\PageIndex{1}\). Table \(\PageIndex{1}\) \( \cos t= \sin (\frac{π}{2}−t)\) \( \sin t= \cos (\dfrac{π}{2}−t)\) \( \tan t= \cot (\dfrac{π}{2}−t) \) \( \cot t= \tan (\dfrac{π}{2}−t)\) \( \sec t= \csc (\dfrac{π}{2}−t) \) \( \csc t= \sec (\dfrac{π}{2}−t)\) how to: Given the sine and cosine of an angle, find the sine or cosine of its complement. 1. To find the sine of the complementary angle, find the cosine of the original angle. 2. To find the cosine of the complementary angle, find the sine of the original angle. Example \(\PageIndex{4}\): Using Cofunction Identities If \( \sin t = \frac{5}{12},\) find \(( \cos \frac{π}{2}−t)\). According to the cofunction identities for sine and cosine, \[ \sin t= \cos (\dfrac{π}{2}−t). \nonumber\] \[ \cos (\dfrac{π}{2}−t)= \dfrac{5}{12}. \nonumber\] Exercise \(\PageIndex{4}\) If \(\csc (\frac{π}{6})=2,\) find \( \sec (\frac{π}{3}).\) Using Trigonometric Functions In previous examples, we evaluated the sine and cosine in triangles where we knew all three sides. But the real power of right-triangle trigonometry emerges when we look at triangles in which we know an angle but do not know all the sides. how to: Given a right triangle, the length of one side, and the measure of one acute angle, find the remaining sides 1. For each side, select the trigonometric function that has the unknown side as either the numerator or the denominator. The known side will in turn be the denominator or the numerator. 2. Write an equation setting the function value of the known angle equal to the ratio of the corresponding sides. 3. Using the value of the trigonometric function and the known side length, solve for the missing side length. Example \(\PageIndex{5}\): Finding Missing Side Lengths Using Trigonometric Ratios Find the unknown sides of the triangle in Figure \(\PageIndex{11}\). We know the angle and the opposite side, so we can use the tangent to find the adjacent side. \[ \tan (30°)= \dfrac{7}{a} \nonumber\] We rearrange to solve for \(a\). \[ a &=\dfrac{7}{ \tan (30°)} \\ & =12.1 \nonumber\] We can use the sine to find the hypotenuse. \[ \sin (30°)= \dfrac{7}{c} \nonumber\] Again, we rearrange to solve for \(c\). \[\begin{align*} c &= \dfrac{7}{\sin (30°)} =14 \end{align*}\] Exercise \(\PageIndex{5}\): A right triangle has one angle of \(\frac{π}{3}\) and a hypotenuse of 20. Find the unknown sides and angle of the triangle. \(\mathrm{adjacent=10; opposite=10 \sqrt{3}; }\) missing angle is \(\frac{π}{6}\) Using Right Triangle Trigonometry to Solve Applied Problems Right-triangle trigonometry has many practical applications. For example, the ability to compute the lengths of sides of a triangle makes it possible to find the height of a tall object without climbing to the top or having to extend a tape measure along its height. We do so by measuring a distance from the base of the object to a point on the ground some distance away, where we can look up to the top of the tall object at an angle. The angle of elevation of an object above an observer relative to the observer is the angle between the horizontal and the line from the object to the observer's eye. The right triangle this position creates has sides that represent the unknown height, the measured distance from the base, and the angled line of sight from the ground to the top of the object. Knowing the measured distance to the base of the object and the angle of the line of sight, we can use trigonometric functions to calculate the unknown height. Similarly, we can form a triangle from the top of a tall object by looking downward. The angle of depression of an object below an observer relative to the observer is the angle between the horizontal and the line from the object to the observer's eye. See Figure \(\PageIndex{12}\). how to: Given a tall object, measure its height indirectly 1. Make a sketch of the problem situation to keep track of known and unknown information. 2. Lay out a measured distance from the base of the object to a point where the top of the object is clearly visible. 3. At the other end of the measured distance, look up to the top of the object. Measure the angle the line of sight makes with the horizontal. 4. Write an equation relating the unknown height, the measured distance, and the tangent of the angle of the line of sight. 5. Solve the equation for the unknown height. Example \(\PageIndex{6}\): Measuring a Distance Indirectly To find the height of a tree, a person walks to a point 30 feet from the base of the tree. She measures an angle of 57° 57° between a line of sight to the top of the tree and the ground, as shown in Figure \(\PageIndex{13}\). Find the height of the tree. We know that the angle of elevation is \(57°\) and the adjacent side is 30 ft long. The opposite side is the unknown height. The trigonometric function relating the side opposite to an angle and the side adjacent to the angle is the tangent. So we will state our information in terms of the tangent of \(57°\), letting \(h\) be the unknown height. \[\begin{array}{cl} \tan θ = \dfrac{\text{opposite}}{\text{adjacent}} & \text{} \\ \tan (57°) = \dfrac{h}{30} & \text{Solve for }h. \\ h=30 \tan (57°) & \text{Multiply.} \\ h≈46.2 & \text{Use a calculator.} \end{array} \] The tree is approximately 46 feet tall. Exercise \(\PageIndex{6}\): How long a ladder is needed to reach a windowsill 50 feet above the ground if the ladder rests against the building making an angle of \(\frac{5π}{12}\) with the ground? Round to the nearest foot. About 52 ft Access these online resources for additional instruction and practice with right triangle trigonometry. Visit this website for additional practice questions from Learningpod. Key Equations Cofunction Identities \[\begin{align*} \cos t &= \sin ( \frac{π}{2}−t) \\ \sin t &= \cos (\frac{π}{2}−t) \\ \tan t &= \cot (\frac{π}{2}−t) \\ \cot t &= \tan (\frac{π}{2}−t) \\ \sec t &= \csc (\frac{π}{2}−t) \\ \csc t &= \ sec (\frac{π}{2}−t) \end{align*}\] Key Concepts • We can define trigonometric functions as ratios of the side lengths of a right triangle. See Example. • The same side lengths can be used to evaluate the trigonometric functions of either acute angle in a right triangle. See Example. • We can evaluate the trigonometric functions of special angles, knowing the side lengths of the triangles in which they occur. See Example. • Any two complementary angles could be the two acute angles of a right triangle. • If two angles are complementary, the cofunction identities state that the sine of one equals the cosine of the other and vice versa. See Example. • We can use trigonometric functions of an angle to find unknown side lengths. • Select the trigonometric function representing the ratio of the unknown side to the known side. See Example. • Right-triangle trigonometry permits the measurement of inaccessible heights and distances. • The unknown height or distance can be found by creating a right triangle in which the unknown height or distance is one of the sides, and another side and angle are known. See Example. adjacent side in a right triangle, the side between a given angle and the right angle angle of depression the angle between the horizontal and the line from the object to the observer’s eye, assuming the object is positioned lower than the observer angle of elevation the angle between the horizontal and the line from the object to the observer’s eye, assuming the object is positioned higher than the observer opposite side in a right triangle, the side most distant from a given angle the side of a right triangle opposite the right angle
{"url":"https://archerfieldmotoplex.info/article/7-2-right-triangle-trigonometry-2","timestamp":"2024-11-07T15:51:26Z","content_type":"text/html","content_length":"100302","record_id":"<urn:uuid:a3ea2075-396a-40f9-b202-94b3bf171221>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00764.warc.gz"}
Compute Outputs Compute Outputs can be used to update the output calculations on the worksheet. Compute Outputs can be accessed by clicking on the Compute Outputs button in the Active Worksheet menu on the main screen or by clicking the key. The figure below shows the screenshot of the Active Worksheet toolbar. The Compute Outputs button does all the required calculations, updates the text and graph outputs, and displays conclusions, if any. Use it whenever you have changed the data on your worksheet and want to update the results displayed on the worksheet. This section contains some frequently asked questions. If you have other questions about this module, please get in touch with technical support. I do not see any changes when I click the Compute Outputs button. Check the date/time field to see if the date has been updated. If the date has been updated, then the calculations have been completed. The Excel worksheet update has been turned off, so you may not notice changes to the worksheet unless all the calculations are completed. If the text output and figures are the same, you will not notice any change in the output area. The software does not append the calculations below the old ones but replaces any calculations already made earlier with new values, if any. Why must we click the Compute Output button? Why can't Sigma Magic do this automatically? If no worksheet input data is required, then the computations are done automatically for you. However, if any worksheet data is required, Sigma Magic does not know if you have completed entering/ changing the input data cells. Hence, you must manually click the Compute Outputs button after entering all the required data. Do I have to click on Compute Output for all tools? You do not need to click Compute Outputs if no calculations are required. However, there is no harm if you click on Compute Outputs. You may also want to look at the Conclusion box, which sometimes indicates that it expects you to click on Compute Output after entering the data. Is there any way to undo after I click on Compute Outputs? Once Compute Outputs is clicked, the text output and graph are updated based on your changes to the input data. There is no way to get the old graph backā the back button does not work here! One option is to replace the data you have changed with the old data set, and the graph will probably change back to the one you had earlier. Another option is to create a new worksheet tool for different data sets so you can retain the old worksheet graphs (if required). &copy Rapid Sigma Solutions LLP. All rights reserved.
{"url":"https://sigmamagic.com/help/SM17/Compute%20Outputs/index.php","timestamp":"2024-11-04T17:51:28Z","content_type":"text/html","content_length":"36187","record_id":"<urn:uuid:cbc1c050-a28f-488a-91e0-55cb3a4029ff>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00246.warc.gz"}