content
stringlengths
86
994k
meta
stringlengths
288
619
locally strongly finitely presentable category nLab locally strongly finitely presentable category Category theory Compact objects objects $d \in C$ such that $C(d,-)$ commutes with certain colimits Relative version A locally strongly finitely presentable category is like a locally finitely presentable category, but where the class of filtered colimits (respectively the class of finite limits) is replaced by the class of sifted colimits (respectively the class of finite products). Locally strongly finitely presentable category are precisely those categories equivalent to varieties of algebras. A category $C$ satisfying (any of) the following equivalent conditions is said to be locally strongly finitely presentable (or lsfp): Related pages • Jiri Adamek, Jiri Rosicky, On sifted colimits and generalized varieties, TAC 8 (2001) pp.33-53. (tac) • Jiri Adamek, Jiri Rosicky, Enrico Vitale, What are sifted colimits?, TAC 23 (2010) pp. 251–260. (tac) • Jiri Adamek, Jiri Rosicky, Enrico Vitale, Algebraic Theories - a Categorical Introduction to General Algebra , Cambrige UP 2010. (ch. 2) (draft) Last revised on August 28, 2024 at 21:18:42. See the history of this page for a list of all contributions to it.
{"url":"https://ncatlab.org/nlab/show/locally+strongly+finitely+presentable+category","timestamp":"2024-11-14T01:58:30Z","content_type":"application/xhtml+xml","content_length":"26053","record_id":"<urn:uuid:ecb03806-9eea-47dc-a120-fbc6bc82391d>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00601.warc.gz"}
class optuna.importance.FanovaImportanceEvaluator(*, n_trees=64, max_depth=64, seed=None)[source] fANOVA importance evaluator. Implements the fANOVA hyperparameter importance evaluation algorithm in An Efficient Approach for Assessing Hyperparameter Importance. fANOVA fits a random forest regression model that predicts the objective values of COMPLETE trials given their parameter configurations. The more accurate this model is, the more reliable the importances assessed by this class are. This class takes over 1 minute when given a study that contains 1000+ trials. We published optuna-fast-fanova library, that is a Cython accelerated fANOVA implementation. By using it, you can get hyperparameter importances within a few seconds. Requires the sklearn Python package. The performance of fANOVA depends on the prediction performance of the underlying random forest model. In order to obtain high prediction performance, it is necessary to cover a wide range of the hyperparameter search space. It is recommended to use an exploration-oriented sampler such as RandomSampler. ☆ n_trees (int) – The number of trees in the forest. ☆ max_depth (int) – The maximum depth of the trees in the forest. ☆ seed (int | None) – Controls the randomness of the forest. For deterministic behavior, specify a value other than None. evaluate(study[, params, target]) Evaluate parameter importances based on completed trials in the given study. evaluate(study, params=None, *, target=None)[source] Evaluate parameter importances based on completed trials in the given study. This method is not meant to be called by library users. ○ study (Study) – An optimized study. ○ params (list[str] | None) – A list of names of parameters to assess. If None, all parameters that are present in all of the completed trials are assessed. ○ target (Callable[[FrozenTrial], float] | None) – A function to specify the value to evaluate importances. If it is None and study is being used for single-objective optimization, the objective values are used. Can also be used for other trial attributes, such as the duration, like target=lambda t: t.duration.total_seconds(). Specify this argument if study is being used for multi-objective optimization. For example, to get the hyperparameter importance of the first objective, use target=lambda t: t.values [0] for the target parameter. A dict where the keys are parameter names and the values are assessed importances. Return type: dict[str, float]
{"url":"https://optuna.readthedocs.io/en/stable/reference/generated/optuna.importance.FanovaImportanceEvaluator.html","timestamp":"2024-11-11T16:34:20Z","content_type":"text/html","content_length":"23016","record_id":"<urn:uuid:b20d1ba8-faa4-42ab-9ca1-f30ebf79b964>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00004.warc.gz"}
Filtering - Fundamentals of Signal Processing - VRU June 2, 2021 Back to: Fundamentals of Signal Processing A filter is an operation that discriminates the components of an object over some domain. For example: • In 3-dimensional space, filtering may include spatial discrimination: rejecting dirt and other large particles while admitting clean water to pass through. • In the domain of time, Olympic qualifying races use temporal discrimination: rejecting slower runners while admitting faster ones. In statistics, filtering applies statistical discrimination. For example, a median-5 filter admits the median value of every set of five samples and rejects the other four. These filters are applicable to image processing and financial analysis. In the frequency domain, filtering applies frequency discrimination. Filters reject or accept data components based on how frequently they change. Frequency Discriminating Filters Apply a filter when you want to discriminate data based on frequency. General Examples • Rejecting (eliminate or filter out) frequency components. • Accepting (admit or pass) frequency components. • Boosting the energy level (magnitude) of some components. Specific Examples • Filtering out a DC offset (a constant offset) • Filtering (notching) out 60Hz noise • Filtering out components outside of a microphone, accelerometer, or other transducers’ specifications • Filtering out components outside of an audio speaker or other shakers’ specifications • Equalizing an audio recording by attenuating some frequency bands and boosting others • Smoothing a noisy data sequence by attenuating rapidly changing (high-frequency) components Types of Filter Operations A filter performs some operation on a data sequence to yield a new sequence. The operation could be anything (or nothing in the case of an identity operation). However, there are some filters that engineers use so frequently that they have a title. Low-Pass Filter A low-pass filter attenuates high-frequency components above a specified corner frequency (f[c]) and allows low-frequency components below f[c] to pass through. For example, a low-pass filter can clean up (or smooth) data contaminated with noise so that patterns are more readily apparent. High-Pass Filter A high-pass filter attenuates low-frequency components below a specified corner frequency and allows the high-frequency components above it to pass through. For example, if data have a large DC offset, a high-pass filter with a small corner frequency can remove the offset so that patterns of interest are more readily apparent. Bandpass Filter A bandpass filter attenuates low and high frequencies and allows the middle frequencies to pass through. Notch Filter A notch filter is the reverse of a bandpass filter. A notch filter rejects a band of frequencies and accepts everything outside the rejected band. Filter Characteristics Corner Frequency Typically, the corner frequency of a filter response H(ω) is the frequency ωc, where the spectral power |H(ω)|^2 has dropped from |H(ω)|^2 = 1 to |H(ω)|^2 = ½ or |H(ω)|=1/√2. On a decibel (dB) scale, the spectral power drops from 10log[10](1) = 0dB to: There are no perfect filters in the real world—that is, none drop straight down after reaching their corner frequency. Instead, they roll off at some slope. Real-world filters in the frequency domain are not like cliffs but like hills. This hill-like roll-off at the end leaves the upper 5% frequencies of the data invalid. How Filters Work Filtering in Time with Convolution Often, a computer uses simple arithmetic to perform frequency-domain filtering. Specifically, each output y(n) is often a linear combination of inputs x(n). This type of filtering is called For example, suppose we want to filter out direct current (DC). A simple filter with input x(n) and output y(n) will set y(n) = x(n) – x(n-1). Then, any DC signal at the input (x(n) = constant) will be filtered out because y(n) = constant – constant = 0. Note: this filter is a digital differentiation. Filtering in Frequency with Multiplication Engineers may also filter a data sequence by first projecting it onto a basis. Often, this basis is a sequence of sines and cosines, and the tool for the projection is the fast Fourier transform According to the convolution theorem, filtering data in the time domain can be performed in the frequency domain by simple point-by-point multiplication. Before the multiplication in frequency can be carried out, we must first transform the data from time to frequency with the FFT. Filter Architectures Mathematics and engineering often synthesize filtering using polynomials. Mathematics: By the Weierstrass approximation theorem, any continuous function on a closed interval (0 ≤ time ≤ 10) can be approximated to arbitrary precision using polynomials. Engineering: A polynomial can be implemented with addition and multiplication only. It is easy to implement both operations using digital logic gates in CPUs, DSPs, FPGAs, ASICs, etc. For example, y = 5x^2 + 3x + 2 means: • Multiply 5 and x and x to get 5x^2 • Multiply 3 and x to get 3x • Add 5x^2 and 3x and 2 to get y Filters are often designed as polynomials over z, where z^-n represents a delay by n samples. Therefore, the z^-n factors can be implemented into hardware with a simple memory buffer. If a filter in the z-domain has the transfer function: Then, in the z-domain: In the time domain: Therefore, at time n, the filter output y(n) equals:
{"url":"https://vru.vibrationresearch.com/lesson/filtering/","timestamp":"2024-11-03T22:36:07Z","content_type":"text/html","content_length":"78990","record_id":"<urn:uuid:d188650c-5210-4bcb-ad85-cdcb8b3ebd76>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00868.warc.gz"}
Optimization for Machine Learning Crash Course Last Updated on October 17, 2021 Optimization for Machine Learning Crash Course. Find function optima with Python in 7 days. All machine learning models involve optimization. As a practitioner, we optimize for the most suitable hyperparameters or the subset of features. Decision tree algorithm optimize for the split. Neural network optimize for the weight. Most likely, we use computational algorithms to optimize. There are many ways to optimize numerically. SciPy has a number of functions handy for this. We can also try to implement the optimization algorithms on our own. In this crash course, you will discover how you can get started and confidently run algorithms to optimize a function with Python in seven days. This is a big and important post. You might want to bookmark it. Kick-start your project with my new book Optimization for Machine Learning, including step-by-step tutorials and the Python source code files for all examples. Let’s get started. Who Is This Crash-Course For? Before we get started, let’s make sure you are in the right place. This course is for developers that may know some applied machine learning. Perhaps you have built some models and did some projects end-to-end, or modified from existing example code from popular tools to solve your own problem. The lessons in this course do assume a few things about you, such as: You know your way around basic Python for programming. You may know some basic NumPy for array manipulation. You heard about gradient descent, simulated annealing, BFGS, or some other optimization algorithms and want to deepen your understanding. You do NOT need to be: A math wiz! A machine learning expert! This crash course will take you from a developer who knows a little machine learning to a developer who can effectively and competently apply function optimization algorithms. Note: This crash course assumes you have a working Python 3 SciPy environment with at least NumPy installed. If you need help with your environment, you can follow the step-by-step tutorial here: How to Set Up Your Python Environment for Machine Learning With Anaconda Crash-Course Overview This crash course is broken down into seven lessons. You could complete one lesson per day (recommended) or complete all of the lessons in one day (hardcore). It really depends on the time you have available and your level of enthusiasm. Below is a list of the seven lessons that will get you started and productive with optimization in Python: Lesson 01: Why optimize? Lesson 02: Grid search Lesson 03: Optimization algorithms in SciPy Lesson 04: BFGS algorithm Lesson 05: Hill-climbing algorithm Lesson 06: Simulated annealing Lesson 07: Gradient descent Each lesson could take you 60 seconds or up to 30 minutes. Take your time and complete the lessons at your own pace. Ask questions, and even post results in the comments below. The lessons might expect you to go off and find out how to do things. I will give you hints, but part of the point of each lesson is to force you to learn where to go to look for help with and about the algorithms and the best-of-breed tools in Python. (Hint: I have all of the answers on this blog; use the search box.) Post your results in the comments; I’ll cheer you on! Hang in there; don’t give up. Lesson 01: Why optimize? In this lesson, you will discover why and when we want to do optimization. Machine learning is different from other kinds of software projects in the sense that it is less trivial on how we should write the program. A toy example in programming is to write a for loop to print numbers from 1 to 100. You know exactly you need a variable to count, and there should be 100 iterations of the loop to count. A toy example in machine learning is to use neural network for regression, but you have no idea how many iterations you need exactly to train the model. You might set it too few or too many and you don’t have a rule to tell what is the right number. Hence many people consider machine learning models as a black box. The consequence is that, while the model has many variables that we can tune (the hyperparameters, for example) we do not know what should be the correct values until we tested it out. In this lesson, you will discover why machine learning practitioners should study optimization to improve their skills and capabilities. Optimization is also called function optimization in mathematics that aimed to locate the maximum or minimum value of certain function. For different nature of the function, different methods can be applied. Machine learning is about developing predictive models. Whether one model is better than another, we have some evaluation metrics to measure a model’s performance subject to a particular data set. In this sense, if we consider the parameters that created the model as the input, the inner algorithm of the model and the data set in concern as constants, and the metric that evaluated from the model as the output, then we have a function constructed. Take decision tree as an example. We know it is a binary tree because every intermediate node is asking a yes-no question. This is constant and we cannot change it. But how deep this tree should be is a hyperparameter that we can control. What features and how many features from the data we allow the decision tree to use is another. A different value for these hyperparameters will change the decision tree model, which in turn gives a different metric, such as average accuracy from k-fold cross validation in classification problems. Then we have a function defined that takes the hyperparameters as input and the accuracy as output. From the perspective of the decision tree library, once you provided the hyperparameters and the training data, it can also consider them as constants and the selection of features and the thresholds for split at every node as input. The metric is still the output here because the decision tree library shared the same goal of making the best prediction. Therefore, the library also has a function defined, but different from the one mentioned above. The function here does not mean you need to explicitly define a function in the programming language. A conceptual one is suffice. What we want to do next is to manipulate on the input and check the output until we found the best output is achieved. In case of machine learning, the best can mean Highest accuracy, or precision, or recall Largest AUC of ROC Greatest F1 score in classification or R2 score in regression Least error, or log-loss or something else in this line. We can manipulate the input by random methods such as sampling or random perturbation. We can also assume the function has certain properties and try out a sequence of inputs to exploit these properties. Of course, we can also check all possible input and as we exhausted the possibility, we will know the best answer. These are the basics of why we want to do optimization, what it is about, and how we can do it. You may not notice it, but training a machine learning model is doing optimization. You may also explicitly perform optimization to select features or fine-tune hyperparameters. As you can see, optimization is useful in machine learning. Your Task For this lesson, you must find a machine learning model and list three examples that optimization might be used or might help in training and using the model. These may be related to some of the reasons above, or they may be your own personal motivations. Post your answer in the comments below. I would love to see what you come up with. In the next lesson, you will discover how to perform grid search on an arbitrary function. Lesson 02: Grid searcch In this lesson, you will discover grid search for optimization. Let’s start with this function: f (x, y) = x2 + y2 This is a function with two-dimensional input (x, y) and one-dimensional output. What can we do to find the minimum of this function? In other words, for what x and y, we can have the least f (x, y)? Without looking at what f (x, y) is, we can first assume the x and y are in some bounded region, say, from -5 to +5. Then we can check for every combination of x and y in this range. If we remember the value of f (x, y) and keep track on the least we ever saw, then we can find the minimum of it after exhausting the region. In Python code, it is like this: from numpy import arange, inf # objective function def objective(x, y): return x**2.0 + y**2.0 # define range for input r_min, r_max = -5.0, 5.0 # generate a grid sample from the domain sample = list() step = 0.1 for x in arange(r_min, r_max+step, step): for y in arange(r_min, r_max+step, step): # evaluate the sample best_eval = inf best_x, best_y = None, None for x,y in sample: eval = objective(x,y) if eval < best_eval: best_x = x best_y = y best_eval = eval # summarize best solution print(‘Best: f(%.5f,%.5f) = %.5f’ % (best_x, best_y, best_eval)) This code scan from the lowerbound of the range -5 to upperbound +5 with each step of increment of 0.1. This range is same for both x and y. This will create a large number of samples of the (x, y) pair. These samples are created out of combinations of x and y over a range. If we draw their coordinate on a graph paper, they form a grid, and hence we call this grid search. With the grid of samples, then we evaluate the objective function f (x, y) for every sample of (x, y). We keep track on the value, and remember the least we ever saw. Once we exhausted the samples on the grid, we recall the least value that we found as the result of the optimization. Your Task For this lesson, you should lookup how to use numpy.meshgrid() function and rewrite the example code. Then you can try to replace the objective function into f (x, y, z) = (x – y + 1)2 + z2, which is a function with 3D input. Post your answer in the comments below. I would love to see what you come up with. In the next lesson, you will learn how to use scipy to optimize a function. Lesson 03: Optimization algorithms in SciPy In this lesson, you will discover how you can make use of SciPy to optimize your function. There are a lot of optimization algorithms in the literature. Each has its strengths and weaknesses, and each is good for a different kind of situation. Reusing the same function we introduced in the previous lesson, f (x, y) = x2 + y2 we can make use of some predefined algorithms in SciPy to find its minimum. Probably the easiest is the Nelder-Mead algorithm. This algorithm is based on a series of rules to determine how to explore the surface of the function. Without going into the detail, we can simply call SciPy and apply Nelder-Mead algorithm to find a function’s minimum: from scipy.optimize import minimize from numpy.random import rand # objective function def objective(x): return x[0]**2.0 + x[1]**2.0 # define range for input r_min, r_max = -5.0, 5.0 # define the starting point as a random sample from the domain pt = r_min + rand(2) * (r_max – r_min) # perform the search result = minimize(objective, pt, method=’nelder-mead’) # summarize the result print(‘Status : %s’ % result[‘message’]) print(‘Total Evaluations: %d’ % result[‘nfev’]) # evaluate solution solution = result[‘x’] evaluation = objective(solution) print(‘Solution: f(%s) = %.5f’ % (solution, evaluation)) In the code above, we need to write our function with a single vector argument. Hence virtually the function becomes f (x[0], x[1]) = (x[0])2 + (x[1])2 Nelder-Mead algorithm needs a starting point. We choose a random point in the range of -5 to +5 for that (rand(2) is numpy’s way to generate a random coordinate pair between 0 and 1). The function minimize() returns a OptimizeResult object, which contains information about the result that is accessible via keys. The “message” key provides a human-readable message about the success or failure of the search, and the “nfev” key tells the number of function evaluations performed in the course of optimization. The most important one is “x” key, which specifies the input values that attained the minimum. Nelder-Mead algorithm works well for convex functions, which the shape is smooth and like a basin. For more complex function, the algorithm may stuck at a local optimum but fail to find the real global optimum. Your Task For this lesson, you should replace the objective function in the example code above with the following: from numpy import e, pi, cos, sqrt, exp def objective(v): x, y = v return ( -20.0 * exp(-0.2 * sqrt(0.5 * (x**2 + y**2))) – exp(0.5 * (cos(2 * pi C *x)+cos(2*pi*y))) + e + 20 ) This defined the Ackley function. The global minimum is at v=[0,0]. However, Nelder-Mead most likely cannot find it because this function has many local minima. Try repeat your code a few times and observe the output. You should get a different output each time you run the program. Post your answer in the comments below. I would love to see what you come up with. In the next lesson, you will learn how to use the same SciPy function to apply a different optimization algorithm. Lesson 04: BFGS algorithm In this lesson, you will discover how you can make use of SciPy to apply BFGS algorithm to optimize your function. As we have seen in the previous lesson, we can make use of the minimize() function from scipy.optimize to optimize a function using Nelder-Meadd algorithm. This is the simple “pattern search” algorithm that does not need to know the derivatives of a function. First-order derivative means to differentiate the objective function once. Similarly, second-order derivative is to differentiate the first-order derivative one more time. If we have the second-order derivative of the objective function, we can apply the Newton’s method to find its optimum. There is another class of optimization algorithm that can approximate the second-order derivative from the first order derivative, and use the approximation to optimize the objective function. They are called the quasi-Newton methods. BFGS is the most famous one of this class. Revisiting the same objective function that we used in previous lessons, f (x, y) = x2 + y2 we can tell that the first-order derivative is: ∇f = [2x, 2y] This is a vector of two components, because the function f (x, y) receives a vector value of two components (x, y) and returns a scalar value. If we create a new function for the first-order derivative, we can call SciPy and apply the BFGS algorithm: from scipy.optimize import minimize from numpy.random import rand # objective function def objective(x): return x[0]**2.0 + x[1]**2.0 # derivative of the objective function def derivative(x): return [x[0] * 2, x[1] * 2] # define range for input r_min, r_max = -5.0, 5.0 # define the starting point as a random sample from the domain pt = r_min + rand(2) * (r_max – r_min) # perform the bfgs algorithm search result = minimize(objective, pt, method=’BFGS’, jac=derivative) # summarize the result print(‘Status : %s’ % result[‘message’]) print(‘Total Evaluations: %d’ % result[‘nfev’]) # evaluate solution solution = result[‘x’] evaluation = objective(solution) print(‘Solution: f(%s) = %.5f’ % (solution, evaluation)) The first-order derivative of the objective function is provided to the minimize() function with the “jac” argument. The argument is named after Jacobian matrix, which is how we call the first-order derivative of a function that takes a vector and returns a vector. The BFGS algorithm will make use of the first-order derivative to compute the inverse of the Hessian matrix (i.e., the second-order derivative of a vector function) and use it to find the optima. Besides BFGS, there is also L-BFGS-B. It is a version of the former that uses less memory (the “L”) and the domain is bounded to a region (the “B”). To use this variant, we simply replace the name of the method: result = minimize(objective, pt, method=’L-BFGS-B’, jac=derivative) Your Task For this lesson, you should create a function with much more parameters (i.e., the vector argument to the function is much more than two components) and observe the performance of BFGS and L-BFGS-B. Do you notice the difference in speed? How different are the result from these two methods? What happen if your function is not convex but have many local optima? Post your answer in the comments below. I would love to see what you come up with. Lesson 05: Hill-climbing algorithm In this lesson, you will discover how to implement hill-climbing algorithm and use it to optimize your function. The idea of hill-climbing is to start from a point on the objective function. Then we move the point a bit in a random direction. In case the move allows us to find a better solution, we keep the new position. Otherwise we stay with the old. After enough iterations of doing this, we should be close enough to the optimum of this objective function. The progress is named because it is like we are climbing on a hill, which we keep going up (or down) in any direction whenever we can. In Python, we can write the above hill-climbing algorithm for minimization as a function: from numpy.random import randn def in_bounds(point, bounds): # enumerate all dimensions of the point for d in range(len(bounds)): # check if out of bounds for this dimension if point[d] < bounds[d, 0] or point[d] > bounds[d, 1]: return False return True def hillclimbing(objective, bounds, n_iterations, step_size): # generate an initial point solution = None while solution is None or not in_bounds(solution, bounds): solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] – bounds[:, 0]) # evaluate the initial point solution_eval = objective(solution) # run the hill climb for i in range(n_iterations): # take a step candidate = None while candidate is None or not in_bounds(candidate, bounds): candidate = solution + randn(len(bounds)) * step_size # evaluate candidate point candidte_eval = objective(candidate) # check if we should keep the new point if candidte_eval <= solution_eval: # store the new point solution, solution_eval = candidate, candidte_eval # report progress print(‘>%d f(%s) = %.5f’ % (i, solution, solution_eval)) return [solution, solution_eval] This function allows any objective function to be passed as long as it takes a vector and returns a scalar value. The “bounds” argument should be a numpy array of n×2 dimension, which n is the size of the vector that the objective function expects. It tells the lower- and upper-bound of the range we should look for the minimum. For example, we can set up the bound as follows for the objective function that expects two dimensional vectors (like the one in the previous lesson) and the components of the vector to be between -5 to +5: bounds = np.asarray([[-5.0, 5.0], [-5.0, 5.0]]) This “hillclimbing” function will randomly pick an initial point within the bound, then test the objective function in iterations. Whenever it can find the objective function yields a less value, the solution is remembered and the next point to test is generated from its neighborhood. Your Task For this lesson, you should provide your own objective function (such as copy over the one from previous lesson), set up the “n_iterations” and “step_size” and apply the “hillclimbing” function to find the minimum. Observe how the algorithm finds a solution. Try with different values of “step_size” and compare the number of iterations needed to reach the proximity of the final solution. Post your answer in the comments below. I would love to see what you come up with. Lesson 06: Simulated annealing In this lesson, you will discover how simulated annealing works and how to use it. For the non-convex functions, the algorithms you learned in previous lessons may be trapped easily at local optima and failed to find the global optima. The reason is because of the greedy nature of the algorithm: Whenever a better solution is found, it will not let go. Hence if a even better solution exists but not in the proximity, the algorithm will fail to find it. Simulated annealing try to improve on this behavior by making a balance between exploration and exploitation. At the beginning, when the algorithm is not knowing much about the function to optimize, it prefers to explore other solutions rather than stay with the best solution found. At later stage, as more solutions are explored the chance of finding even better solutions is diminished, the algorithm will prefer to remain in the neighborhood of the best solution it found. The following is the implementation of simulated annealing as a Python function: from numpy.random import randn, rand def simulated_annealing(objective, bounds, n_iterations, step_size, temp): # generate an initial point best = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] – bounds[:, 0]) # evaluate the initial point best_eval = objective(best) # current working solution curr, curr_eval = best, best_eval # run the algorithm for i in range(n_iterations): # take a step candidate = curr + randn(len(bounds)) * step_size # evaluate candidate point candidate_eval = objective(candidate) # check for new best solution if candidate_eval < best_eval: # store new best point best, best_eval = candidate, candidate_eval # report progress print(‘>%d f(%s) = %.5f’ % (i, best, best_eval)) # difference between candidate and current point evaluation diff = candidate_eval – curr_eval # calculate temperature for current epoch t = temp / float(i + 1) # calculate metropolis acceptance criterion metropolis = exp(-diff / t) # check if we should keep the new point if diff < 0 or rand() < metropolis: # store the new current point curr, curr_eval = candidate, candidate_eval return [best, best_eval] Similar to the hill-climbing algorithm in the previous lesson, the function starts with a random initial point. Also similar to that in previous lesson, the algorithm runs in loops prescribed by the count “n_iterations”. In each iteration, a random neighborhood point of the current point is picked and the objective function is evaluated on it. The best solution ever found is remembered in the variable “best” and “best_eval”. The difference to the hill-climbing algorithm is that, the current point “curr” in each iteration is not necessarily the best solution. Whether the point is moved to a neighborhood or stay depends on a probability that related to the number of iterations we did and how much improvement the neighborhood can make. Because of this stochastic nature, we have a chance to get out of the local minima for a better solution. Finally, regardless where we end up, we always return the best solution ever found among the iterations of the simulated annealing algorithm. In fact, most of the hyperparameter tuning or feature selection problems are encountered in machine learning are not convex. Hence simulated annealing should be more suitable then hill-climbing for these optimization problems. Your Task For this lesson, you should repeat the exercise you did in the previous lesson with the simulated annealing code above. Try with the objective function f (x, y) = x2 + y2, which is a convex one. Do you see simulated annealing or hill climbing takes less iteration? Replace the objective function with the Ackley function introduced in Lesson 03. Do you see the minimum found by simulated annealing or hill climbing is smaller? Post your answer in the comments below. I would love to see what you come up with. Lesson 07: Gradient descent In this lesson, you will discover how you can implement gradient descent algorithm. Gradient descent algorithm is the algorithm used to train a neural network. Although there are many variants, all of them are based on gradient, or the first-order derivative, of the function. The idea lies in the physical meaning of a gradient of a function. If the function takes a vector and returns a scalar value, the gradient of the function at any point will tell you the direction that the function is increased the fastest. Hence if we aimed at finding the minimum of the function, the direction we should explore is the exact opposite of the gradient. In mathematical equation, if we are looking for the minimum of f (x), where x is a vector, and the gradient of f (x) is denoted by ∇f (x) (which is also a vector), then we know xnew = x – α × ∇f (x) will be closer to the minimum than x. Now let’s try to implement this in Python. Reusing the sample objective function and its derivative we learned in Day 4, this is the gradient descent algorithm and its use to find the minimum of the objective function: from numpy import asarray from numpy import arange from numpy.random import rand # objective function def objective(x): return x[0]**2.0 + x[1]**2.0 # derivative of the objective function def derivative(x): return asarray([x[0]*2, x[1]*2]) # gradient descent algorithm def gradient_descent(objective, derivative, bounds, n_iter, step_size): # generate an initial point solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] – bounds[:, 0]) # run the gradient descent for i in range(n_iter): # calculate gradient gradient = derivative(solution) # take a step solution = solution – step_size * gradient # evaluate candidate point solution_eval = objective(solution) # report progress print(‘>%d f(%s) = %.5f’ % (i, solution, solution_eval)) return [solution, solution_eval] # define range for input bounds = asarray([[-5.0, 5.0], [-5.0, 5.0]]) # define the total iterations n_iter = 40 # define the step size step_size = 0.1 # perform the gradient descent search solution, solution_eval = gradient_descent(objective, derivative, bounds, n_iter, step_size) print(“Solution: f(%s) = %.5f” % (solution, solution_eval)) This algorithm depends on not only the objective function but also its derivative. Hence it may not suitable for all kinds of problems. This algorithm also sensitive to the step size, which a too large step size with respect to the objective function may cause the gradient descent algorithm fail to converge. If this happens, we will see the progress is not moving toward lower value. There are several variations to make gradient descent algorithm more robust, for example: Add a momentum into the process, which the move is not only following the gradient but also partially the average of gradients in previous iterations. Make the step sizes different for each component of the vector x Make the step size adaptive to the progress Your Task For this lesson, you should run the example program above with a different “step_size” and “n_iter” and observe the difference in the progress of the algorithm. At what “step_size” you will see the above program not converge? Then try to add a new parameter β to the gradient_descent() function as the momentum weight, which the update rule now becomes xnew = x – α × ∇f (x) – β × g where g is the average of ∇f (x) in, for example, five previous iterations. Do you see any improvement to this optimization? Is it a suitable example for using momentum? Post your answer in the comments below. I would love to see what you come up with. This was the final lesson. The End! (Look How Far You Have Come) You made it. Well done! Take a moment and look back at how far you have come. You discovered: The importance of optimization in applied machine learning. How to do grid search to optimize by exhausting all possible solutions. How to use SciPy to optimize your own function. How to implement hill-climbing algorithm for optimization. How to use simulated annealing algorithm for optimization. What is gradient descent, how to use it, and some variation of this algorithm. How did you do with the mini-course? Did you enjoy this crash course? Do you have any questions? Were there any sticking points? Let me know. Leave a comment below. The post Optimization for Machine Learning Crash Course appeared first on Machine Learning Mastery. Read MoreMachine Learning Mastery Recent Comments
{"url":"https://dataintegration.info/optimization-for-machine-learning-crash-course","timestamp":"2024-11-09T15:54:15Z","content_type":"text/html","content_length":"360182","record_id":"<urn:uuid:f693f462-0baa-4d40-bac2-12f56f857e0c>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00603.warc.gz"}
Risk-free interest rate Archives One of the most fundamental concepts in finance is that risk and return are correlated. We touched on this a tiny bit in one of the early MBA Mondays posts. But I'd like to dig a bit deeper on this concept today. Here's a chart I found on the Internet (where else?) that shows a bunch of portfolios of financial assets plotted on chart. As you can see portfolio 4 has the lowest risk and the lowest return. Portfolio 10 has the highest risk and the highest return. While you can't draw a straight line between all of them, meaning that risk and return aren't always perfectly correlated, you can see that there is a direct relationship between risk and return. This makes sense if you think about it. We don't expect to make much interest on bank deposits that are guaranteed by the federal government (although maybe we should). But we do expect to make a big return on an investment in a startup company. There is a formula well known to finance students called the Capital Asset Pricing Model which describes the relationship between risk and return. This model says that: Expected Return On An Asset = Risk Free Rate + Beta (Expect Market Return – Risk Free Rate) I don't want to dig too deeply into this model, click on the link on the model above to go to WIkipedia for a deeper dive. But I do want to talk a bit about the formula to extract the notion of risk and return. The formula says your expected return on an asset (bank account, bond, stock, venture deal, real estate deal) is equal to the risk free rate (treasury bills or an insured bank account) plus a coefficient (called Beta) times the "market premium." Basically the formula says the more risk you take (Beta) the more return you will get. You may have heard this term Beta in popular speak. "That's a high beta stock" is a common refrain. It means that it is a risky asset. Beta (another Wikipedia link) is a quantitative measure of risk. It's formula is: Covariance (asset, portfolio)/Variance (portfolio) I've probably lost most everyone who isn't a math/stats geek by now. In an attempt to get you all back, Beta is a measure of volatility. The more an asset's returns move around in ways that are driven by the underlying market (the covariance), the higher the Beta and the risk will be. So, when you think about returns, think about them in the context of risk. You can get to higher returns by taking on higher risk. And to some degree we should. It doesn't make sense for a young person to put all of their savings in a bank account unless they will need them soon. Because they can make a greater return by putting them into something where there is more risk. But we must also understand that risk means risk of loss, either partial or in some cases total loss. Markets get out of whack sometimes. The tech stock market got out of whack in the late 90s. The subprime mortgage market got out of whack in the middle of the last decade. When you invest in those kinds of markets, you are taking on a lot of risk. Markets that go up will at some point come down. So if you go out on the risk/reward curve in search of higher returns, understand that you are taking on more risk. That means risk of loss. Next week we will talk about diversification. One of my favorite risk mitigation strategies.
{"url":"https://avc.com/tag/risk-free-interest-rate/","timestamp":"2024-11-12T02:49:39Z","content_type":"text/html","content_length":"32792","record_id":"<urn:uuid:8952401a-073a-4d46-9abd-8be0b90b790a>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00214.warc.gz"}
Investigation and optimization of the performance of gravity water wheels Water wheels are rotating hydraulic machines that were introduced thousands of years ago to generate energy from water. Gravity water wheels are driven by the weight of the water flow and a portion of the flow kinetic energy. In the last decades, due to the increasing diffusion of micro hydropower plants (installed power less than 100 kW), gravity water wheels are being recognized as attractive hydraulic machines to produce electricity. Unfortunately, most of the engineering knowledge on water wheels is dated back to the XIX century, with several gaps and uncertainty. Additional work is still needed to fully understand the power losses and the performance within water wheels, that could lead to further improvements in efficiency. The scope of the present thesis is the investigation and improvement of the performance of gravity water wheels. This aim was achieved using physical experiments to quantify water wheels performance under different hydraulic conditions, theoretical models to estimate and predict the efficiency, and numerical simulations to optimize the design. Undershot, breastshot and overshot water wheels were investigated, in order to give a wide overview on all the kinds of gravity water wheels. Sagebien and Zuppinger undershot wheels were investigated at Southampton University, under the supervision of prof. Gerald Muller, from October 2015 until April 2016. These two wheels differ based on the shape of the blades. The blades of Sagebien wheels are optimized to reduce the inflow power losses, while those of Zuppinger wheels are conceived to minimize the outflow power losses. The objective of the experiments was to understand which of the two designs is better in term of efficiency. The tests showed that the Sagebien type exhibits a more constant efficiency as a function of the flow rate and the hydraulic head than the Zuppinger type. The maximum efficiency (excluding leakages) was identified as 88%. Breastshot water wheels were investigated experimentally, theoretically and using numerical Computational Fluid Dynamic (CFD) methods at Politecnico di Torino. The maximum experimental efficiency was estimated as 75% using a sluice gate inflow. A vertical inflow weir was also investigated, and found to have a more constant efficiency versus the rotational speed of the wheel, but with similar maximum values. A theoretical model that was developed to estimate the power output, power losses and efficiency, had a discrepancy with the experiments of 8%. A dimensionless law was also developed to estimate the power output. Numerical CFD simulations were performed to understand the effects of the number and shape of the blades on the efficiency. The optimal number of blades was 48 for the investigated wheel, and the efficiency can be improved using a circular shape. The numerical discrepancy with experiments was less than 6%. Overshot water wheels were investigated using a similar approach as done for breastshot wheels, and were found to have a maximum experimental efficiency of 85%. A theoretical model was developed to estimate the power losses and the efficiency, in particular to quantify the volumetric losses at the top of the wheel, that is the fraction of the flow which can not enter into the buckets and that is lost. Then, numerical simulations will be started to try to improve the wheel efficiency, reducing the previous volumetric losses. More specifically, a circular wall around the periphery of the wheel was added to the original design, leading to a performance improvement up to 60%. The results of this work show that water wheels can be considered attractive hydropower converters. Investigation and optimization of the performance of gravity water wheels / Quaranta, Emanuele. - (2017). [10.6092/polito/porto/2674225] Investigation and optimization of the performance of gravity water wheels Water wheels are rotating hydraulic machines that were introduced thousands of years ago to generate energy from water. Gravity water wheels are driven by the weight of the water flow and a portion of the flow kinetic energy. In the last decades, due to the increasing diffusion of micro hydropower plants (installed power less than 100 kW), gravity water wheels are being recognized as attractive hydraulic machines to produce electricity. Unfortunately, most of the engineering knowledge on water wheels is dated back to the XIX century, with several gaps and uncertainty. Additional work is still needed to fully understand the power losses and the performance within water wheels, that could lead to further improvements in efficiency. The scope of the present thesis is the investigation and improvement of the performance of gravity water wheels. This aim was achieved using physical experiments to quantify water wheels performance under different hydraulic conditions, theoretical models to estimate and predict the efficiency, and numerical simulations to optimize the design. Undershot, breastshot and overshot water wheels were investigated, in order to give a wide overview on all the kinds of gravity water wheels. Sagebien and Zuppinger undershot wheels were investigated at Southampton University, under the supervision of prof. Gerald Muller, from October 2015 until April 2016. These two wheels differ based on the shape of the blades. The blades of Sagebien wheels are optimized to reduce the inflow power losses, while those of Zuppinger wheels are conceived to minimize the outflow power losses. The objective of the experiments was to understand which of the two designs is better in term of efficiency. The tests showed that the Sagebien type exhibits a more constant efficiency as a function of the flow rate and the hydraulic head than the Zuppinger type. The maximum efficiency (excluding leakages) was identified as 88%. Breastshot water wheels were investigated experimentally, theoretically and using numerical Computational Fluid Dynamic (CFD) methods at Politecnico di Torino. The maximum experimental efficiency was estimated as 75% using a sluice gate inflow. A vertical inflow weir was also investigated, and found to have a more constant efficiency versus the rotational speed of the wheel, but with similar maximum values. A theoretical model that was developed to estimate the power output, power losses and efficiency, had a discrepancy with the experiments of 8%. A dimensionless law was also developed to estimate the power output. Numerical CFD simulations were performed to understand the effects of the number and shape of the blades on the efficiency. The optimal number of blades was 48 for the investigated wheel, and the efficiency can be improved using a circular shape. The numerical discrepancy with experiments was less than 6%. Overshot water wheels were investigated using a similar approach as done for breastshot wheels, and were found to have a maximum experimental efficiency of 85%. A theoretical model was developed to estimate the power losses and the efficiency, in particular to quantify the volumetric losses at the top of the wheel, that is the fraction of the flow which can not enter into the buckets and that is lost. Then, numerical simulations will be started to try to improve the wheel efficiency, reducing the previous volumetric losses. More specifically, a circular wall around the periphery of the wheel was added to the original design, leading to a performance improvement up to 60%. The results of this work show that water wheels can be considered attractive hydropower converters. File in questo prodotto: File Dimensione Formato tesi_Emanuele Quaranta.pdf accesso aperto Descrizione: Doctoral thesis_Emanuele Quaranta Tipologia: Tesi di dottorato 27.18 MB Adobe PDF Visualizza/Apri Licenza: PUBBLICO - Tutti i diritti riservati Dimensione 27.18 MB Formato Adobe PDF Pubblicazioni consigliate I documenti in IRIS sono protetti da copyright e tutti i diritti sono riservati, salvo diversa indicazione. Utilizza questo identificativo per citare o creare un link a questo documento: https://hdl.handle.net/11583/2674225 Attenzione! I dati visualizzati non sono stati sottoposti a validazione da parte dell'ateneo
{"url":"https://iris.polito.it/handle/11583/2674225","timestamp":"2024-11-04T12:05:08Z","content_type":"text/html","content_length":"58031","record_id":"<urn:uuid:644f0bad-035f-4015-a946-fcdedb3e893a>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00066.warc.gz"}
1st Grade Math, Curriculum & Online Math Classes for Grade 1 @BYJUS Numbers in words It is important for students to learn how numbers are expressed in words as this will allow them to recognize numbers among others words while reading study materials. Number names: 1 to 50 A number name or a numeral is a name that describes numbers. Here, 1st grade math students will be introduced to the numbers names of numbers from 1 to 50. 12 in words Students will learn how to write the number ‘12’ in words. 15 in words Grade 1 math students will learn how to write the number ‘15’ in words. 30 in words Students will learn how to write the number ‘30’ in words. Counting is the first math concept that students will learn to apply in real life. Connecting numbers with real objects will help them not only retain the sequence of numbers but develop number Concept of counting The best method for first graders to retain the numbers they have learned is to find places where they can use numbers in real life. Students will start learning patterns among numbers by using them for counting. Cardinal numbers Cardinal numbers or cardinals are numbers that are used for counting and indicate the quantity or how many things are there. With the introduction to cardinal numbers, students will learn ways to count physical objects and handle quantitative information around them — an important organizational skill. Learning this simple concept in first grade helps students kick start their mathematical learning journey in a meaningful manner. Odd and even numbers There are multiple patterns hidden among numbers. The concept of odd and even numbers is usually the first pattern that students discover among numbers. This concept is really important as it forms the foundation of math operations like multiplication and division.
{"url":"https://byjus.com/us/math/1st-grade-math/","timestamp":"2024-11-03T14:19:42Z","content_type":"text/html","content_length":"207055","record_id":"<urn:uuid:27a3a3a0-7908-4ada-acd4-c77716b5cee2>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00545.warc.gz"}
IV Rank vs IV Percentile: Which is Better for Options Trading? Implied Volatility (IV) holds significant importance in options trading. It embodies the market’s expectations regarding an asset’s future price fluctuations, directly influencing the option’s premium. Two prevalent indicators are utilized to measure implied volatility in relation to its historical levels: the IV Rank (Implied Volatility Rank) and IV Percentile (Implied Volatility In this insightful blog post, we will delve into these two indicators, examining their similarities and differences and comparing their application performance. What is Implied Volatility Rank (IV Rank)? a. The Concept of IV Rank The core principle behind Implied Volatility Rank (IV Rank) is to compare the current implied volatility with the highest and lowest implied volatilities during a specified period. b. The Calculation of IV Rank The IV Rank calculation can be expressed with this formula: IV Rank = (Current Implied Volatility – Lowest Implied Volatility) / (Highest Implied Volatility – Lowest Implied Volatility) By using this formula, we arrive at a value ranging between 0 and 1. This value signifies the relative position of the current implied volatility within its historical spectrum. c. The Practical Application of IV Rank with an Example Let’s consider a stock with implied volatility fluctuating between 20% and 40% over the past 12 months. Given that the current implied volatility is 30%, we can compute the IV Rank as follows: IV Rank = (30% – 20%) / (40% – 20%) = 0.5 This result indicates that the current implied volatility occupies the midpoint of the implied volatility range documented within the past 12 months. What is Implied Volatility Percentile (IV Percentile)? a. The Concept of IV Percentile Implied Volatility Percentile (IV Percentile) quantifies the proportion of historical implied volatility levels outperformed by the current implied volatility within a specified time frame. Contrasting with IV Rank, IV Percentile considers the distribution of past implied volatilities, thus offering a more precise representation. b. The Calculation of IV Percentile The IV Percentile calculation can be expressed with this formula: IV Percentile = (Number of days with IV below the current IV level) / (Total number of days in the period) c. The Practical Application of IV Percentile with an Example Let’s assume we assess the implied volatility of a stock for each trading day over the past 12 months and discover that the current implied volatility is 30%. In these 12 months, 75% of the days exhibit implied volatilities below 30%. Thus, the current implied volatility outperforms the implied volatility levels of 75% of the days within the past year. Consequently, the stock’s IV Percentile is 75. IV Rank vs IV Percentile: Which One is Better? The crucial distinction between IV Rank and IV Percentile stems from their sensitivity toward outliers present in historical implied volatility data. In scenarios where the historical implied volatility data showcases a relatively normal distribution without significant outliers, both IV Rank and IV Percentile can offer valuable insights. However, suppose the historical data includes extreme values that may not accurately depict the current market environment. In that case, IV Percentile may be a more suitable choice, as outliers influence it less. IV Rank vs IV Percentile: A Comparative Example Showcasing Reliability Let’s consider two stocks, Stock A and Stock B. Over the past week, both exhibited a similar range of implied volatilities, between 20% and 40%. However, Stock B experienced an extreme implied volatility spike to 80% on one day due to a rare market event. The implied volatilities for the past week are as follows: Stock A: [20%, 25%, 30%, 35%, 40%] Stock B: [20%, 25%, 30%, 80%, 40%] The current implied volatility for both stocks is 32%. For Stock A, we can calculate the IV Rank as follows: IV Rank = (32% – 20%) / (40% – 20%) = 0.6 For Stock B, we can similarly calculate the IV Rank: IV Rank = (32% – 20%) / (80% – 20%) = 0.2 In this case, IV Rank suggests that the current implied volatility for Stock B is relatively low compared to its historical range. However, this conclusion may be misleading, as the extreme value of 80% distorts the IV Rank calculation. Now let’s calculate the IV Percentile. For Stock A, the current implied volatility of 32% is higher than 60% of the historical values. For Stock B, the current implied volatility of 32% is also higher than 60% of the historical values. IV Percentile for Stock A: 60 IV Percentile for Stock B: 60 Here, IV Percentile offers a more accurate reflection of the current market conditions for Stock B, as the extreme value of 80% less influences it. In this example, we demonstrate that relying on IV Percentile would sometimes lead to more informed decisions. Final Thoughts From the analysis presented, it becomes clear that IV Percentile is a more reliable indicator than IV Rank. It’s crucial not to overlook the possibility of extreme values, as significant events or data releases can readily give rise to such occurrences. Even relatively stable stock indices may be subject to extreme values. The Significant Surge of VIX in 2020 For instance, the VIX—an indicator reflecting the implied volatility of the S&P 500 index—experienced a dramatic surge in 2020 due to the Covid-19 pandemic. If the time frame for measuring IV Rank encompasses this period, it could considerably underestimate the actual relative size of implied volatility. By employing IV Percentile, this issue can be mitigated as much as possible.
{"url":"https://cannytrading.com/iv-rank-vs-iv-percentile/","timestamp":"2024-11-15T00:12:04Z","content_type":"text/html","content_length":"132464","record_id":"<urn:uuid:accf57a2-39ae-4d2e-8533-5c6aa88d148d>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00243.warc.gz"}
Behaviour Manifolds and the Hessian of the Total Loss - Notes and Criticism — AI Alignment Forum In this note I will discuss some computations and observations that I have seen in other posts about "basin broadness/flatness". I am mostly working off the content of the posts Information Loss --> Basin flatness and Basin broadness depends on the size and number of orthogonal features. I will attempt to give one rigorous and unified narrative for core mathematical parts of these posts and I will also attempt to explain my reservations about some aspects of these approaches. This post started out as a series of comments that I had already made on the posts, but I felt it may be worthwhile for me to spell out my position and give my own explanations. Work completed while author was a SERI MATS scholar under the mentorship of Evan Hubinger. Basic Notation and Terminology We will imagine fixing some model architecture and thinking about the loss landscape from a purely mathematical perspective. We will not concern ourselves with the realities of training. Let denote the parameter space of a deep neural network model . This means that each element is a complete set of weights and biases for the model. And suppose that when a set of parameters is fixed, the network maps from an input space to an output space . When it matters below, we will take , but for now let us leave it abstract. So we have a function such that for any , the function is a fixed input-output function implemented by the network. Let be a dataset of training examples. We can then define a function , by This takes as input a set of parameters and returns the behaviour of on the training data. We will think of the loss function as . Example. We could have , , and We also then define what we will call the total loss This is just the usual thing: The total loss over the training data set for a given set of weights and biases. So the graph of is what one might call the 'loss landscape'. Behaviour Manifolds By a behaviour manifold (see [Hebbar]), we mean a set of the form where is a tuple of possible outputs. The idea here is that for a fixed behaviour manifold , all of the models given by parameter sets have identical behaviour on the training data. Assume that is an appropriately smooth -dimensional space and let us now assume that . Suppose that . In this case, at a point at which the Jacobian matrix has full rank, the map is a submersion. The submersion theorem (which - in this context - is little more than the implicit function theorem) tells us that given , if is a submersion in a neighbourhood of a point , then is a smooth -dimensional submanifold in a neighbourhood of . So we conclude that in a neighbourhood of a point in parameter space at which the Jacobian of has full rank, the behaviour manifold is an -dimensional smooth submanifold. Firstly, I want to emphasize that when the Jacobian of does not have full rank, it is generally difficult to make conclusions about the geometry of the level set, i.e. about the set that is called the behaviour manifold in this setting. Examples. The following simple examples are to emphasize that there is not a straightforward intuitive relationship that says "when the Jacobian has less than full rank, there are fewer directions in parameter space along which the behaviour changes and therefore the behaviour manifold is bigger than -dimensional": 1. Consider given by . We have . This has rank 1 everywhere except the origin: At the point it has less than full rank. And at that point, the level set is just a single point, i.e. it is 2. Consider given by We have . Again, this has less than full rank at the point And at that point, the level set is the entire -axis, i.e. it is 1-dimensional. 3. Consider given by We of course have . This has less than full rank everywhere, and the only non-empty level set is the entire of , i.e. 2-dimensional. Remark. We note further, just for the sake of intuition about these kinds of issues, that the geometry of the level set of a smooth function can in general be very bad: Every closed subset is the zero set of some smooth function, i.e. given any closed set , there exists a smooth function with Knowing that a level set is closed is an extremely basic fact and yet without using specific information about the function you are looking at, you cannot conclude anything else. Secondly, the use of the submersion theorem here only makes sense when . But this is not even commonly the case. It is common to have many more data points (the ) than parameters (the ), ultimately meaning that the dimension of is much, much larger than the dimension of the domain of . This suggests a slightly different perspective, which I briefly outline next. Behavioural Space When the codomain is a higher-dimensional space than the domain, we more commonly picture the image of a function, as opposed to the graph, e.g. if I say to consider a smooth function , one more naturally pictures the curve in the plane, as a kind-of 'copy' of the line , as opposed to the graph of . So if one were to try to continue along these lines, one might instead imagine the image of parameter space in the behaviour space We think of each point of as a complete specification of possible outputs on the dataset. Then the image is (loosely speaking) an dimensional submanifold of this space which we should think of as having large codimension. And each point on this submanifold is the outputs of an actual model with parameters . In this setting, the points at which the Jacobian has full rank map to points which have neighbourhoods in which is smooth and embedded. The Hessian of the Total Loss A computation of the Hessian of appears in both Information Loss --> Basin flatness and Basin broadness depends on the size and number of orthogonal features, under slightly different assumptions. Let us carefully go over that computation here, in a slightly greater level of generality. We continue with , in which case . The function we are going to differentiate is: And since each for , we should think of as a matrix, the general entry of which is . We want to differentiate twice with respect to . Firstly, we have for . Then for we differentiate again: This is now an equation of matrices. At A Local Minimum of The Loss Function If is such that is a local minimum for (which means that the parameters are such that the output of the network on the training data is a local minimum for the loss function), then the second term on the right-hand side of (1) vanishes (because the term includes the first derivatives of , which are zero at a minimum). Therefore: If is a local minimum for we have: If, in addition, the Hessian of is equal to the identity matrix (by which we mean - as is the case for the example loss function given above in (*)), then we would have: In Basin broadness depends on the size and number of orthogonal features, the expression on the right-hand side of equation (2) above is referred to as an inner product of "the features over the training data set". I do not understand the use of the word 'features' here and in the remainder of their post. The phrase seems to imply that a function of the form defined on the inputs of the training dataset, is what constitutes a feature. No further explanation is really given. It's completely plausible that I have missed something (and perhaps other readers do not or will not share my confusion) but I would like to see an attempt at a clear and detailed explanation of exactly how this notion is supposed to be the same notion of feature that (say) Anthropic use in their interpretability work (as was claimed to me). I'd like to tentatively try to give some higher-level criticism of these kinds of approaches. This is a tricky thing to do, I admit; it's generally very hard to say that a certain approach is unlikely to yield results, but I will at least try to explain where my skepticism is coming from. The perspective and the computations that are presented here (which in my opinion are representative of the mathematical parts of the linked posts and of various other unnamed posts) do not use any significant facts about neural networks or their architecture. In particular, in the mathematical framework that is set up, the function is more or less just any smooth function. And the methods used are just a few lines of calculus and linear algebra applied to abstract smooth functions. If these are the principal ingredients, then I am naturally led to expect that the conclusions will be relatively straightforward facts that will hold for more or less any smooth function . Such facts may be useful as part of bigger arguments - of course many arguments in mathematics do yield truly significant results using only 'low-level' methods - but in my experience one is extremely unlikely to end up with significant results in this way without it ultimately being clear after the fact where the hard work has happened or what the significant original insight was. So, naively, my expectation at the moment is that in order to arrive at better results about this sort of thing, arguments that start like these ones do must quickly bring to bear substantial mathematical facts about the network, e.g. random initialization, gradient descent, the structure of the network's layers, activations etc. One has to actually use something. I feel (again, speaking naively) that after achieving more success with a mathematical argument along these lines, one's hands would look dirtier. In particular, for what it's worth, I do not expect my suggestion to look at the image of the parameter space in 'behaviour space' to lead (by itself) to any further non-trivial progress. (And I say 'naively' in the preceding sentences here because I do not claim myself to have produced any significant results of the form I am discussing). I agree that the space may well miss important concepts and perspectives. As I say, it is not my suggestion to look at it, but rather just something that was implicitly being done in another post. The space may well be a more natural one. (It's of course the space of functions , and so a space in which 'model space' naturally sits in some sense. ) I think there should be a space both for in-progress research dumps and for more worked out final research reports on the forum. Maybe it would make sense to have separate categories for them or so. New Comment 5 comments, sorted by Click to highlight new comments since: I worry that using as the space of behaviors misses something important about the intuitive idea of robustness, making any conclusions about or or behavior manifolds harder to apply. A more natural space (to illustrate my point, not as something helpful for this post) would be , with a metric that cares about how outputs differ on inputs that fall within a particular base distribution , something like The issue with is that models in a behavior manifold only need to agree on the training inputs, and always include all models with arbitrarily crazy behaviors at all inputs outside the dataset, even if we are talking about inputs very close to those in the dataset (which is what above is supposed to prevent). So the behavior manifolds are more like cylinders than balls, ignoring crucial dimensions. Since generalization does work (so learning tends to find very unusual points of them), it's generally unclear how a behavior manifold as a whole is going to be relevant to what's actually going on. The perspective and the computations that are presented here (which in my opinion are representative of the mathematical parts of the linked posts and of various other unnamed posts) do not use any significant facts about neural networks or their architecture. You're correct that the written portion of the Information Loss --> Basin flatness post doesn't use any non-trivial facts about NNs. The purpose of the written portion was to explain some mathematical groundwork, which is then used for the non-trivial claim. (I did not know at the time that there was a standard name "Submersion theorem". I had also made formal mistakes, which I am glad you pointed out in your comments. The essence was mostly valid though.) The non-trivial claim occurs in the video section of the post, where a sort of degeneracy occuring in ReLU MLPs is examined. I now no longer believe that the precise form of my claim is relevant to practical networks. An approximate form (where low rank is replaced with something similar to low determinant) seems salvageable, though still of dubious value, since I think I have better framings now. Secondly, the use of the submersion theorem here only makes sense when . Agreed. I was addressing the overparameterized case, not the underparameterized one. In hindsight, I should have mentioned this at the very beginning of the post -- my bad. (Sorry for the very late response) All in all, I don't think my original post held up well. I guess I was excited to pump out the concept quickly, before the dust settled. Maybe this was a mistake? Usually I make the ~opposite error of never getting around to posting things.
{"url":"https://www.alignmentforum.org/posts/v2SvxGNijBzRYk7Ep/behaviour-manifolds-and-the-hessian-of-the-total-loss-notes","timestamp":"2024-11-09T23:51:08Z","content_type":"text/html","content_length":"1048971","record_id":"<urn:uuid:9d9e8075-8cd0-4533-b45a-9494ed227a93>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00210.warc.gz"}
Speed converter Use this tool to quickly convert speed between meters/s, kilometers/h, miles/h, feet/s, knots and speed of sound. Enter speed value you want to convert Speed units Meters per second (m/s) is a unit of speed or velocity in the International System of Units (SI). It represents the distance traveled in meters per one second of time. Kilometers per hour (km/h) is a unit of speed commonly used in many countries. It measures the distance traveled in kilometers within one hour, providing a metric measurement of velocity. Miles per hour (mph) is a unit of speed predominantly used in the United States and some other countries. It measures the distance traveled in miles within one hour. Feet per second (ft/s) is an imperial unit of speed or velocity. It measures the distance traveled in feet within one second, commonly used in certain engineering and physics contexts. Knots (kt) are a unit of speed used primarily in aviation and maritime contexts. One knot is equal to one nautical mile per hour, providing a measure of speed over water or through the air. The speed of sound varies depending on factors like temperature and medium. In dry air at 20°C (68°F), it's abut 340 meters per second.
{"url":"https://www.convertcase.com/conversion/speed-converter","timestamp":"2024-11-07T18:54:38Z","content_type":"text/html","content_length":"51574","record_id":"<urn:uuid:b88669a9-b617-44f1-83e1-205daaa1eb34>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00365.warc.gz"}
Fibonacci Series Program In Java Using Recursion Last Updated : Mar 11, 2024 IN - Java | Written & Updated By - Pragati In this article we will show you the solution of fibonacci series program in java using recursion, that each digit in the Fibonacci sequence from 0 to 1, 2, 3, 4, 6, 12, 21, 34, and so forth—is equal to the sum of its two predecessors. The fibonacci series' two preceding digits are 0 and 1. There are two approaches to implement the Fibonacci series in Java: without recursion and utilising the Fibonacci Number. PreviousNumber and nextNumber in the Fibonacci Recursive Series has initial values of 0 and 1, correspondingly. The Fibonacci While Loop iterates through Maximum Number. PreviousNumber and nextNumber now have updated values. The justification is still the same as previously. Instead of the number being hardcoded inside the Java Fibonacci Series, a user gets prompted to enter it. The Java function fibonacciRecursion() accepts a number as an argument. because the Java Fibonacci sequence begins with 0, 1, 1, it checks on 0,1,2 and returns 0,1,1. This method will call itself again if the argument n is greater than 3. The call is placed twice. Let's look at an example of the Fibonacci Sequence in Java utilising recursion with a 4 as the input. Java programming is used to calculate the Fibonacci series using both iterative and recursive methods. Two functions are used in this instance: fibonacci (int number) and fibonacci2 (int number). The first one employs recursion to print the Fibonacci series, whereas the second one makes use of a loop or iteration. Step By Step Guide On Fibonacci Series Program In Java Using Recursion :- public class TalkersCode{ static int n1 = 0,n2 = 1,n3 = 0; static void fibbonacci(int count) { if (count > 0) { n3 = n1 + n2; n1 = n2; n2 = n3; System.out.print(" " + n3); fibbonacci(count - 1); public static void main(String args[]) { int count = 5; System.out.print(n1 + " " + n2); fibbonacci(count - 2); 1. First, we make a public class called TalkersCode. 2. Then we create a static function. 3. After that we count an integer number with n1, and n2, n3. 4. After that we create the Fibonacci count of "+count+" numbers using system.out.println. 5. After that we define the public static void main as a int count. 6. After that we define fibbonacci count numbers using system.out.println. Conclusion :- The Fibonacci series is indeed a set containing natural no where each successive number is the product of the two preceding numbers, as in fn = fn-1 + fn-2. The first 2 numbers in the Fibonacci numbers always are 1 and 2. We develop a function to calculate Fibonacci numbers in this Java programme example for the Fibonacci sequence, and then we print those numbers. I hope this article on fibonacci series program in java using recursion helps you and the steps and method mentioned above are easy to follow and implement.
{"url":"https://talkerscode.com/howto/fibonacci-series-program-in-java-using-recursion.php","timestamp":"2024-11-06T02:48:35Z","content_type":"text/html","content_length":"59964","record_id":"<urn:uuid:99bfd8e3-a2bd-407d-9a26-ef402149c41a>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00613.warc.gz"}
Formally Verifying the Ethereum 2.0 Phase 0 Specifications | Consensys The Automated Verification team on Consensys R&D have been working on a formal specification and verification of the Beacon Chain for a few months. We are happy to report that lots of progress has been made and although not complete yet, we have managed to develop a solid and formally verified kernel of the Beacon Chain. For the first time, our work provides an unmatched level of trustworthiness to the core foundations of the Eth2.0 infrastructure. Verification vs. Testing We used the award-winning verification-aware programming language Dafny to write a formal (functional and logical) specification of each Beacon Chain function, an implementation of each function, and a proof that the implementation conforms to its specification. In other words, we have mathematically verified the absence of bugs. The implementations that we have eventually proved correct are based on the official Eth2.0 specifications with the caveat that we have fixed and reported some bugs and inconsistencies. Our methodology is different to testing as we mathematically prove conformance of the functions to their specifications, for all inputs. Testing cannot range over infinitely many inputs, and as a consequence can discover bugs but not prove the absence of bugs. And the best thing is that we do not need to publish a paper nor to review the proofs. The proofs are part of the code base and written as programs. Yes, in Dafny, you can write a proof as a developer-friendly program. Also, the proofs are mechanically checked by a theorem prover, leaving no room for incomplete or flawed proofs. Properties We Have Proved The properties range from the absence of arithmetic under/overflows and index out of bounds, the conformance of each function to logical (first-order logic) pre/post-conditions (merkelise example here), to more complex ones involving functions’ compositions. For example, we have the following property of the SSZ Serialise/Deserialise functions: for each object x, Deserialise(Serialise(x)) = x, i.e. deserialising a serialised object returns the original object. We have also established a number of invariants, and used them to prove that the core operations of the Beacon Chain and ForkChoice (state_transition, on_block) actually build a chain of blocks: for any block b in the store, the ancestors of b form a finite totally ordered sequence leading to the genesis block, which is the main property of a blockchain! The Benefits of Formal Verification Any formal methodist would insist that verification is a security best practice. Here’s exactly how this methodology ensures a secure and trustworthy infrastructure for Ethereum 2.0. Functional Specification First, we have lifted the official Eth2.0 specifications to a formal logical and functional specification. For each function, we formally define what the function is expected to compute, not how. This provides language-agnostic developer-friendly reference specifications that can be used to develop more secure implementations, with less effort. Second, our specifications, implementations and proof architecture are modular. As a result, we can easily experiment with new implementations (e.g. optimisations) and check their impact on the overall system. Think of a clever hack to implement a function? Change the implementation and ask Dafny to check that it still conforms to its specification. If it does, the proofs of the components that use this function are not impacted. Third, our implementations are executable. We can compile and run a Dafny program. Even better, you can automatically generate code in some popular programming languages like C#, Go (and soon Java) from the Dafny code. This can be used to complement existing code bases or to generate certified tests. The implementation to be tested can use our proved-correct functions to compute the expected result of a test and check it against its own result. Everything in a Single Language Last but not least, our code base is self-contained. It contains the specifications, implementations, documentations, and proofs, all in a single, readable, simple and semantically well-defined programming language. Questions and Considerations What about the soundness of the verification engine? You might be wondering, "what if the Dafny compiler/verifier is buggy?" We actually know Dafny is buggy (dafny repo issues), but we do not rely on the absence of bugs in Dafny. We rely on Dafny (and its verification engine) to be sound. Soundness means that when Dafny reports that proofs are correct, they are indeed correct. What if the specification we have written is not the right one? In this case, we would prove conformance to a wrong requirement. Yes, this can happen and there is no silver bullet to fix this problem. However, as we mentioned before, Dafny is executable. This enables us to run the code and get some confidence that our specifications are the right ones. And our specifications are written in first-order logic with no room for dispute about the meaning, so if you notice a problem, let us know and we will fix it. What if Dafny cannot prove that an implementation conforms to a specification? This can happen, but in this case Dafny has some feedback mechanisms to help investigate what steps of a proof cannot be verified. And until now, we have always managed to build proofs that Dafny can automatically check.We welcome your feedback, so please check out our eth2.0-dafny repository. We’ve been excited to watch Ethereum 2.0 development reach its recent testnet milestones, and we look forward to working with teams across the ecosystem to ensure the next phase of the network is built on a rock solid foundation. Acknowledgement: Thanks to my teammates Joanne Fuller, Roberto Saltini (Automated Verification team), Nicolas Liochon, and to Avery Erwin for comments on a preliminary version of this post. Keep Up With Ethereum 2.0 Subscribe to the Consensys newsletter to get the latest Eth2 news straight to your inbox.
{"url":"https://consensys.io/blog/formally-verifying-the-ethereum-2-0-phase-0-specifications","timestamp":"2024-11-12T20:02:38Z","content_type":"text/html","content_length":"110377","record_id":"<urn:uuid:b1161982-eefc-4e04-8805-8d4ae43c8b47>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00887.warc.gz"}
BlogWhat is hashrate and why is it important? The only way to get cryptocurrency is mining, which is the process of checking transactions and creating new blocks in the blockchain. Mining is based on the calculation of complex mathematical problems, which requires high-performance computer processors and a high hashrate - the power at which these computers perform the calculations. What hashrate exactly is and why it is so important — the subject of this article. Why is hashrate important? Bitcoin, like some other blockchain networks, is based on the Proof of Work (PoW) algorithm, which is responsible for validating transactions. This process requires a number of complex tasks to be performed until a valid result - a hash - is created. The process of generating a valid hash is random and requires millions of calculations, making computer speed a key factor in mining. This speed is referred to as the hashrate. Thus, the higher the hash rate, the higher the probability of creating a valid hash and validating the transaction. Hashrate is measured in units of: • 1 kilohash per second (1KH/s): 1 thousand calculations per second; • 1 megahash per second (1 MH/s): 1 million calculations per second; • 1 gigahash per second (1 GH/s): 1 billion calculations per second. Hashrate is a key indicator of blockchain network security because it is a rough estimate of how secure the blockchain is against potential hacker attacks or scams. As the number of miners on the network increases, so does the hash rate, making it more difficult for fraudsters or hackers to attack the network. In addition, the higher the hash rate, the harder it is for an individual miner to gain control of a larger part of the network. What is bitcoin's current hash rate? Bitcoin's hash rate has reached an all-time high of 281.79 TH/s. This means that the processing power of the network can run 281.79 trillion calculations per second. Why does the hash rate change? Usually mining is a capital-intensive process that requires equipment, electricity, manpower, maintenance and time. Therefore, miners are rewarded with new cryptocurrency units for each successfully mined block. If the hash rate is high, the mining difficulty increases and more energy is required to verify the block. At the same time, a lower hash rate leads to less difficulty and requires less energy to verify the block. Thus, the difficulty of mining and the cost/profit ratio are key metrics for miners when choosing the cryptocurrency they want to mine. Hence this dynamic causes the hash rate to change over time. What is mining difficulty? Mining difficulty, which is closely related to hashrate, is a metric that indicates how difficult it is to mine a particular blockchain. The more miners come over time, more hash rate becomes available, making mining more competitive, difficult and expensive. This leads to longer block confirmation time and can eventually make the blockchain unusable. To avoid this, the mining difficulty is adjusted by the blockchain periodically to accommodate changes in hash rate. Thus, the difficulty of mining increases or decreases depending on the number of miners and the total hashrate. Understanding what a hashrate is, as well as using the most profitable mining strategies, allows crypto enthusiasts to save personal time and resources of their equipment, as well as increase their income significantly.
{"url":"https://greencryptopay.com/blog/what-is-hashrate-and-why-is-it-important","timestamp":"2024-11-09T06:36:47Z","content_type":"text/html","content_length":"1049536","record_id":"<urn:uuid:43f9694c-b31a-4e71-af8e-60f2a6bda430>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00316.warc.gz"}
Most ridiculous green proposal 7 minutes ago, Jay McKinsey said: Actually the article states the amount as $1,000 more for a car. As long as the sticker also includes how much more they will spend in gasoline costs it sounds like a fine idea. Perhaps the sticker could also explain that the automakers who agreed to the new restrictions did so to get their 2008 bailouts and are now not living up to their agreement. The most notable company that did not take bailout money is Ford, the lead company in the agreement with CA to keep using our standards. The F-150 is going to remain a CA emissions vehicle. Oh and there are 14 other explicit CARB states and a total of 23 states that have filed suit to *keep* CA emissions. Ohh you just keep digging a deeper grave. Lmao yes Ford did walk away your correct, Mulally walked away from EV/ Hybrids. The father of ecboost and saved Ford. Obama bailed out GM, who in turn built factories in China. Odd how much green tech roles out of China... Time for a glass of wine...this conversation is getting old. I'll be back....lol a f 150 in California..peace out. I hope nobody here finds this ridiculous: 58 minutes ago, Wombat said: I hope nobody here finds this ridiculous: Oh you have to pull out that best quote "The DOD—one of the largest single consumers of energy globally—aspires to eliminate all fossil fuel dependency." • 3 • 1 14 hours ago, Guy Daley said: Brilliant move on that purchase. You just happened to be in the right place at the right time and that asset will pay dividends for as long as you have it. OUTSTANDING investment. congrats. Actually, you can pick up these old 2500 gallon tank propane trucks pretty much everywhere in the USA. A BIG batch of them were made in the 80's and they are all wearing out and the replacement batch all made pretty much... NOW are all 7000gallon tank gargantuan beasts which means fewer trips to pick up Propane and more customers served per operator per day. The Trick is the ability to get these older certified 2500 gallon tanks filled, but since mine is still on the truck and I know a guy, no problem. Otherwise you will have to take it off the truck and put it on a concrete pad like everyone else which is what most everyone is doing. There are tens of thousands of these old propane trucks around. Heck, guy I got mine from was tearing his hair out trying to get rid of them(he had 70 or so) as normal channels do not know how to list them for him to get rid of them. • 1 • 1 11 hours ago, Jay McKinsey said: The ban on diesel trucks is set for 2045. It will be phased in: But just 10 years from now, half of all new trucks and vans sold in California in classes 4 through 8—which includes everything from the package delivery van to the biggest garbage trucks—will have to be ZEVs. And by 2035, CARB says that 55 percent of all class 2b-3 trucks, 75 percent of all class 4 through 6 trucks and vans, and 40 percent of all class 7 and 8 trucks and tractors sold in the state have to be ZEVs. And just what Nickel magical mine is going to open up for all of this? Hrmm? I mean common, TESLA right now cannot increase battery production because it is pinched by Nickel. Philippines, Indonesia, New Caldonia, Australia, Brazil, Canada are all restricting their Nickel mines with ever increasing environmental regulations(Philippines lost 50% of their Nickel production due to said regulations). Those countries account for +++75% of the worlds Nickel Reserves by themselves. The Whole world is saying they will stop ICE production... uh huh, 100Million cars a year currently and do remember India/China/SE Asia/Africa are JUST starting to own autos which is 3/4 the worlds population, so right there is a 1-->2 Billion Vehicles required all by itself, and each TESLA 3 has 35kg of Nickel in it. That right there is 70Mt of NIckel. Farming truck/delivery truck will require 4X as much per and there goes another 100,000,000 of those vehicles. According to TESLA semi truck requires roughly 15X as much as a Model 3 for another 10million vehicles. Farming equipment even more as they have to run all day often for 20 hours at a shot during planting/harvest. So, even in perfect world, this is well over 100Mt of Nickel. Nickel is mostly used for steel currently and there is NO replacement for Nickel in steel production/use. None. Do not worry! Total worldwide nickel production from mining is a massive... 2.7Mt... and Total Nickel world reserves stand at 90MT... Sweet, no problem, lets just turn 100% of our world Nickel reserves into a brief 10-->20 year joy ride to meet all these nations so called "goals" according to their "brilliant" politicians. Give you a hint: Your utopia needs a different battery. If its not made from Hydrogen, Silicon, Nitrogen, Oxygen, Aluminum, Iron, Calcium, SOdium, Chlorine, Magnesium as at least one or two of its main components, you got Shit dude unless you wish to pretend you get your utopia but the other 8 Billion people get squalor.... Guess what they will do? Trample you and hang your pretentious ass • 2 2 hours ago, Jay McKinsey said: Oh you have to pull out that best quote "The DOD—one of the largest single consumers of energy globally—aspires to eliminate all fossil fuel dependency." Yeah, not like there is any strategic value to their mission?!? It's not like China or Russia would ever dare to try cut the fuel supply of US forces in a conflict?!? 9 minutes ago, footeab@yahoo.com said: And just what Nickel magical mine is going to open up for all of this? Hrmm? I mean common, TESLA right now cannot increase battery production because it is pinched by Nickel. Philippines, Indonesia, New Caldonia, Australia, Brazil, Canada are all restricting their Nickel mines with ever increasing environmental regulations(Philippines lost 50% of their Nickel production due to said regulations). Those countries account for +++75% of the worlds Nickel Reserves by themselves. The Whole world is saying they will stop ICE production... uh huh, 100Million cars a year currently and do remember India/China/SE Asia/Africa are JUST starting to own autos which is 3/4 the worlds population, so right there is a 1-->2 Billion Vehicles required all by itself, and each TESLA 3 has 35kg of Nickel in it. That right there is 70Mt of NIckel. Farming truck/delivery truck will require 4X as much per and there goes another 100,000,000 of those vehicles. According to TESLA semi truck requires roughly 15X as much as a Model 3 for another 10million vehicles. Farming equipment even more as they have to run all day often for 20 hours at a shot during planting /harvest. So, even in perfect world, this is well over 100Mt of Nickel. Nickel is mostly used for steel currently and there is NO replacement for Nickel in steel production/use. None. Do not worry! Total worldwide nickel production from mining is a massive... 2.7Mt... and Total Nickel world reserves stand at 90MT... Sweet, no problem, lets just turn 100% of our world Nickel reserves into a brief 10-->20 year joy ride to meet all these nations so called "goals" according to their "brilliant" politicians. Give you a hint: Your utopia needs a different battery. If its not made from Hydrogen, Silicon, Nitrogen, Oxygen, Aluminum, Iron, Calcium, SOdium, Chlorine, Magnesium as at least one or two of its main components, you got Shit dude unless you wish to pretend you get your utopia but the other 8 Billion people get squalor.... Guess what they will do? Trample you and hang your pretentious ass high. Actually Footeab, Australia has a virtually inexhaustable supply of Nickel, but most of it is Nickel Hydroxide, not the lucrative Sulphate variety. When the price hit $120,000/tonne approx 12 years ago, we invested heavily in it, but China and Japan stopped adding it to their steel and just made "pig iron" instead. The arse fell out of the market and BHP lost billions. Even now, it is not profitable to mine. Price would have to head North of $40,000/tonne to get us interested. The current price suggests the market is still over-supplied, would take many millions of EV's before the market got tight enough to justify any investment. 17 minutes ago, footeab@yahoo.com said: And just what Nickel magical mine is going to open up for all of this? Hrmm? I mean common, TESLA right now cannot increase battery production because it is pinched by Nickel. Philippines, Indonesia, New Caldonia, Australia, Brazil, Canada are all restricting their Nickel mines with ever increasing environmental regulations(Philippines lost 50% of their Nickel production due to said regulations). Those countries account for +++75% of the worlds Nickel Reserves by themselves. The Whole world is saying they will stop ICE production... uh huh, 100Million cars a year currently and do remember India/China/SE Asia/Africa are JUST starting to own autos which is 3/4 the worlds population, so right there is a 1-->2 Billion Vehicles required all by itself, and each TESLA 3 has 35kg of Nickel in it. That right there is 70Mt of NIckel. Farming truck/delivery truck will require 4X as much per and there goes another 100,000,000 of those vehicles. According to TESLA semi truck requires roughly 15X as much as a Model 3 for another 10million vehicles. Farming equipment even more as they have to run all day often for 20 hours at a shot during planting /harvest. So, even in perfect world, this is well over 100Mt of Nickel. Nickel is mostly used for steel currently and there is NO replacement for Nickel in steel production/use. None. Do not worry! Total worldwide nickel production from mining is a massive... 2.7Mt... and Total Nickel world reserves stand at 90MT... Sweet, no problem, lets just turn 100% of our world Nickel reserves into a brief 10-->20 year joy ride to meet all these nations so called "goals" according to their "brilliant" politicians. Give you a hint: Your utopia needs a different battery. If its not made from Hydrogen, Silicon, Nitrogen, Oxygen, Aluminum, Iron, Calcium, SOdium, Chlorine, Magnesium as at least one or two of its main components, you got Shit dude unless you wish to pretend you get your utopia but the other 8 Billion people get squalor.... Guess what they will do? Trample you and hang your pretentious ass high. Bwahaha! You are always hilarious! Unlike your precious fossil fuel molecules, elements like nickel are 100% recyclable, so they only have to be mined from the Earth once. And we have the cost of nickel which tells you the association between supply and demand: Gee, tell me again how there is a shortage of Nickel? Just now, Wombat said: Actually Footeab, Australia has a virtually inexhaustable supply of Nickel, but most of it is Nickel Hydroxide, not the lucrative Sulphate variety. When the price hit $120,000/tonne approx 12 years ago, we invested heavily in it, but China and Japan stopped adding it to their steel and just made "pig iron" instead. The arse fell out of the market and BHP lost billions. Even now, it is not profitable to mine. Price would have to head North of $40,000/tonne to get us interested. The current price suggests the market is still over-supplied, would take many millions of EV's before the market got tight enough to justify any investment. PS: Russia is the worlds largest producer of Nickel. Company is called Norilsk Nickel or something like that. 8 hours ago, Boat said: Tech reducing the price point and increased popularity will be the actual mandate in the end no matter what the politicians say. That’s what will drive demand for a transition and the extent of Agreed Boat it comes down to the consumer at the end of the day. As a Brit I am interested to hear from those Californians on this site (or others) regarding the impact of renewables on their power supply and what the benefits other than the obvious reduction in carbon footprint this has had on the State. Or are we talking massive subsidies on green projects to get to the current levels of renewables supplied and unfair additional regulations imposed on utilities that promote fossil fuel/nuclear? How has this affected cost to the consumer compared to other States and is the power supply maintained as it should be or are there outages/blackouts?? Has the whole Green mantra been pushed onto the people of California willingly or is it just a political decision where the people have no say?? Lot of questions I know, and sorry for the rambled way I've asked them, just off the top of my jumbled head! @Jay McKinsey as a guy who backs renewables I would be interested in your perspective too, so this remains a balanced discussion. 5 minutes ago, Wombat said: PS: Russia is the worlds largest producer of Nickel. Company is called Norilsk Nickel or something like that. PSS: Just dig this up: and this: but I should point out there is a big difference between reserves and resources. It is safe to assume there is a couple billion tonnes of resources, whereas reserves are just what has already been drilled for. As I say, there has been very little exploration these last few years. 26 minutes ago, Jay McKinsey said: Bwahaha! You are always hilarious! Unlike your precious fossil fuel molecules, elements like nickel are 100% recyclable, so they only have to be mined from the Earth once. And we have the cost of nickel which tells you the association between supply and demand: Gee, tell me again how there is a shortage of Nickel? Jay you dont have many who agree with you on this site, but in your defence you always try to back up your claims with data. I think for the world to truly get to where you wish it to be then there needs to be big improvements in battery storage and that is new tech that we have no idea how long, if ever, it will take to develop. However I have faith in humanity that this will be overcome at some point in the relatively near future (10-15 years) as there is a huge pot of gold for whoever does succeed, that is why there are so many different tech companies desperately trying to develop a miriad of projects currently, (my hope is for a graphene battery). Just in the UK we have 580 projects either in operation, approved or well on the way to being approved https://www.renewableuk.com/news/517015/Governments-announcement-on-battery-storage-will-boost-investment-in-new-technology-.htm We already have the worlds largest offshore wind farm https://www.azocleantech.com/article.aspx?ArticleID=1072 I just think that current renewables don't cut it frankly, and the Green mantra with climate change is the worlds biggest money making hoax ever (Covid 19 apart). The green renewables have been rushed into service in many ways and they need big improvements before they can think about replacing NG and nuclear in my opinion. Welcome your thoughts. Edited by Rob Plant Guy Daley &plus; 49 5 hours ago, footeab@yahoo.com said: Actually, you can pick up these old 2500 gallon tank propane trucks pretty much everywhere in the USA. A BIG batch of them were made in the 80's and they are all wearing out and the replacement batch all made pretty much... NOW are all 7000gallon tank gargantuan beasts which means fewer trips to pick up Propane and more customers served per operator per day. The Trick is the ability to get these older certified 2500 gallon tanks filled, but since mine is still on the truck and I know a guy, no problem. Otherwise you will have to take it off the truck and put it on a concrete pad like everyone else which is what most everyone is doing. There are tens of thousands of these old propane trucks around. Heck, guy I got mine from was tearing his hair out trying to get rid of them(he had 70 or so) as normal channels do not know how to list them for him to get rid of them. Tens of thousands of these old propane trucks around? Not around here. We've got an abundance of old farm trucks (north central KS). I suppose, it all depends on where you live and I wouldn't want to go interstate to chase one down. Like I said, smart move because propane is dirt cheap NOW. For how long, who knows. 11 hours ago, Jay McKinsey said: Ah, how quaint you started in 2011 when there was a giant spike... Can tell you are an "economist" and not an engineer. As I was not talking about total use 10 years ago, but 10 years in the FUTURE. And you ignored world reserves have doubled since 2011 and the Doubled number is the one I used. Why? Reserves are tied directly to $$$/Ton. Cost spiked, reserves doubled... who knew... Reserves are tied to economics... But not according to economists..... And no, nothing is 100% recyclable, Nickel is highly recyclable and is running about 85% currently making less pure alloys which are then combined in even less pure uses for steel making. One thing you utopian dreamers refuse to acknowledge. ALL recycled material is NEVER used in its original form but always in less pure uses where impurities introduced in the sorting/recycling process are not as important in the secondary material uses. 12 hours ago, Wombat said: Actually Footeab, Australia has a virtually inexhaustable supply of Nickel, but most of it is Nickel Hydroxide, not the lucrative Sulphate variety. When the price hit $120,000/tonne approx 12 years ago, we invested heavily in it, but China and Japan stopped adding it to their steel and just made "pig iron" instead. The arse fell out of the market and BHP lost billions. Even now, it is not profitable to mine. Price would have to head North of $40,000/tonne to get us interested. The current price suggests the market is still over-supplied, would take many millions of EV's before the market got tight enough to justify any investment. No one cares about a couple million EV's and that is not the topic of discussion. It takes a decade to bring a Nickel mine online and all these countries claim they will eliminate ICE by 2030/ 2040... So, lets assume you can use a calendar and can count to 10 without taking off your socks... Which means hundreds of millions of vehicles if not a Billion or 2 Billion Vehicles not a couple million. Which brings up my previous calculation using TESLA's numbers for Nickel content in their vehicles. Need for world Production for ONLY vehicles and no grid storage, uh hem, NO grid storage, to increase by 100MT verses the current 2.7MT... a "small" 3700% change... So, Australia with its supposed "infinite" supply of Nickel needs to get cracking... Elon Musk has been begging for Nickel production; so has every single car manufacturer. You might want to start listening to Eurythmics "sweet dreams" Edited by footeab@yahoo.com Dan Warnick &plus; 6,100 9 minutes ago, Dan Warnick said: Great song. 😃 Yes, annoying my minions in the office currently playing it... Ah, back to work. EDIT: To Wombat. Oh yea, and Australia has the largest reserves of Nickel on the planet of the Sulphide stuff... Dear Wombat... Australian production has nothing to do with Hydroxide vrs Sulphate... So, while I do not know for sure, I would say the COST of Australian Nickel was way too high due to Environmental regulations of the Sulphide mines and NOT the Hydroxide deposits as YOU claim. Same reason Phillipine production has tanked recently. Edited by footeab@yahoo.com 11 hours ago, Rob Plant said: Jay you dont have many who agree with you on this site, but in your defence you always try to back up your claims with data. I think for the world to truly get to where you wish it to be then there needs to be big improvements in battery storage and that is new tech that we have no idea how long, if ever, it will take to develop. However I have faith in humanity that this will be overcome at some point in the relatively near future (10-15 years) as there is a huge pot of gold for whoever does succeed, that is why there are so many different tech companies desperately trying to develop a miriad of projects currently, (my hope is for a graphene battery). Just in the UK we have 580 projects either in operation, approved or well on the way to being approved https://www.renewableuk.com/news/517015/Governments-announcement-on-battery-storage-will-boost-investment-in-new-technology-.htm We already have the worlds largest offshore wind farm https://www.azocleantech.com/article.aspx?ArticleID=1072 I just think that current renewables don't cut it frankly, and the Green mantra with climate change is the worlds biggest money making hoax ever (Covid 19 apart). The green renewables have been rushed into service in many ways and they need big improvements before they can think about replacing NG and nuclear in my opinion. Welcome your thoughts. I am old enough to remember a despicable UK Prime Minister called Harold Wilson. In the 1960's,he agreed with the vicious National Union of Mineworkers that British coal mining would be subsidised with taxpayer money and that a fleet of coal-fired power stations would be built to burn it. The cost crippled the economy and the Trent valley became known as Sulfur valley from the pollution coming from the power stations along the River Trent. The valley was frequently filled with cloud from the water evaporated in the cooling towers. As with the strutting union leaders of the 1960's we are now being shafted by people who are only concerned with promoting their own interests at the expense of the public. In the UK,I remember that we were asked to agree to paying extra on our electricity bills for the renewable energy content. Only a handful of consumers agreed to pay extra,so it was then made compulsory. • 1 • 1 • 1 29 minutes ago, footeab@yahoo.com said: No one cares about a couple million EV's and that is not the topic of discussion. It takes a decade to bring a Nickel mine online and all these countries claim they will eliminate ICE by 2030/ 2040... So, lets assume you can use a calendar and can count to 10 without taking off your socks... Which means hundreds of millions of vehicles if not a Billion or 2 Billion Vehicles not a couple million. Which brings up my previous calculation using TESLA's numbers for Nickel content in their vehicles. Need for world Production for ONLY vehicles and no grid storage, uh hem, NO grid storage, to increase by 100MT verses the current 2.7MT... a "small" 3700% change... So, Australia with its supposed "infinite" supply of Nickel needs to get cracking... Elon Musk has been begging for Nickel production; so has every single car manufacturer. You might want to start listening to Eurythmics "sweet dreams" Australia has nickel sulphide deposits,not nickel sulphate. markslawson &plus; 1,057 Okay, I'll add in an $A16 billion ($US11.2 billion) project to run an undersea power cable from Australia to Singapore, in order to transmit green power. Is there enough demand in either Singapore or the adjacent Malaysia to justify such an expensive connection and would either country be willing to buy power from it? For that matter where is the green power to send over this cable to come from? The top end of Australia is sparsely populated and the bulk of the renewable projects are at the other end of the continent. It is madness yet there are those who take it seriously. 2 hours ago, Richard D said: Australia has nickel sulphide deposits,not nickel sulphate. Australia has both is my ignorant brief reading of the subject... According to this: https://www.benchmarkminerals.com/membership/esg-and-nickel-wading-through-the-issues/ Sulphide vs Laterite Processing Routes Nickel is produced from two primary resource types, sulphide ores and oxide ores more commonly referred to as laterites. Sulphide deposits tend to be located outside of the tropics (although there are a smattering of deposits in South America, South Africa and Australia). Sulphide deposits are predominantly exploited in Russia, Canada, Scandinavian countries, China and Australia. Laterites are distributed in the tropics with Indonesia, Australia, New Caledonia, Philippines, Papua New Guinea, Cuba, Brazil and other countries hosting the majority of the laterite resources. Approximately 70% of the world’s nickel resources are in the form of laterites with the remainder in sulphides. However, until the late 1990s 70% of global nickel production was from the exploitation of sulphides. Another source: https://www.spglobal.com/marketintelligence/en/news-insights/blog/battery-grade-nickel-supply-will-suffer-as-major-nickel-discoveries-slump Our analysis of nickel discoveries has identified 50 major discoveries made from 1990 to 2019, containing 96.4 million tonnes of nickel in oxide and sulfide deposits. Three were discovered in the past decade and account for only 7% of the total nickel discovered — a sharp decline from the 1990s, when new nickel discoveries were much more common. Nickel sulfide deposits (currently the most reliable source for class 1 nickel) account for only 19%, or 18.1 Mt, of the nickel in these discoveries. Given the forecast demand increase for battery-grade nickel in the upcoming years, the pool of quality assets that can easily produce class 1 material is shallow. SO, 18Mt... of battery grade Nickel.... Ouch.... Need 100Mt... 14 hours ago, Rob Plant said: Jay you dont have many who agree with you on this site, but in your defence you always try to back up your claims with data. I think for the world to truly get to where you wish it to be then there needs to be big improvements in battery storage and that is new tech that we have no idea how long, if ever, it will take to develop. However I have faith in humanity that this will be overcome at some point in the relatively near future (10-15 years) as there is a huge pot of gold for whoever does succeed, that is why there are so many different tech companies desperately trying to develop a miriad of projects currently, (my hope is for a graphene battery). Just in the UK we have 580 projects either in operation, approved or well on the way to being approved https://www.renewableuk.com/news/517015/Governments-announcement-on-battery-storage-will-boost-investment-in-new-technology-.htm We already have the worlds largest offshore wind farm https://www.azocleantech.com/article.aspx?ArticleID=1072 I just think that current renewables don't cut it frankly, and the Green mantra with climate change is the worlds biggest money making hoax ever (Covid 19 apart). The green renewables have been rushed into service in many ways and they need big improvements before they can think about replacing NG and nuclear in my opinion. Welcome your thoughts. 'Rob, thanks for noting that I back up my claims with data. As to your question in your other post, California is very pro renewables, nothing has been foisted upon us. Quite the opposite, renewables are a growing industry that we are proud to be one of the world leaders in. It is a money making exercise for us. You can think climate change is a hoax but air pollution certainly isn't and that is what initiated our move toward renewables. Then it became clear that regardless of the original investment motivation, we had discovered exponential tech whose cost curves were responding just like Moore's Law. So we are applying the exponential business model that we pioneered here in Silicon Valley to renewable tech. The rest is our mode of capitalism, a new market entrant with better, lower cost tech disrupting the old fashioned incumbent. The investments made in renewables are leading to exponentially decreasing costs and increasing capabilities. We don't get to some ideal solution of which you desire without working our way along the cost / investment curve, refining and developing the technology is integral to the process as we go. The mistake that so many here make is to think that we should just work out the solution in a lab and then unleash it on the world. Reality is that the way you develop that solution is by delivering incrementally better and lower cost solutions to the market and eventually we will get to the tech that solves all the market needs. The renewables of today just need to be good enough to move us along to the next marginal unit of the market and they are doing a superb job of it. Then we will move onto the marginal unit after that... Today's renewables are beginning to replace new NG build and since coal peaked in 2008 two thirds of its replacement has been NG and one third solar wind. Batteries are currently the key player in moving us along to the next marginal unit. However the most important thing is that the investment function has shifted dramatically to renewables. Yesterday's investment curve is today's economic reality and today's investment curve is tomorrow's economic reality. 1 hour ago, markslawson said: Okay, I'll add in an $A16 billion ($US11.2 billion) project to run an undersea power cable from Australia to Singapore, in order to transmit green power. Is there enough demand in either Singapore or the adjacent Malaysia to justify such an expensive connection and would either country be willing to buy power from it? For that matter where is the green power to send over this cable to come from? The top end of Australia is sparsely populated and the bulk of the renewable projects are at the other end of the continent. It is madness yet there are those who take it Mark, it is well publicized that they are planning on building a massive solar farm in the north where the cable goes out to sea. https://www.pv-magazine-australia.com/2020/05/27/ Singapore based independent electricity retailer iSwitch, one of the city-state’s top 3 retailers and its largest green retailer, has pledged its support to be a foundation off-taker for the solar energy produced by the proposed Sun Cable project. https://www.pv-magazine-australia.com/2019/10/01/singapores-largest-green-energy-retailer-pledges-itself-to-sun-cable-project/ 29 minutes ago, footeab@yahoo.com said: Australia has both is my ignorant brief reading of the subject... According to this: https://www.benchmarkminerals.com/membership/esg-and-nickel-wading-through-the-issues/ Sulphide vs Laterite Processing Routes Nickel is produced from two primary resource types, sulphide ores and oxide ores more commonly referred to as laterites. Sulphide deposits tend to be located outside of the tropics (although there are a smattering of deposits in South America, South Africa and Australia). Sulphide deposits are predominantly exploited in Russia, Canada, Scandinavian countries, China and Australia. Laterites are distributed in the tropics with Indonesia, Australia, New Caledonia, Philippines, Papua New Guinea, Cuba, Brazil and other countries hosting the majority of the laterite resources. Approximately 70% of the world’s nickel resources are in the form of laterites with the remainder in sulphides. However, until the late 1990s 70% of global nickel production was from the exploitation of sulphides. Another source: https://www.spglobal.com/marketintelligence/en/news-insights/blog/battery-grade-nickel-supply-will-suffer-as-major-nickel-discoveries-slump Our analysis of nickel discoveries has identified 50 major discoveries made from 1990 to 2019, containing 96.4 million tonnes of nickel in oxide and sulfide deposits. Three were discovered in the past decade and account for only 7% of the total nickel discovered — a sharp decline from the 1990s, when new nickel discoveries were much more common. Nickel sulfide deposits (currently the most reliable source for class 1 nickel) account for only 19%, or 18.1 Mt, of the nickel in these discoveries. Given the forecast demand increase for battery-grade nickel in the upcoming years, the pool of quality assets that can easily produce class 1 material is shallow. SO, 18Mt... of battery grade Nickel.... Ouch.... Need 100Mt... You are apparently unaware that not all batteries contain nickel. Tesla's LFP battery that they are using for grid applications and some vehicles uses neither cobalt or nickel. 4 hours ago, footeab@yahoo.com said: Ah, how quaint you started in 2011 when there was a giant spike... Can tell you are an "economist" and not an engineer. As I was not talking about total use 10 years ago, but 10 years in the FUTURE. And you ignored world reserves have doubled since 2011 and the Doubled number is the one I used. Why? Reserves are tied directly to $$$/Ton. Cost spiked, reserves doubled... who knew... Reserves are tied to economics... But not according to economists..... And no, nothing is 100% recyclable, Nickel is highly recyclable and is running about 85% currently making less pure alloys which are then combined in even less pure uses for steel making. One thing you utopian dreamers refuse to acknowledge. ALL recycled material is NEVER used in its original form but always in less pure uses where impurities introduced in the sorting/recycling process are not as important in the secondary material uses. Oh how quaint, can tell you are not an economist. I started 10 years back because anything before that is considered historical and needs to be adjusted for inflation. The chart you present does not adjust for inflation and is thus unrealistic and meant to mislead. And why does it end in 2013? Maybe because it wouldn't show that nice increasing price line if it were current? Try again. btw adjusting for inflation the 1983 price needs to be increased by 161% Edited by Jay McKinsey
{"url":"https://community.oilprice.com/topic/21405-most-ridiculous-green-proposal/page/3/","timestamp":"2024-11-03T03:23:18Z","content_type":"text/html","content_length":"600761","record_id":"<urn:uuid:7309afea-8f21-49f3-8e63-4711e20f8be3>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00012.warc.gz"}
Real-time state update by state-space model Kalman filtering Since R2021b update efficiently updates the state distribution in real time by applying one recursion of the Kalman filter to compute state-distribution moments for the final period of the specified response To compute state-distribution moments by recursive application of the Kalman filter for each period in the specified response data, use filter instead. [nextState,NextStateCov] = update(Mdl,Y) returns the updated state-distribution moments at the final time T, conditioned on the current state distribution, by applying one recursion of the Kalman filter to the fully specified, standard state-space model Mdl given T observed responses Y. nextState and NextStateCov are the mean and covariance, respectively, of the updated state distribution. [nextState,NextStateCov] = update(___,Name,Value) uses additional options specified by one or more name-value arguments, and uses any of the input-argument combinations in the previous syntaxes. For example, update(Mdl,Y,Params=params,SquareRoot=true) sets unknown parameters in the partially specified model Mdl to the values in params, and specifies use of the square-root Kalman filter variant for numerical stability. Compute Only Final State Distribution from Kalman Filter Suppose that a latent process is an AR(1). The state equation is where ${u}_{t}$ is Gaussian with mean 0 and standard deviation 1. Generate a random series of 100 observations from ${x}_{t}$, assuming that the series starts at 1.5. T = 100; ARMdl = arima(AR=0.5,Constant=0,Variance=1); x0 = 1.5; rng(1); % For reproducibility x = simulate(ARMdl,T,Y0=x0); Suppose further that the latent process is subject to additive measurement error. The observation equation is ${y}_{t}={x}_{t}+{\epsilon }_{t},$ where ${\epsilon }_{t}$ is Gaussian with mean 0 and standard deviation 0.75. Together, the latent process and observation equations compose a state-space model. Use the random latent state process (x) and the observation equation to generate observations. Specify the four coefficient matrices. A = 0.5; B = 1; C = 1; D = 0.75; Specify the state-space model using the coefficient matrices. Mdl = State-space model type: ssm State vector length: 1 Observation vector length: 1 State disturbance vector length: 1 Observation innovation vector length: 1 Sample size supported by model: Unlimited State variables: x1, x2,... State disturbances: u1, u2,... Observation series: y1, y2,... Observation innovations: e1, e2,... State equation: x1(t) = (0.50)x1(t-1) + u1(t) Observation equation: y1(t) = x1(t) + (0.75)e1(t) Initial state distribution: Initial state means Initial state covariance matrix x1 1.33 State types Mdl is an ssm model. Verify that the model is correctly specified using the display in the Command Window. The software infers that the state process is stationary. Subsequently, the software sets the initial state mean and covariance to the mean and variance of the stationary distribution of an AR(1) model. Filter the observations through the state-space model, in real time, to obtain the state distribution for period 100. [rtfX100,rtfXVar100] = update(Mdl,y) update applies the Kalman filter to all observations in y, and returns the state estimate of only period 100. Compare the result to the results of filter. [fX,~,output] = filter(Mdl,y); fXVar100 = output(end).FilteredStatesCov tol = 1e-10; discrepencyMeans = fX100 - rtfX100; discrepencyVars = fXVar100 - rtfXVar100; areMeansEqual = norm(discrepencyMeans) < tol areMeansEqual = logical areVarsEqual = norm(discrepencyVars) < tol Like update, the filter function filters the observations through the model, but it returns all intermediate state estimates. Because update returns only the final state estimate, it is more suited to real-time calculations than filter. Filter States in Real Time Consider the simulated data and state-space model in Compute Only Final State Distribution from Kalman Filter. T = 100; ARMdl = arima(AR=0.5,Constant=0,Variance=1); x0 = 1.5; rng(1); % For reproducibility x = simulate(ARMdl,T,Y0=x0); y = x + 0.75*randn(T,1); A = 0.5; B = 1; C = 1; D = 0.75; Mdl = ssm(A,B,C,D); Suppose observations are available sequentially, and consider obtaining the updated state distribution by filtering each new observation as it is available. Simulate the following procedure using a loop. 1. Create variables that store the initial state distribution moments. 2. Filter the incoming observation through the model specifying the current initial state distribution moments. 3. Overwrite the current state distribution moments with the new state distribution moments. 4. Repeat steps 2 and 3 as new observations are available. currentState = Mdl.Mean0; currentStateCov = Mdl.Cov0; newState = zeros(T,1); newStateCov = zeros(T,1); for j = 1:T [newState(j),newStateCov(j)] = update(Mdl,y(j),currentState,currentStateCov); currentState = newState(j); currentStateCov = newStateCov(j); Plot the observations, true state values, and new state means of each period. legend(["True state values" "Observations" "New state values"]) Compare the results to the results of filter. tol = 1e-10; [fX,~,output] = filter(Mdl,y); discrepencyMeans = fX - newState; discrepencyVars = [output.FilteredStatesCov]' - newStateCov; areMeansEqual = norm(discrepencyMeans) < tol areMeansEqual = logical areVarsEqual = norm(discrepencyVars) < tol The real-time filter update, applied to the entire data set sequentially, returns the same state distributions as filter. Nowcast State-Space Model Containing Regression Component Consider that the linear relationship between the change in the unemployment rate and the nominal gross national product (nGNP) growth rate is of interest. Suppose the innovations of a mismeasured regression of the first difference of the unemployment rate onto the nGNP growth rate is an ARMA(1,1) series with Gaussian disturbances (that is, a regression model with ARMA(1,1) errors and measurement error). Symbolically, and in state-space form, the model is $\begin{array}{l}\left[\begin{array}{c}{x}_{1,t}\\ {x}_{2,t}\end{array}\right]=\left[\begin{array}{cc}\varphi & \theta \\ 0& 0\end{array}\right]\left[\begin{array}{c}{x}_{1,t-1}\\ {x}_{2,t-1}\end {array}\right]+\left[\begin{array}{c}1\\ 1\end{array}\right]{u}_{t}\\ {y}_{t}-\beta {Z}_{t}={x}_{1,t}+\sigma {\epsilon }_{t},\end{array}$ • ${x}_{1,t}$ is the ARMA error series in the regression model. • ${x}_{2,t}$ is a dummy state for the MA(1) effect. • ${y}_{1,t}$ is the observed change in the unemployment rate being deflated by the growth rate of nGNP (${Z}_{t}$). • ${u}_{t}$ is a Gaussian series of disturbances having mean 0 and standard deviation 1. • ${\epsilon }_{\mathit{t}}$ is a Gaussian series of measurement errors with scale $\sigma$. Load the Nelson-Plosser data set, which contains the unemployment rate and nGNP series, among other measurements. Preprocess the data by following this procedure: 1. Remove the leading missing observations. 2. Convert the nGNP series to a return series by using price2ret. 3. Apply the first difference to the unemployment rate series. vars = ["GNPN" "UR"]; DT = rmmissing(DataTable(:,vars)); T = size(DT,1) - 1; % Sample size after differencing Z = [ones(T,1) price2ret(DT.GNPN)]; y = diff(DT.UR); Though this example removes missing values, the Kalman filter accommodates series containing missing values. Specify the coefficient matrices. A = [NaN NaN; 0 0]; B = [1; 1]; C = [1 0]; D = NaN; Specify the state-space model using ssm. Fit the model to all observations except for the final 10 observations (a holdout sample). Use a random set of initial parameter values for optimization. Specify the regression component and its initial value for optimization using the 'Predictors' and 'Beta0' name-value arguments, respectively. Restrict the estimate of $\sigma$ to all positive, real numbers. fh = 10; params0 = [0.3 0.2 0.2]; [EstMdl,estParams] = estimate(Mdl,y(1:T-fh),params0,'Predictors',Z(1:T-fh,:), ... 'Beta0',[0.1 0.2],'lb',[-Inf,-Inf,0,-Inf,-Inf]); Method: Maximum likelihood (fmincon) Sample size: 51 Logarithmic likelihood: -87.2409 Akaike info criterion: 184.482 Bayesian info criterion: 194.141 | Coeff Std Err t Stat Prob c(1) | -0.31780 0.37357 -0.85071 0.39494 c(2) | 1.21242 0.82223 1.47455 0.14034 c(3) | 0.45583 1.32970 0.34281 0.73174 y <- z(1) | 1.32407 0.26525 4.99179 0 y <- z(2) | -24.48733 1.89161 -12.94520 0 | Final State Std Dev t Stat Prob x(1) | -0.38117 0.42842 -0.88971 0.37363 x(2) | 0.23402 0.66222 0.35339 0.72380 EstMdl is an ssm model. Nowcast the unemployment rate into the forecast horizon. Simulate this procedure using a loop: 1. Compute the current state distribution moments by filtering all in-sample observations through the estimated model. 2. When an observation is available in the forecast horizon, filter it through the model. EstMdl does not store the regression coefficients, so you must pass them in using the name-value argument 3. Set the current state distribution state moments to the nowcasts. 4. Repeat steps 2 and 3 when new observations are available. [currentState,currentStateCov] = update(EstMdl,y(1:T-fh), ... unrateF = zeros(fh,2); unrateCovF = cell(fh,1); for j = 1:fh [unrateF(j,:),unrateCovF{j}] = update(EstMdl,y(T-fh+j),currentState,currentStateCov, ... currentState = unrateF(j,:)'; currentStateCov = unrateCovF{j}; Plot the estimated, filtered states. Recall that the first state is the change in the unemployment rate, and the second state helps build the first. plot(dates((end-fh+1):end),[unrateF(:,1) y((end-fh+1):end)]); ylabel('Change in the unemployment rate') title('Filtered Change in the Unemployment Rate') Efficiently Obtain Observation Contributions to Full Data Likelihood The filter function returns only the sum of the loglikelihoods for specified observations. To efficiently compute the loglikelihood of each observation, which can be convenient for custom estimation techniques, use update instead. Consider the simulated data and state-space model in Compute Only Final State Distribution from Kalman Filter. T = 100; ARMdl = arima(AR=0.5,Constant=0,Variance=1); x0 = 1.5; rng(1); % For reproducibility x = simulate(ARMdl,T,Y0=x0); y = x + 0.75*randn(T,1); A = 0.5; B = 1; C = 1; D = 0.75; Mdl = ssm(A,B,C,D); Evaluate the likelihood function for each observation. [~,~,logLj] = update(Mdl,y); logL is a 100-by-1 vector; logL(j) is the loglikelihood evaluated at observation j. Use filter to evaluate the likelihood for the entire data set. [~,logL] = filter(Mdl,y); logL is a scalar representing the full data likelihood. Because the software assumes the sample is randomly drawn, the likelihood for all observations is the sum of the individual loglikelihood values. Confirm this fact. tol = 1e-10; discrepency = logL - sum(logLj); areEqual = discrepency < tol Input Arguments Mdl — Standard state-space model ssm model object Standard state-space model, specified as an ssm model object returned by ssm or estimate. If Mdl is partially specified (that is, it contains unknown parameters), specify estimates of the unknown parameters by using the 'Params' name-value argument. Otherwise, update issues an error. Mdl does not store observed responses or predictor data. Supply the data wherever necessary using the appropriate input or name-value arguments. currentState — Current mean of state distribution Mdl.Mean0 (default) | numeric vector The current mean of the state distribution (in other words, the mean at time 1 before the Kalman filter processes the specified observations Y), specified as an m-by-1 numeric vector. m is the number of states. Data Types: double CurrentStateCov — Current covariance matrix of state distribution Mdl.Cov0 (default) | numeric matrix The current covariance matrix of the state distribution (in other words, the covariance matrix at time 1 before the Kalman filter processes the specified observations Y), specified as an m-by-m symmetric, positive semi-definite numeric matrix. Data Types: double Name-Value Arguments Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter. Before R2021a, use commas to separate each name and value, and enclose Name in quotes. Example: update(Mdl,Y,Params=params,SquareRoot=true) sets unknown parameters in the partially specified model Mdl to the values in params, and specifies use of the square-root Kalman filter variant for numerical stability. Univariate — Flag for applying univariate treatment of multivariate series false (default) | true Flag for applying the univariate treatment of a multivariate series (also known as sequential filtering), specified as true or false. A value of true applies the univariate treatment. The univariate treatment can accelerate and improve numerical stability of the Kalman filter. However, all observation innovations must be uncorrelated. That is, D[t]D[t]' must be diagonal, where D [t], t = 1,...,T, is one of the following: • The matrix D{t} in a time-varying state-space model • The matrix D in a time-invariant state-space model Example: Univariate=true Data Types: logical SquareRoot — Flag for applying square-root Kalman filter variant false (default) | true Flag for applying the square-root Kalman filter variant, specified as true or false. A value of true applies the square-root filter when update implements the Kalman filter. If you suspect that the eigenvalues of the filtered state or forecasted observation covariance matrices are close to zero, set SquareRoot=true. The square-root filter is robust to numerical issues arising from the finite precision of calculations, but it requires more computational resources. Example: SquareRoot=true Data Types: logical Tolerance — Forecast uncertainty threshold 0 (default) | nonnegative scalar Forecast uncertainty threshold, specified as a nonnegative scalar. If the forecast uncertainty for a particular observation is less than Tolerance during numerical estimation, then the software removes the uncertainty corresponding to the observation from the forecast covariance matrix before its inversion. It is best practice to set Tolerance to a small number, for example, le-15, to overcome numerical obstacles during estimation. Example: Tolerance=le-15 Data Types: double Predictors — Predictor variables in state-space model observation equation [] (default) | numeric matrix Predictor variables in the state-space model observation equation, specified as a T-by-d numeric matrix, where d is the number of predictor variables. Row t corresponds to the observed predictors at period t (Z[t]). The expanded observation equation is ${y}_{t}-{Z}_{t}\beta =C{x}_{t}+D{u}_{t}.$ That is, update deflates the observations using the regression component. β is the time-invariant vector of regression coefficients that the software estimates with all other parameters. If there are n observations per period, then the software regresses all predictor series onto each observation. If you specify Predictors, then Mdl must be time invariant. Otherwise, the software returns an error. By default, the software excludes a regression component from the state-space model. Data Types: double Beta — Regression coefficients [] (default) | numeric matrix Regression coefficients corresponding to predictor variables, specified as a d-by-n numeric matrix. d is the number of predictor variables (see Predictors). If Mdl is an estimated state-space model, specify the estimated regression coefficients stored in estParams. Output Arguments nextState — State mean after update applies Kalman filter numeric vector State mean after update applies the Kalman filter, returned as an m-by-1 numeric vector. Elements correspond to the order of the states defined in Mdl (either by the rows of A or as determined by NextStateCov — State covariance matrix after update applies Kalman filter numeric matrix State covariance matrix after update applies the Kalman filter, returned as an m-by-m numeric matrix. Rows and columns correspond to the order of the states defined in Mdl (either by the rows of A or as determined by Mdl.ParamMap). logL — Loglikelihood for each observation numeric vector Loglikelihood for each observation in Y, returned as an T-by-1 numeric vector. More About Real-Time State-Distribution Update The real-time state-distribution update applies one recursion of the Kalman filter to a standard state-space model given a length T response series and the state distribution at time T - 1, to compute the state distribution at time T. Consider a state-space model expressed in compact form $\left[\begin{array}{c}{x}_{t}\\ {y}_{t}\end{array}\right]=\left[\begin{array}{c}{A}_{t}\\ {A}_{t}{C}_{t}\end{array}\right]{x}_{t-1}+\left[\begin{array}{cc}{B}_{t}& 0\\ {B}_{t}{C}_{t}& {D}_{t}\end {array}\right]\left[\begin{array}{c}{u}_{t}\\ {\epsilon }_{t}\end{array}\right].$ The Kalman filter proceeds as follows for each period t: 1. Obtain the forecast distributions for each period in the data by recursively applying the conditional expectation to the state-space equation, given initial state distribution moments x[0|0] and P[0|0], and all observations up to time t − 1 (Y^t−1[1]). The resulting conditional distribution is $\left[\begin{array}{c}{x}_{t}\\ {y}_{t}\end{array}\right]|{Y}_{1}^{t-1}~Ν\left(\left[\begin{array}{c}{\stackrel{^}{x}}_{t|t-1}\\ {\stackrel{^}{y}}_{t|t-1}\end{array}\right],\left[\begin{array} {cc}{P}_{t|t-1}& {L}_{t|t-1}\\ {L}_{t|t-l}^{\prime }& {V}_{t|t-1}\end{array}\right]\right),$ □ ${\stackrel{^}{x}}_{t|t-1}={A}_{t}{\stackrel{^}{x}}_{t-1|t-1},$ the state forecast for time t. □ ${\stackrel{^}{y}}_{t|t-1}={C}_{t}{\stackrel{^}{x}}_{t|t-1},$ the forecasted response for time t. □ ${P}_{t|t-1}={A}_{t}{P}_{t-1|t-1}{A}_{t}^{\prime }+{B}_{t}{B}_{t}^{\prime },$ the state forecast covariance. □ ${V}_{t|t-1}={C}_{t}{P}_{t-1|t-1}{C}_{t}^{\prime }+{D}_{t}{D}_{t}^{\prime },$ the forecasted response covariance. □ ${L}_{t|t-1}={P}_{t-1|t-1}{C}_{t}^{\prime },$ the state and response forecast covariance. 2. Filter observation t through the model to obtain the updated state distribution: □ ${\stackrel{^}{x}}_{t|t}={\stackrel{^}{x}}_{t|t-1}+{L}_{t|t-1}{V}_{t|t-1}^{-1}\left({y}_{t}-{\stackrel{^}{y}}_{t|t-1}\right),$ the state filter estimator. □ ${P}_{t|t}={P}_{t|t-1}-{L}_{t|t-1}{V}_{t|t-1}^{-1}{L}_{t|t-1}^{\prime },$ the state covariance filter estimator. When ${\stackrel{^}{x}}_{t-1|t-1}$ is the current state mean and P[t−1|t−1] is the current state covariance, ${\stackrel{^}{x}}_{t|t}$ is the new state mean and P[t|t] is the new state covariance. • The Kalman filter accommodates missing data by not updating filtered state estimates corresponding to missing observations. In other words, suppose there is a missing observation at period t. Then, the state forecast for period t based on the previous t – 1 observations and filtered state for period t are equivalent. • For explicitly defined state-space models, update applies all predictors to each response series. However, each response series has its own set of regression coefficients. • For efficiency, update does minimal input validation. • In theory, the state covariance matrix must be symmetric and positive semi-definite. update forces symmetry of the covariance matrix before it applies the Kalman filter, but it does not check whether the matrix is positive semi-definite. Alternative Functionality To obtain filtered states for each period in the response data, call the filter function instead. Unlike update, filter performs comprehensive input validation. Version History Introduced in R2021b
{"url":"https://in.mathworks.com/help/econ/ssm.update.html","timestamp":"2024-11-03T12:30:15Z","content_type":"text/html","content_length":"163673","record_id":"<urn:uuid:70a5cca5-0d43-4d1b-ad30-934cce665385>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00140.warc.gz"}
How to Sort Data in R | R-bloggersHow to Sort Data in R [This article was first published on R – Displayr , and kindly contributed to ]. (You can report issue about the content on this page ) Want to share your content on R-bloggers? if you have a blog, or if you don't. There are several different methods for sorting data in R. The best method depends on on the type of data structure you have. In R, you can store data in different object types such as vectors, data frames, matrices and arrays. There are a range of other more complex structures in R, but we will just cover sort functions for some of the more common data types. Object Classes You can identify the type of data structure being used with the class() function, which will return the data type of the object. In the example below, we see that x is a numeric vector of values. Sorting Vectors In R, a vector is one-dimensional lists of values of the same basic data type, such as text or numeric. A simple vector containing 4 numeric values may look like this: To sort a vector in R use the sort() function. See the following example. By default, R will sort the vector in ascending order. However, you can add the decreasing argument to the function, which will explicitly specify the sort order as in the example above. Sorting Data Frames In R, a data frame is an object with multiple rows and multiple columns. Each column in a data frame can be of a different data type. To sort data frames, use the order() function. Consider the following R data frame (df) which contains data on store location, account rep, number of employees and monthly sales: To sort the data frame in descending order by monthly sales, apply the order function with the column to sort by specified in the function: Note that the negative sign (-) in front of the column name (df$sales) is applied to execute the sort in descending order. You can also use the decreasing argument, as in the sort() function. The order() function can also reference the column index rather than the specific column name. For example, the same sort can be achieved using the following syntax to reference the fourth column in the data frame: You can also sort by multiple columns by specifying multiple arguments in the sort function. For example, suppose we wanted to first sort the above data frame by sales rep as the primary sort in ascending order and then by monthly sales in descending order. Sorting Matrices A matrix is similar to a data frame except in that all columns in a matrix must be of the same data type (numeric, character, etc.). Consider the following 4×10 matrix of numeric values. To sort the matrix by the first column in ascending order, we would use the same sort function that we used to previously sort a data frame: Note that we are referencing the first column in the sort function. You can also sort by adding additional column references to the order function. For example, to sort the above matrix by the first column in ascending order as the primary sort and the second column as the secondary sort, add a second column reference to the order function. Note the negative (-) sign in front of the second sort term. This sorts the second column in descending order. We hope you found this post helpful. Find out how to do more in R by checking out our “How to do this in R” series!
{"url":"https://www.r-bloggers.com/2018/09/how-to-sort-data-in-r/","timestamp":"2024-11-10T15:09:50Z","content_type":"text/html","content_length":"108646","record_id":"<urn:uuid:2a6eefd4-c642-4b53-a4da-1fd567560fd9>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00717.warc.gz"}
[GAP Forum] extensions of subgroups of a finite 2-group Benjamin Sambale benjamin.sambale at gmail.com Thu Jul 17 07:54:41 BST 2014 Dear Petr, I don't see how your first question is related to the group G. If you want ALL extensions of A with a group of order 2, you could use CyclicExtensions(A,2) from the GrpConst package. However, if A is small, it is much faster to run through the groups of order 2|A| in the small groups library and check which groups have maximal subgroups isomorphic to A (i.e. the same GroupID). I don't have any good advice concerning your second question. Am 16.07.2014 16:39, schrieb Petr Savicky: > Dear GAP Forum: > Assume, G is a finite 2-group and A is its subgroup. > The groups may be permutation groups or pc groups. > I would like to construct all extensions B of A, such > that [B:A] = 2. > One way is to perform > N := Normalizer(G, A); > R := RightTransversal(N, A); > L := []; > for elm in R do > if elm in A then > continue; > fi; > if elm^2 in A then > Add(L, ClosureGroup(A, elm)); > fi; > od; > Is there a better way? > Another question is as follows. Let G be a 2-group > and H and A its subgroups, such that the intersection > of H and A is trivial. Is it possible to determine > in GAP, whether there is a subgroup B of G, such > that B contains A and is a complement of H in G? > I appreciate any help concerning these questions. > Best regards, > Petr Savicky. > _______________________________________________ > Forum mailing list > Forum at mail.gap-system.org > http://mail.gap-system.org/mailman/listinfo/forum More information about the Forum mailing list
{"url":"https://www.gap-system.org/ForumArchive2/2014/004552.html","timestamp":"2024-11-11T03:29:46Z","content_type":"text/html","content_length":"4858","record_id":"<urn:uuid:2ceb8d9b-e4fe-4718-87b3-cbf4956c611b>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00159.warc.gz"}
Duct Shape Converter | Building Calculators top of page Duct Shape Converter The cross-sectional Area of a duct is a determining factor in duct sizing for how much airflow the duct is designed for at a specific allowable friction or noise level. A rectangular duct of a certain width and height can be equated with an equivalent round duct of diameter where both shapes of ducts share the same cross-sectional area. Similarly, a round duct of a specific diameter can be equated with an equivalent rectangular duct of which the combination of width and height gives the same cross-sectional area as that of the round The formula for the computation of rectangular and round duct size: Equating the formulae for rectangular and round duct’s size: A downloadable Excel File for Rectangular to Round / Round to Rectangular Duct Converter is available below. Toward the end of the calculator, the Aspect Ratio of a duct is the Ratio of Width to Height or Height to Width (longer side to shorter side) of a Rectangular Duct. To calculate Total Duct Surface Area based on a duct's dimension and aspect ratio, refer to the page Rectangular Duct Aspect Ratio Calculator. bottom of page
{"url":"https://www.buildingcalculators.com/duct-shape-converter","timestamp":"2024-11-13T02:59:28Z","content_type":"text/html","content_length":"757337","record_id":"<urn:uuid:16350e32-5944-4633-8db5-f2cce8668668>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00539.warc.gz"}
Balanced Equation and Coefficients in Chemical Reactions What is the coefficient for the reactant Nb in the balanced equation Nb + O2 → Nb2O5? In the given reaction Nb + O2 → Nb2O5, the coefficient for the reactant Nb is 4. A coefficient is a number that appears before a chemical formula in a balanced equation. It indicates the number of molecules or formula units involved in the reaction. In this case, the coefficient of 4 for Nb means that there are four atoms of Nb on the reactant side of the equation. To balance a chemical equation, the number of each type of atom must be the same on both sides of the reaction arrow. This ensures that the law of conservation of mass is obeyed, which states that matter cannot be created or destroyed in a chemical reaction. To balance the given reaction Nb + O2 → Nb2O5, the coefficients are adjusted as follows: - Nb: 1 (reactant side) -> 4 (product side) - O2: 2 (reactant side) -> 5 (product side) The balanced equation becomes: 4Nb + 5O2 → 2Nb2O5 Therefore, in the balanced equation, the coefficient for the reactant Nb is 4. Understanding Balanced Equations and Coefficients Balanced Equation: A balanced equation is a chemical equation where the number of each type of atom is the same on both sides of the reaction arrow. This means that the total mass of the reactants is equal to the total mass of the products. Balancing an equation involves adjusting the coefficients of the formulas to achieve this balance. Role of Coefficients: Coefficients are the numbers placed in front of chemical formulas to balance the equation. They indicate the relative amounts of each reactant and product involved in the reaction. In the given example, the coefficient of 4 for Nb indicates that four moles of Nb are required for the reaction with five moles of O2 to produce two moles of Nb2O5. How to Balance Equations: 1. Start by writing down the unbalanced equation with the reactants on the left side and the products on the right side. 2. Determine the number of each type of atom on both sides of the equation. 3. Adjust the coefficients of the formulas to balance the number of atoms on both sides. 4. Verify that the equation is balanced by counting the atoms again. By following these steps, chemical equations can be balanced to ensure that the law of conservation of mass is upheld in the reaction. This process is essential for understanding the stoichiometry of reactions and predicting the outcomes of chemical reactions based on the quantities of reactants and products involved. For further information on balanced reactions and coefficients, you can visit reputable educational platforms like Chemistry textbooks or online resources dedicated to chemical reactions and equations.
{"url":"https://tutdenver.com/chemistry/balanced-equation-and-coefficients-in-chemical-reactions.html","timestamp":"2024-11-08T20:13:29Z","content_type":"text/html","content_length":"23203","record_id":"<urn:uuid:a9123f57-c9b8-4641-a31f-ac2e11939dbf>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00373.warc.gz"}
The Residue of an Analytic Function at a Point The Residue of an Analytic Function at a Point Consider the annulus $A(z_0, 0, r)$ (where $r > 0$), and suppose that $f$ is analytic on $A$. Then the Laurent series expansion of $f$ on $A(z_0, 0, r)$ is: \quad f(z) = \sum_{n=-\infty}^{\infty} a_n(z - z_0)^n Where for each $n \in \mathbb{Z}$, $\displaystyle{a_n = \frac{1}{2\pi i} \int_{\gamma} \frac{f(w)}{(w - z_0)^{n+1}} \: dw}$ where $\gamma$ is any circle inside $A(z_0, 0, r) = D(z_0, r) \setminus \{ z_0 \}$. By The Deformation Theorem, each $a_n$ can be evaluated by instead considering $\gamma$ to be any positively oriented closed piecewise smooth curve inside $A(z_0, 0, r)$. Now look specifically at the term $a_{-1}$. We have that: \quad a_{-1} = \frac{1}{2\pi i} \int_{\gamma} \frac{f(w)}{(w - z_0)^{-1 + 1}} \: dw = \frac{1}{2\pi i} \int_{\gamma} f(w) \: dw For any positively oriented closed piecewise smooth curve $\gamma$ in $A(z_0, 0, r)$. This coefficient is very important and given a name for that reason. Definition: If $f$ is analytic on $A(z_0, 0, r)$ and if $\displaystyle{\sum_{n=-\infty}^{\infty} a_n(z - z_0)^n}$ is the corresponding Laurent series expansion on $A(z_0, 0, r)$ then the Residue of $f$ at $z_0$ is $\mathrm{Res} (f, z_0) = a_{-1}$. For example, suppose that we want to evaluate the following integral: \quad \int_{\gamma} z \sin \left ( \frac{1}{z} \right ) \: dz Where $\gamma$ is any positively oriented closed piecewise smooth curve in $\mathbb{C} \setminus \{ 0 \}$. Note that the function $\displaystyle{f(z) = z \sin \left ( \frac{1}{z} \right )}$ is analytic on $\mathbb{C} \setminus \{ 0 \}$. We've already found the Laurent series expansion for $\displaystyle{\ sin \left ( \frac{1}{z} \right )}$ on the Laurent's Theorem for Analytic Complex Functions page to be: \quad \sin \left ( \frac{1}{z} \right ) = \sum_{n=0}^{\infty} \frac{(-1)^n}{z^{2n+1} (2n+1)!} = \frac{1}{z} - \frac{1}{3! z^3} + \frac{1}{5! z^5} - ... Therefore the Laurent series expansion for $\displaystyle{z \sin \left ( \frac{1}{z} \right )}$ is: \quad z\sin \left ( \frac{1}{z} \right ) = \sum_{n=0}^{\infty} \frac{(-1)^n}{z^{2n} (2n+1)!} = 1 - \frac{1}{3! z^2} + \frac{1}{5! z^4} - ... Note that $\mathrm{Res} (f, 0) = a^{-1}$ is the coefficient attached to $\frac{1}{z}$ in the expansion above. But this means that $\mathrm{Res} (f, 0) = 0$. In other words: \quad \mathrm{Res} (f, 0) = \frac{1}{2\pi i } \int_{\gamma} z \sin \left ( \frac{1}{z} \right ) \: dz = 0 For any positively-oriented closed piecewise smooth curve in $\mathbb{C} \setminus \{ 0 \}$. Hence $\displaystyle{\int_{\gamma} z \sin \left ( \frac{1}{z} \right ) \: dz = 0}$ for any of such $\ We will now state an extremely simple result regarding the residues of an analytic function $f$ at removable singularities $z_0$. Proposition 1: If $z_0$ is a removable singularity of $f$ then $\mathrm{Res} (f, z_0) = 0$. • Proof: By the Theorem on the Laurent's Theorem for Analytic Complex Functions page, if $z_0$ is a removable singularity then $a_n = 0$ for all $n < 0$ in the Laurent series expansion of $f$ on $A (z_0, 0, r)$ ($r > 0$). Therefore $a_{-1} = \mathrm{Res} (f, 0) = 0$. $\blacksquare$
{"url":"http://mathonline.wikidot.com/the-residue-of-an-analytic-function-at-a-point","timestamp":"2024-11-06T09:25:17Z","content_type":"application/xhtml+xml","content_length":"18812","record_id":"<urn:uuid:781f69cc-6a54-4332-88d0-e4199e08d326>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00848.warc.gz"}
3DIC SoC Test Benchmarks (Floor Plans) 3DSIC SOC Test Benchmarks (Floor Plans) The format for floor plan files for each die is based on the floor plan file format used in the Hotspot thermal simulator. In each floor plan file, each line corresponds to a single module. The exact format is as follows, with all dimensions being in meters. Module_name module_width module_height bottom_left bottom_y Embedded Modules Since the original ITC'02 Benchmarks consisted of embedded modules, embedded modules are also included in this set of benchmarks. The naming convention for a standard module is... For an embedded module, the format is For the purposes of testing, a test for a given module is presumed to dissipate power in all embedded modules, and applying tests for tests for embedded modules is impossible while the parent module is being tested. Each die in each benchmark has a floor plan area of 10 by 10 mm^2. Although this is a fair size for modern integrated circuits, this floor plan area may not fit the need of every research project. Some projects may desire smaller floor plan areas, while other may required several different floor plan areas. Researchers are encouraged to scale the dies to the sizes required by their study. Doing so is a simple process of multiplying/dividing the values of each floor plan file by a given constant. Last Updated: Nov 10, 2021
{"url":"https://www.eng.auburn.edu/ece/people/skm0049/vlsi/3dsocbench/flp.html","timestamp":"2024-11-06T10:55:54Z","content_type":"text/html","content_length":"9925","record_id":"<urn:uuid:54dda660-3507-468b-9381-a99e231b3729>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00657.warc.gz"}
Counting sums and differences Posted on 17 August 2006 by Brian Hayes Take a set of integers, say {0, 2, 5, 8, 11}, and write down all the numbers that can be represented as sums of two elements drawn from this set. For our example the answer is {0, 2, 4, 5, 7, 8, 10, 11, 13, 16, 19, 22}. Now construct the corresponding set of pairwise differences: {–11, –9, –8, –6, –5, –3, –2, 0, 2, 3, 5, 6, 8, 9, 11}. Note that there are only 12 distinct sums but 15 differences. Let’s try it with another set, {5, 8, 17, 26, 41}: sums: {10, 13, 16, 22, 25, 31, 34, 43, 46, 49, 52, 58, 67, 82} diffs: {–36, –33, –24, –21, –18, –15, –12, –9, –3, 0, 3, 9, 12, 15, 18, 21, 24, 33, 36} Again the differences outnumber the sums, this time by a margin of 19 to 14. It’s not hard to see why differences tend to be more numerous: Addition is commutative but subtraction isn’t. Thus the sums 5+8 and 8+5 both yield the single result 13, whereas 5–8 and 8–5 produce two distinct differences, –3 and +3. It is rumored that someone other than John Horton Conway once conjectured that the number of sums formed in this way never exceeds the number of differences. But the conjecture is false. A counterexample is the set {0, 2, 3, 4, 7, 11, 12, 14}, which has 26 distinct pairwise sums but only 25 differences: sums: {0, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 21, 22, 23, 24, 25, 26, 28} diffs: {–14, –12, –11, –10, –9, –8, –7, –5, –4, –3, –2, –1, 0, 1, 2, 3, 4, 5, 7, 8, 9, 10, 11, 12, 14} Melvyn B. Nathanson of Lehman College—the Bronx campus of the City University of New York—has lately called attention to such anomalous sets of integers, which he identifies by the abbreviation MSTD (more sums than differences). He has lots of questions. Why do such sets exist? Where are they found? How many are there? What is their structure? This past April Nathanson discussed MSTD sets in a talk titled “Problems in Additive Number Theory” at the University of Montreal; the talk is available on the arXiv as math.NT/0604340. In June Nathanson delivered a follow-up talk, “Sets with More Sums than Differences,” at the SIAM Conference on Discrete Mathematics in Victoria, British Columbia; that paper was released last week on the arXiv as math.NT/0608148. Meanwhile Kevin O’Bryant of the College of Staten Island (another CUNY unit) has addressed somewhat different aspects of the MSTD problem in a paper titled “Many Sets Have More Sums than Differences” (math.NT/0608131). The appeal of a problem like this one is that it seems to get a lot of mileage out of the simplest mathematics: adding, subtracting and counting—operations that most of us know how to do. As the papers of Nathanson and Bryant show, the math is not all so trivial, and yet an amateur like me can still hope to have some fun with these questions. I’ve been toying with MSTD sets for the past week or so. First, a few preliminaries. A set, as defined for this discussion, is a collection of items without duplicates. For example, {1, 3} is a two-element set. There are four ways to add these elements in pairs—1+1, 1+3, 3+1 and 3+3—but two of those summations yield the same result, and so the “sumset” has just three elements, {1, 4, 6}. The order of the elements in a set has no significance, but for convenience I’ll always list them in ascending sequence. All the sets discussed here are finite. For a clearer understanding of set sums and differences, it helps to write down an example in matrix format: A set of n elements has n^2 pairwise sums and differences, but they are not all distinct. In the case of the sums, the matrix is symmetric, and so everything above the main diagonal is duplicated below it. Thus the maximum possible number of unique sums is given by counting the entries along the diagonal plus those in either the upper or the lower triangle, but not both. This number is equal to n(n+1)/2; for the example given here, where n=4, the maximum size of the sumset is 10. However, for the specific set shown here the maximum is not attained, because of a few “coincidences”: 4 appears as both 0+4 and 2+2, and 6 arises as both 2+4 and 3+3. Thus the sumset has only eight elements: {0, 2, 3, 4, 5, 6, 7, 8}. The difference matrix is antisymmetric, and so the elements of both the upper and the lower triangles need to be counted. On the other hand, the diagonal of the difference matrix is all zeros. The maximum number of distinct differences is n(n–1)+1. Again, though, the maximum is not reached in this example; coincidences reduce the size of the “diffset” from 13 to 9. Still, the differences outnumber the sums, and so {0, 2, 3, 4} is not an MSTD set. The smallest possible sumset or diffset has 2n–1 elements. (You might want to work out why.) It’s easy to construct a set that attains this minimum: Just choose elements in an arithmetic progression. For example, the set {0, 2, 4, 6} has the sumset {0, 2, 4, 6, 8, 10, 12} and the diffset {–6, –4, –2, 0, 2, 4, 6}, both of size 7. It’s also straightforward to build a set that generates the largest possible sumset and diffset; the trick is to make each element more than twice as large as the next smaller element, as in the set {0, 1, 3, 7}. This structure eliminates all coincidental duplicates in both the sums and the differences. In the search for MSTD sets we don’t have to look at all possible sets of integers. It turns out that both the number of sums and the number of differences generated by a set remain unchanged if you add a constant to each member of the set. Likewise, multiplying each element by a constant also leaves the number of sums and differences invariant. In other words, you can transform each element x into ax+b (an affine transformation) without altering the size of the sumset or the diffset. This property is important because it means we can represent any MSTD set in a canonical form. We can shift it along the number line until its smallest element is 0, and we can shrink it down to its smallest possible span of integers by dividing out any factors that are common to all the nonzero elements. For example, the set {5, 8, 17, 26, 41} mentioned above has the canonical form {0, 1, 4, 7, 12}. Both of these sets have 19 differences and 14 sums. Now for some questions. What is the smallest MSTD set? The smallest known set is the example {0, 2, 3, 4, 7, 11, 12, 14} that I have already introduced. It has eight elements, and in the canonical representation the largest element is 14. There is one other known eight-element example, {0, 2, 3, 7, 10, 11, 12, 14}. A few seconds of computing is all it takes to show that there is no smaller eight-element MSTD set. But is there an example with fewer than eight elements? I don’t think so, but I can’t prove it. A brute-force search rules out any MSTD set with seven or fewer elements where the largest element is less than 81. Imre J. Ruzsa of the Mathematical Institute of the Hungarian Academy of Sciences claims that any MSTD set must have at least seven elements. How many MSTD sets are there? In one sense, this is a very easy question. Given the affine invariance of MSTD sets, if just one such set exists, then we can generate infinitely many of them by translation and dilation. But most people would agree that these are all just copies of the same set in disguise. What we really want to know is the number of MSTD sets when they are all reduced to canonical form. Nathanson has shown that in this scheme of reckoning, too, the number of sets is infinite. He gives a formula for generating infinite families of MSTD sets. Starting with the example {0, 2, 3, 4, 7, 11, 12, 14}, the formula yields a sequence of progressively larger sets that Nathanson proves must all have more sums than differences: {0, 2, 3, 4, 7, 11, 15, 16, 18}, then {0, 2, 3, 4, 7, 11, 15, 19, 20, 22}, then {0, 2, 3, 4, 7, 11, 15, 19, 23, 24, 26}, and so on. The question then arises, are all MSTD sets members of such infinite families, or are there also “sporadic” MSTD How rare are MSTD sets? Having already established that there are infinitely many MSTD sets, it might seem that they can’t be very rare, but that’s not necessarily true. The primes are also infinite, yet they are vanishingly rare. Take the ratio of the number of primes less than N to the number of integers less than N; as N goes to infinity, the ratio goes to zero. MSTD sets could be rare in a similar sense. O’Bryant shows that within a certain infinite series of integer sets, the probability of finding an MSTD set is greater than zero. Does that result hold also for integer sets in By how much can the number of sums exceed the number of differences? Let’s define the discrepancy Δ of a set as the number of differences minus the number of sums. For all “ordinary” sets, Δ ≥ 0. For MSTD sets, Δ is negative. In all the MSTD examples I’ve shown so far, Δ = –1, or in other words the number of sums is just 1 greater than the number of differences. I’ve been able to find lots of sets with Δ = –2; the smallest, with 11 elements, is {0, 1, 2, 4, 5, 9, 12, 13, 14, 16, 17}, which has 35 sums and only 33 differences. I’ve also stumbled upon a few sets with Δ = –3, such as the 16-element set {0, 1, 2, 4, 5, 9, 12, 13, 14, 16, 17, 21, 24, 25, 26, 28}, which has 56 sums and 53 differences. I’m guessing there is no lower bound on Δ. (Update: As I was smoothing out the last details of this report, I discovered a 1973 paper by Sherman K. Stein. He proves that the ratio of the number of differences to the number of sums can be made arbitrarily large or small.) How are MSTD sets distributed among all integer sets? The trouble with even asking a question like this is that we can’t look at all integer sets. If we try to answer the question statistically, by choosing a representative sample of sets, then we have to wade into the messy business of deciding what sort of sample is representative. For lack of a better idea, I have tried the following approach. Assuming that all sets are in canonical form, I classify them by two parameters: n, the number of elements, and m, the largest element. Then for any given values of n and m I can either examine all sets (if n and m are small enough) or generate a random sample. Note that m cannot be less than n–1. If m = n–1, then there is only one possible set, namely the counting sequence 0, 1, 2,…, m. As m increases for a fixed value of n, so does the number of possible sets and the average gap between the elements. Now we can ask how Δ varies as a function of m and n. In the case of m = n –1, the answer can be given unequivocally: Δ = 0, because the set is an arithmetic progression, and both the number of sums and the number of differences is 2n–1. When m is much greater than n, Δ is almost surely positive. The reason is that the elements of the set are widely dispersed, and a coincidence in which different pairs of elements yield the same sum or the same difference is unlikely. In almost all sets with m >> n, the number of sums takes its maximum value n(n+1)/2 and the number of differences is also at its maximum, n(n–1)+1. If you do the subtraction, you’ll find that Δ increases in proportion to n^2. The interesting region would appear to be the middle ground, where m is not too much larger than n. Here is a graph showing the frequency of Δ values for n = 10 and all values of m between 9 and 27. As m increases, the distribution grows wider and shifts to the right—toward more-positive values of Δ. (Note that the graph is based on a complete enumeration of all sets, not on a statistical MSTD sets are so rare that in the graph above their frequency is indistinguishable from zero. Below we look exclusively at the frequency of MSTD sets as a function of m for three values of n. It appears that MSTD sets are most common (or maybe one should say least rare) at the smallest value of m where such sets first appear. But this impression is somewhat misleading. As the graph below reveals, the absolute number of MSTD sets increases as a function of m (or, in the case of n = 8, remains constant). Although it’s true that the proportion of MSTD sets declines, that’s only because the total number of integer sets grows exponentially with m. Thus if you are wandering aimlessly in the universe of integer sets, the number of sets with more sums than differences increases with m, but the probability that a randomly encountered member of the population has the MSTD property falls to zero as m increases. Finally, where did this problem come from? The locus classicus appears to be an unpublished 1967 edition of a list of unsolved problems compiled by Hallard T. Croft of the University of Cambridge. Other authors, citing the Croft list, attribute the conjecture that differences always outnumber sums to John Horton Conway. Nathanson writes, however: “I asked Conway about this at the Logic Conference in Memory of Stanley Tennenbaum at the CUNY Graduate Center on April 7, 2006. He said that he had actually found a counterexample to the conjecture, and that this is recorded in unpublished notes of Croft.” The mention of Croft refers to the same 1967 list that others cite in attributing the conjecture to Conway. I have not seen this document; if anyone can send me a copy, I would be most grateful. Here are a few more slightly less obscure references: Marica, John. 1969. On a conjecture of Conway. Canadian Mathematical Bulletin 12:233–234. Ruzsa, Imre Z. 1984. Sets of sums and differences. In Séminaire de Théorie des Nombres, Paris, 1982–83, pp. 267–273. Boston: Birkhäuser. Ruzsa, I. Z. 1992. On the number of sums and differences. Acta Mathematica Hungarica 59:439–447. Stein, Sherman K. 1973. The cardinalities of A+A and A–A. Canadian Mathematical Bulletin 16:343–345. This entry was posted in mathematics, problems and puzzles.
{"url":"http://bit-player.org/2006/counting-sums-and-differences","timestamp":"2024-11-10T06:05:41Z","content_type":"text/html","content_length":"45631","record_id":"<urn:uuid:df773bda-ac36-4d2c-a5aa-05cf054169c4>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00029.warc.gz"}
Andreas Wächter We propose a new method for linear second-order cone programs. It is based on the sequential quadratic programming framework for nonlinear programming. In contrast to interior point methods, it can capitalize on the warm-start capabilities of active-set quadratic programming subproblem solvers and achieve a local quadratic rate of convergence. In order to overcome the non-differentiability … Read more Exploiting Prior Function Evaluations in Derivative-Free Optimization A derivative-free optimization (DFO) algorithm is presented. The distinguishing feature of the algorithm is that it allows for the use of function values that have been made available through prior runs of a DFO algorithm for solving prior related optimization problems. Applications in which sequences of related optimization problems are solved such that the proposed … Read more Solving Chance-Constrained Problems via a Smooth Sample-Based Nonlinear Approximation We introduce a new method for solving nonlinear continuous optimization problems with chance constraints. Our method is based on a reformulation of the probabilistic constraint as a quantile function. The quantile function is approximated via a differentiable sample average approximation. We provide theoretical statistical guarantees of the approximation, and illustrate empirically that the reformulation can … Read more Sample Average Approximation with Adaptive Importance Sampling We study sample average approximations under adaptive importance sampling in which the sample densities may depend on previous random samples. Based on a generic uniform law of large numbers, we establish uniform convergence of the sample average approximation to the true function. We obtain convergence of the optimal value and optimal solutions of the sample … Read more A Limited-Memory Quasi-Newton Algorithm for Bound-Constrained Nonsmooth Optimization We consider the problem of minimizing a continuous function that may be nonsmooth and nonconvex, subject to bound constraints. We propose an algorithm that uses the L-BFGS quasi-Newton approximation of the problem’s curvature together with a variant of the weak Wolfe line search. The key ingredient of the method is an active-set selection strategy that … Read more Uniform Convergence of Sample Average Approximation with Adaptive Importance Sampling We study sample average approximations under adaptive importance sampling. Based on a Banach-space-valued martingale strong law of large numbers, we establish uniform convergence of the sample average approximation to the function being approximated. In the optimization context, we obtain convergence of the optimal value and optimal solutions of the sample average approximation. Citation Technical Report … Read more A Derivative-Free Trust-Region Algorithm for the Optimization of Functions Smoothed via Gaussian Convolution Using Adaptive Multiple Importance Sampling In this paper we consider the optimization of a functional $F$ defined as the co nvolution of a function $f$ with a Gaussian kernel. We propose this type of objective function for the optimization of the output of complex computational simulations, which often present some form of deterministic noise and need to be smoothed for … Read more An Active-Set Quadratic Programming Method Based On Sequential Hot-Starts A new method for solving sequences of quadratic programs (QPs) is presented. For each new QP in the sequence, the method utilizes hot-starts that employ information computed by an active-set QP solver during the solution of the first QP. This avoids the computation and factorization of the full matrices for all but the first problem … Read more An Inexact Sequential Quadratic Optimization Algorithm for Nonlinear Optimization We propose a sequential quadratic optimization method for solving nonlinear optimization problems with equality and inequality constraints. The novel feature of the algorithm is that, during each iteration, the primal-dual search direction is allowed to be an inexact solution of a given quadratic optimization subproblem. We present a set of generic, loose conditions that the … Read more More Branch-and-Bound Experiments in Convex Nonlinear Integer Programming Branch-and-Bound (B&B) is perhaps the most fundamental algorithm for the global solution of convex Mixed-Integer Nonlinear Programming (MINLP) problems. It is well-known that carrying out branching in a non-simplistic manner can greatly enhance the practicality of B&B in the context of Mixed-Integer Linear Programming (MILP). No detailed study of branching has heretofore been carried out … Read
{"url":"https://optimization-online.org/author/andreasw-2/","timestamp":"2024-11-14T08:51:28Z","content_type":"text/html","content_length":"108168","record_id":"<urn:uuid:f59fb9b6-7b7b-47f0-8aca-4db6874e248e>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00693.warc.gz"}
Population Variance Learning Statistics with Python Learning Statistics with Python Variance measures how much the numbers deviate from the mean. Examine the distribution of salaries in our dataset. The formula varies for the sample and population. In this chapter, we will calculate population variance. Population variance is calculated by summing the squares of the differences between each data point and the population mean, and then dividing by the number of elements in the population. Thanks for your feedback!
{"url":"https://codefinity.com/courses/v2/a849660e-ddfa-4033-80a6-94a1b7772e23/50bab4f9-b308-4ff0-b5d2-0989afa4ddd7/3eb0f679-f54c-43e5-86e2-8950009faae6","timestamp":"2024-11-06T15:35:22Z","content_type":"text/html","content_length":"320679","record_id":"<urn:uuid:e87ad58f-6004-4d70-bf91-edcdf5eb140e>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00237.warc.gz"}
Survival curves are traditionally plotted forward from time 0, but since the true starting time is not known as a part of the data, the survfit routine does not include a time 0 value in the resulting object. Someone might look at cumulative mortgage defaults versus calendar year, for instance, with the `time' value a Date object. The plotted curve probably should not start at 0 = 1970-01-01. Due to this uncertainty, it was decided not to include a "time 0" as part of a survfit object. Whether that (1989) decision was wise or foolish, it is now far too late to change it. (We tried it once as a trial, resulting in over 20 errors in the survival test suite. We extrapolate that it might break 1/3 of the other CRAN packages that depend on survival, if made a default.) One problem with this choice is that some functions must choose a starting point, plots and computation of the restricted mean survival time are two primary examples. This utility function is used by plot.survfit and summary.survfit to fill in that gap. The value used for this first time point is the first one below 1. the value of the start.time argument 2. a start.time argument used in the survfit call itself 3. for single state survival □ min(0, time) for Surv(time, status) data □ min(time1) for Surv(time1, time2, status) data 4. for multi state survival □ min(0, time) for Surv(time, event) data, e.g., competing risks □ min(time1) for Surv(time1, time2, event) data, if everyone starts in the same state □ no addition: the timepoint used to estimate p0, the intial prevalence of states, will already be the first point of the curve. See survfit.formula (Remember that negative times are allowed in Surv objects.) This function will add a new time point at the front of each curve, but only if said time point is less than existing points in the curve. If there were a death on day 0, for instance, it will not add a (time=0, survival=1) point. (The question of whether the plotted curve in this case should or should not start with a vertical segment can be debated ad nauseum. It has no effect on the area under the curve (RMST), and the summary for time 0 should report the smaller value.) Likewise if the start.time argument is not prior to a curve's first time point. The resulting object is not currently guarranteed to work with functions that further manipulate a survfit object such as subscripting, aggregation, pseudovalues, etc. (remember the 20 errors). Rather it is intended as a penultimate step, most often when creating a plot.
{"url":"https://www.rdocumentation.org/packages/survival/versions/3.6-4/topics/survfit0","timestamp":"2024-11-09T18:56:46Z","content_type":"text/html","content_length":"64402","record_id":"<urn:uuid:961b3d17-9e55-494e-9ed2-22527e1180a5>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00628.warc.gz"}
How to evaluate a Stock before Investing There exist a multitude of perspectives and approaches reliant on diverse variables and various ideologies, ranging from acquiring undervalued stocks to investing in growth-oriented equities, all the way to aligning investments with global developments. How can one possibly assimilate the full spectrum of market dynamics, spanning shifts in individual stock performance to seismic changes in the global economy, from trends in commodity markets to fluctuations in currency values? The intrinsic worth of a stock, deeply rooted in its fundamental business attributes, frequently diverges from its prevailing market price—despite certain contrary beliefs. A stock's value is an intricate interplay of numerous factors, encompassing the company's sustained profitability potential, its customer base, its financial underpinnings, the broader economic context, political and cultural trends, and its relative positioning within the industry. Grasping this concept constitutes a pivotal step in your journey to crafting a well-rounded stock portfolio. To this end, three primary methodologies come into play: Net Asset Valuation, comparative valuation utilizing multiples, and the Discounted Cash Flow (DCF) valuation approach, widely regarded as the most dependable of the three. Current Valuation Analysis Enterprise Value serves as a practical gauge for assessing a company's present market worth. Its primary application lies in the evaluation of acquisition or merger pricing for a corporation. In contrast to Market Capitalization, this metric offers a more comprehensive perspective by factoring in the entirety of liquid assets, outstanding debt obligations, and complex equity instruments present on the company's balance sheet. In the event of an acquisition, the acquiring company assumes the target company's liabilities while gaining control of all available cash and cash Enterprise Value = Market Cap + Debt - Cash Probability Of Bankruptcy The Probability of Bankruptcy is a metric employed to assess the likelihood of a company facing financial turmoil in the upcoming two years, given existing economic and market conditions. This probability is calculated by adjusting and interpolating the Altman Z Score, accounting for off-balance-sheet items and any missing or unreported public information. All data used in this analysis is extracted from Editas' balance sheet, as well as their cash flow and income statements, as per the most recent filings. This metric provides a relative indication of the company's susceptibility to financial distress. In the context of stocks, it represents the normalized Z-Score value, while for funds and ETFs, it's derived from a multi-factor model developed by Macroaxis. The score is utilized to forecast the likelihood of a company or fund encountering financial difficulties within the following 24 months. Unlike the Z-Score, the Probability of Bankruptcy falls within a range of 0 to 100, reflecting the actual probability that the firm will face financial distress in the next two fiscal years. Probability Of Bankruptcy = Normalized Z-Score Z Score The Altman Z Score stands as one of the most straightforward foundational models for assessing a company's vulnerability to financial failure. This score serves the purpose of predicting the likelihood of a company entering bankruptcy within the subsequent 24 months or two fiscal years, commencing from the date specified in the financial statements used for its computation. Professor Edward Altman devised this model in the late 1960s at New York University, incorporating five essential business ratios, each weighted according to his algorithm. Calculating the Z-Score necessitates knowledge of the company's current working capital, total assets, total liabilities, the most recent retained earnings figure, as well as earnings before interest and taxes (EBIT). Companies boasting Z-Scores exceeding 3.1 are generally regarded as stable and robust, with a low probability of bankruptcy. Scores falling within the range of 1.8 to 3.1 are situated within a "grey area," while scores less than 1 indicate a high probability of financial distress. The Z Score finds widespread utility among financial auditors, accountants, asset managers, loan processors, wealth advisors, and day traders. Over the past quarter-century, numerous financial models employing the Z Score have demonstrated their efficacy in predicting corporate Z Score = Sum Of 5 Factors Book Value Per Share A straightforward method for assessing Book Value per Share involves comparing it to the current stock price. If the Book Value per Share exceeds the prevailing market value of the stock, it suggests that the company may be undervalued. However, it's crucial for investors to recognize that the traditional calculation of Book Value excludes intangible assets like goodwill, intellectual property, trademarks, or brands, making it an incomplete measure for many businesses. To compute Book Value per Share (B/S), one subtracts the company's liabilities from its assets and then divides the result by the total number of outstanding shares. This figure signifies the degree of security associated with each common share after accounting for liabilities. In essence, shareholders can employ this ratio to estimate the potential proceeds from selling their stake in the company in the event of liquidation. Book Value Per Share = Common Equity/Average Shares The Book Value per Share offers valuable insights to shareholders by indicating the worth of their shares in the event of the company's closure and asset liquidation, satisfying any outstanding obligations. This information is particularly crucial when investing in potentially unstable or speculative companies, providing a sense of what one might recover if the company faces insolvency. While investing heavily in such situations is generally ill-advised, there are specific circumstances where it may prove advantageous. Furthermore, Book Value per Share can serve as a tool to gauge potential risks. If the calculated book value stands at $10, and the stock is trading above this book value, it can be used as a reference point for setting a stop-loss level. Prudent risk assessment is a cornerstone of effective portfolio management, as while profits are enticing, safeguarding against risks is equally vital. What is considered to be a high beta? Beta is a metric that quantifies how a stock's price tends to move in relation to a chosen benchmark. Typically, this benchmark is the broader stock market, often represented by the S&P 500, although it can also be an industry-specific index or a collection of companies of similar size. A beta value of less than 1 suggests that the stock exhibits lower volatility than its benchmark, while a value greater than 1 indicates higher volatility, amplifying the benchmark's fluctuations. Conversely, a stock with a negative beta tends to move in the opposite direction of its chosen benchmark. For instance, consider a stock with a beta of 1.3. On average, it moves 1.3 times the magnitude of each movement in the S&P 500. If the S&P 500 rises by 10%, the stock with a beta of 1.3 would typically increase by 13%. Conversely, if the S&P 500 falls by 10%, the stock would decline by 13%. This implies that the stock is characterized by a higher degree of risk and potential reward. In a declining market, the stock tends to suffer more significant losses, but in a rising market, it has the potential to outperform. Calculating beta is a straightforward process, and numerous websites, including Yahoo Finance, provide this information. High beta values are commonly associated with smaller, more speculative companies, such as biotech firms developing innovative treatments or small tech companies with cutting-edge technologies and significant growth potential but limited market share. While beta can be a valuable tool when used in conjunction with other metrics, it's essential to remember that it solely reflects past volatility in relation to an index and does not gauge the safety of an investment. When researching any stock, a comprehensive analysis of the entire business is crucial, focusing on identifying enduring competitive advantages. Financial ratios to help with stock evaluation Ratios provide valuable insights into a company's financial well-being, enabling meaningful comparisons with other firms in the same industry or in relation to the broader market. What is a good Earnings Per Share (EPS)? Earnings per share (EPS) serve as a metric that informs investors about the portion of earnings allocated to each shareholder if the company were to be immediately liquidated. Earnings typically form the foundation for a company's valuation, and investors are inclined to favor increasing earnings. A rising EPS suggests that the company may have more resources available for distribution to shareholders or for reinvestment in its operations. Earnings, also known as net income or net profit, represent the funds remaining after a company settles all its financial obligations. To calculate earnings per share, one simply divides the company's reported earnings by the number of outstanding shares. For instance, if XYZ Corp. has 1 million shares outstanding and has earned $1 million over the past 12 months, its trailing EPS is $1. Earnings per Share = (Net Income-Preferred Dividends)/(End-of-Period Common Shares Outstanding) However, the EPS figure on its own lacks context and significance. To assess a company's earnings in relation to its stock price, most investors utilize the price/earnings (P/E) ratio. What is a Good Price-to-Earnings (P/E) ratio? This formula serves as a tool for assessing the relative valuation of one company's stock compared to another. The price-to-earnings (P/E) ratio is a straightforward metric that divides a company's stock market price by its earnings per share (EPS). This ratio offers insights into how many years it would take for a company to generate sufficient earnings to buy back its own stock. PE ratio = Current share price / Earnings per share The P/E ratio considers the stock price and divides it by the earnings over the past four quarters. For instance, if XYZ Corp., as mentioned earlier, is currently trading at $15 per share, its P/E ratio would be 15. Often referred to as a "multiple," the P/E ratio is frequently compared to the current growth rate of EPS. Returning to our XYZ Corp. example, if we discover that the company achieved a 13% growth in EPS over the past year, a P/E of 15 would indicate that the company is reasonably valued. Furthermore, the P/E ratio can offer insights into the market's expectations for a company's future profit growth. A small, rapidly-growing company may have a high P/E ratio because it generates limited earnings but commands a high stock price. Investors purchasing stocks with high P/E ratios typically anticipate future earnings growth. If the company can sustain robust growth and rapidly increase its earnings, a stock that appears expensive based on P/E can transform into a bargain. Conversely, a low P/E ratio may suggest good value for investors, but it can also indicate skepticism about the company's future performance. Smart investors assess companies based on their future prospects rather than past performance. Stocks with low P/E ratios may signal potential issues on the horizon. If a company has incurred losses in the past year or experienced a decline in EPS, the P/E ratio becomes less reliable, and alternative valuation methods should be considered. One commonly used derivative of the P/E ratio is the P/E and growth ratio (PEG). The PEG ratio, which factors in growth, offers a more comprehensive assessment of a company's price relative to its future earnings growth potential, making it a valuable measure of value. What is a good PEG ratio? The Price/Earnings-to-Growth ratio, commonly referred to as the PEG ratio, is a metric that aids investors in assessing a stock's value by considering factors such as the company's market price, earnings, and future growth prospects. Relying solely on a trailing Price/Earnings (P/E) ratio is akin to driving while fixating on the rearview mirror; the PEG ratio offers a more comprehensive view of whether a stock is overvalued or undervalued. In essence, the PEG ratio provides a quick and effective means to gauge how a company's current stock price relates to both its earnings and its anticipated growth rate. One of its notable advantages is its ability to facilitate comparisons between companies in different industries without delving into the intricacies of their P/E ratios. The PEG ratio extends beyond the trailing P/E ratio by factoring in the furthest estimated rate of growth and then comparing it to the trailing P/E ratio. A company expected to experience significant growth in revenue, earnings, and cash flow is inherently more valuable, assuming all other factors remain equal. Consequently, growth-oriented companies tend to command higher P/E ratios, as investors are willing to pay a premium for the potential for future growth. The PEG Ratio is computed as follows: PEG ratio = PE ratio/EPS Growth For instance, if a company is projected to grow at a rate of 10% annually over the next two years and currently boasts a P/E ratio of 20, its PEG ratio would be 2 (20 trailing P/E / 10% projected EPS growth rate = 2.0 PEG). To calculate the PEG ratio, you need three essential pieces of information: the stock price, earnings per share, and the expected growth rate. Many analysts and investors prefer the PEG ratio over the P/E ratio because it incorporates a firm's future growth prospects. A lower PEG ratio typically suggests that a company is undervalued or fairly priced, with a PEG ratio of 1.0 or lower indicating that a stock is reasonably priced or potentially undervalued. Conversely, a PEG ratio exceeding 1.0 implies that a stock is overvalued. For instance, a company labeled A with a PEG ratio of 1.2 would be considered more "expensive" than company B with a PEG ratio of 1.0. Company C, with a PEG ratio of 0.8, would be considered "fairer" than company B. In essence, investors relying on the PEG ratio seek stocks with a P/E ratio equal to or exceeding the company's expected growth rate. When employing the PEG ratio, the goal is to assess the stock's value while considering earnings growth. Unlike its close relative, the P/E ratio, which is primarily used to determine if a stock is overvalued or undervalued, a lower PEG ratio could indicate potential growth issues. The PEG ratio is especially useful for evaluating companies with growth potential. However, for larger and more established companies, the Year-Ahead P/E and Growth ratio (YPEG) is often preferred. The YPEG employs similar assumptions to the PEG but uses forward earnings estimates instead of trailing earnings. It also considers estimated five-year growth rates, readily available from various financial sources. For example, if a company has a forward P/E ratio of 10, and analysts anticipate a 20% growth rate over the next five years, the YPEG would be 0.5. Price-to-sales ratio (P/S) The price-to-sales ratio, calculated by dividing a company's market capitalization by its revenue, focuses solely on revenue and does not take into account profits. This metric proves beneficial for evaluating companies that have yet to turn a profit or have minimal profitability. Ideally, the P/S ratio should approximate one. If it falls below one, it is typically considered an excellent Price to Book (P/B) The Price to Book (P/B) ratio serves as a metric for comparing a company's market value to its book value. A high P/B ratio suggests that investors anticipate higher returns on their investments from the company's existing set of assets. The book value, in this context, represents the accounting value of assets minus liabilities. P/B = MV Per Share/BV Per Share The Price to Book ratio finds extensive application in the financial services sector, where assets and liabilities are typically quantified in monetary terms. While a low P/B ratio generally signals that the company may be undervalued, it can also be an indicator that the firm might be facing financial or managerial challenges, necessitating more thorough investigation. Return on equity (ROE) Return on Equity (ROE) stands as a pivotal metric for investors, offering insights into a company's profit growth. To calculate ROE, one divides the company's net income by its shareholders' equity and then multiplies the result by 100. This ratio essentially quantifies the value shareholders would receive if the company were to liquidate its assets instantly. For certain investors, a desirable ROE trend entails annual growth of 10 percent or more, mirroring the performance of the S&P 500. Debt-to-equity ratio (D/E) The debt-to-equity ratio is a financial metric obtained by dividing a company's total liabilities by its total shareholder equity. This ratio provides investors with insights into the extent to which the company relies on debt to finance its operations. A high debt-to-equity ratio signifies a company that heavily relies on borrowing. Whether this ratio is considered excessive or not depends on its comparison to other companies within the same industry. For instance, businesses in the technology sector typically maintain a debt-to-equity ratio of approximately 2, while companies in the financial sector may exhibit much higher ratios, reaching up to 10 or more. Debt-to-asset ratio (D/A) Comparing a company's debt level to its assets through the debt-to-asset ratio provides valuable insights when assessing its financial health relative to others in the same industry. This comparison enables potential investors to gain a clearer understanding of the investment's risk profile. Excessive debt can serve as a cautionary signal for investors. Current Ratio In general, short-term creditors tend to favor a higher current ratio as it lowers their overall risk exposure. However, from an investor's perspective, a lower current ratio may be preferred because the focus is often on utilizing the company's assets to fuel business growth. While acceptable current ratios can vary across different industries, the widely accepted standard is to have current assets at least twice the value of current liabilities, resulting in a Current Ratio of 2 to 1. Current Ratio = Current Asset/Current Liabilities The Current Ratio is calculated by dividing a company's Current Assets by its Current Liabilities, serving as an indicator of whether the company possesses sufficient cash or liquid assets to cover its short-term obligations in the coming fiscal year. This ratio is commonly used as a measure of a company's liquidity. In conclusion, financial ratios are essential tools for investors, creditors, and analysts alike. They offer valuable insights into a company's financial health, performance, and risk profile. Whether you're assessing a company's profitability, evaluating its ability to meet short-term obligations, or comparing its valuation to industry peers, these ratios provide a quantitative framework for making informed decisions. It's important to remember that the interpretation of these ratios can vary across industries, and it's often best to consider them in conjunction with other qualitative and quantitative factors. By harnessing the power of financial ratios, investors and stakeholders can navigate the complex world of finance with greater confidence and precision. Post a Comment
{"url":"https://www.triangle.finance/2023/09/how-to-evaluate-stock-before-investing.html","timestamp":"2024-11-07T06:17:51Z","content_type":"application/xhtml+xml","content_length":"166299","record_id":"<urn:uuid:71181a5f-12e5-473e-ad79-cbd3a9c3d7bf>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00058.warc.gz"}
Private approximation of search problems Many approximation algorithms have been presented in the last decades for NP-hard search problems. The focus of this paper is on cryptographic applications, where it is desirable to design algorithms which do not leak unnecessary information. Specifically, we are interested in private approximation algorithms - efficient approximation algorithms whose output does not leak information not implied by the optimal solutions to the search problems. Privacy requirements add constraints on the approximation algorithms; in particular, known approximation algorithms usually leak a lot of information. For functions, Feigenbaum et al. [ACM Trans. Algorithms, 2 (2006), pp. 435-472] presented a natural requirement that a private algorithm should not leak information not implied by the original function. Generalizing this requirement to relations is not straightforward as an input may have many different outputs. We present a new definition that captures a minimal privacy requirement from such algorithms; applied to an input instance, it should not leak any information that is not implied by its collection of exact solutions. We argue that our privacy requirement is natural and quite minimal. We show that, even under this minimal definition of privacy, for well-studied problems such as vertex cover and max exact 3SAT, private approximation algorithms are unlikely to exist even for poor approximation ratios. Similarly to Halevi et al. [in Proceedings of the 33rd ACM Symposium on Theory of Computing, ACM, New York, 2001, pp. 550-559], we define a relaxed notion of approximation algorithms that leak (a little) information, and demonstrate the applicability of this notion by showing near optimal approximation algorithms for max exact 3SAT that leak a little • Private approximation • Secure computation • Solution-list algorithms • Vertex cover ASJC Scopus subject areas • General Computer Science • General Mathematics Dive into the research topics of 'Private approximation of search problems'. Together they form a unique fingerprint.
{"url":"https://cris.bgu.ac.il/en/publications/private-approximation-of-search-problems-5","timestamp":"2024-11-04T09:17:12Z","content_type":"text/html","content_length":"61098","record_id":"<urn:uuid:dbcf407e-43bf-45ec-a5c3-c04999801e8d>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00868.warc.gz"}
The water inside the motor is essentially stationary and is accelerated to the velocity at the nozzle expressed by Burnouli's equation: Solving for Velocity: The thrust is expressed by Newton's Second Law: The Mass of the fluid flowing out of the nozzle in time dt is Where A is the nozzle area. The Momentum (mass x velocity) is, therefore If we neglect the relatively small velocity of the water inside the motor, we can say that the momentum is gained entirely within time by substituting for velocity from Eq. 1 above, or
{"url":"http://www.water-rockets.com/article.pl?2,0","timestamp":"2024-11-07T20:17:34Z","content_type":"text/html","content_length":"8929","record_id":"<urn:uuid:8429612a-17ec-4144-8e33-2fc9be17a7c1>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00874.warc.gz"}
wu :: forums - Angles around the circle wu :: forums medium (Moderators: Eigenray, towr, SMQ, Icarus, ThudnBlunder, Grimbal, william wu) « Previous topic | Next topic » Pages: 1 Reply Notify of replies Send Topic Print Author Topic: Angles around the circle (Read 562 times) jollytall Angles around the circle Senior « on: Jan 4^th, 2015, 2:24pm » Quote Modify There is a given ABCD quadrangle in a circle (A, B, C, D are on the perimeter. We extend AB after B and AD after D. On the AB we select B', so that BB'=BC and on AD we select D', so that DD'=DC. Now we take the middle point of the B'D' sector, called C'. What is the BC'D angle? Posts: 585 towr Re: Angles around the circle wu::riddles Moderator « Reply #1 on: Jan 4^th, 2015, 6:54pm » Quote Modify Neat, it's always 90 degrees. But I haven't put more effort into it than playing around in cinderella, so no proof yet. Some people are average, some are just mean. Posts: 13730 Wikipedia, Google, Mathworld, Integer sequence DB rloginunix Re: Angles around the circle Uberpuzzler « Reply #2 on: Jan 4^th, 2015, 5:13pm » Quote Modify Beautiful problem, jollytall, and long time no hear. According to a sample Geogebra construction it must always be exactly ninety degrees. Preliminary idea: prove it for squares first, then rectangles, then inscribed trapezoids and then for an arbitrary cyclic quadrilateral. Posts: 1029 rloginunix Re: Angles around the circle Uberpuzzler « Reply #3 on: Jan 6^th, 2015, 9:32pm » Quote Modify Found one solution. It will take me a while to type it all in (hopefully by this weekend) but for now the key idea: the two circles used to obtain B' and D' intersect at two points. One point is obviously C, call the other one M. I proved that B'MD' is also exactly 90 degrees. From there the proof that BC'D is 90 degrees follows. Posts: 1029 jollytall Re: Angles around the circle Senior Riddler « Reply #4 on: Jan 6^th, 2015, 11:58pm » Quote Modify Let me try to write it down (though the solution is yours): BAD+MB'B+MB'D'+MD'B'+MD'D=180 (triangle) D'MB'+MD'B'+MB'D'=180 (triangle) BAD+BB'M+DD'M=B'MD' (difference of the above two) DMD'=DD'M and BMB'=BB'M (D and B are the middles of the two circles) Gender: BAD+BMB'+DMD'=B'MD' (replacing in the above equation the last two) Posts: 585 BMD=BCD (symmetry) BCD=180-BAD (inscribed quadriteral) BMD=BMB'+B'MD'+DMD' (adding up three parts of the angle) Adding up this and the one few lines above 90=B'MD' (this is your first statement) From here we know MC'=B'C' (Thales) Then MC'B'B is a deltoid, i.e. BC' is right angle to B'M (crossing in E). Similarly DC' is also right angle to D'M (crossing in F). So in the quadrangle of MEC'F three angles are 90, consequently the fourth is also. jollytall Re: Angles around the circle Senior « Reply #5 on: Jan 7^th, 2015, 12:03am » Quote Modify FYI: My less successful prove tried to use vector algebra, having four vectors pointing from the middle of the circle to the four corners. From there both C' and the middle point of BD (call it G) can be expressed and the length of BD is twice as the length of the GC'. It got into an ugly 10 page set of equation, but I got lost in it and was lazy to work it through... Posts: 585 rloginunix Re: Angles around the circle Uberpuzzler « Reply #6 on: Jan 7^th, 2015, 10:18am » Quote Modify You got it. Thank you for typing it in. After a few false starts I realized that if it's the current problem I can't solve then there exists a simpler problem I can't solve. Find it. Repeat. Until a problem can be solved, reverse the steps back to the original problem, etc. Hence, my earlier post. One thing I would like to add. In GeoGebra you can manipulate a number of properties for points, straight lines, circles. One such property is called "Show Trace". If turned on for, say, a point then any movement of this "traced" point is recorded visually by GeoGebra in the form of (usually) some curve. So I decided to trace the point C'. Posts: 1029 Look at what happens when we fix A, C, D but move B around: The bottom left-most edge of the snowman, the point X, corresponds to the state when B == A and we approached A moving clockwise. If we stop here and move B away from A counterclockwise then the smaller part of the snowman is drawn. Its waist or the cusp, Y, (where I suspect there is no derivative) corresponds to the state when B == C. As we keep moving B (still counterclockwise) towards D the larger part of the snowman is drawn. As we pass D and move towards A the snowman sweeps through D'. When B == A and we keep moving (counterclockwise) then the trace curve breaks at Z and C' jumps abruptly from Z to X. If we fix A, B, C and move D around a similar shape emerges: Not sure what is the meaning of this. May be SWF will come up with another awesome tessellation or explain it in 3-D with spheres, cubes and cylinders. I can just remark that in 2-D it is a linkage that transforms a circular motion into almost figure eight motion. Its parameters can be calibrated via choosing an appropriate shape of the cyclic quadrilateral. If only I could spell, "waste" became "waist". « Last Edit: Jan 7^th, 2015, 10:39am by rloginunix » rloginunix Re: Angles around the circle Uberpuzzler « Reply #7 on: Jan 13^th, 2015, 7:59pm » Quote Modify While searching for an inversion-based solution I came up with this small more symmetrical variant instead. It is based on Euclid's B6.P2 - a parallel to a triangle's base cuts its sides in the same proportion and, conversely, if a line cuts triangle's sides in the same proportion then it is parallel to its base. Here in triangle EB'D' B cuts EB' in half (construction) and C' cuts B'D' in half (given). Hence, BC' is parallel to ED'. In triangle FB'D' D cuts FD' in half (construction). Hence, DC' is parallel to MB': Posts: 1029 But we already proved that the angle B'MD' is right. From B3.P31 (in a circle the angle in the semicircle is right) it follows that the angle B'ME = 90 degrees. Hence, the points E, M, D' are collinear, points B', F, M are collinear, points E, B', D' indeed form a triangle and the above B6.P2 idea holds. Pages: 1 Reply Notify of replies Send Topic Print « Previous topic | Next topic »
{"url":"https://www.ocf.berkeley.edu/~wwu/cgi-bin/yabb/YaBB.cgi?board=riddles_medium;action=display;num=1420381494","timestamp":"2024-11-08T23:53:51Z","content_type":"text/html","content_length":"51360","record_id":"<urn:uuid:355f986e-1406-40d2-901a-ebcc8fd79dd0>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00548.warc.gz"}
∫x31[logxx]2dx=a. x31(logx)+x+c b. 31(logx)3+c c. 3log(logx)... | Filo Question asked by Filo student Not the question you're searching for? + Ask your question Filo tutor solution Learn from their 1-to-1 discussion with Filo tutors. Generate FREE solution for this question from our expert tutors in next 60 seconds Don't let anything interrupt your homework or exam prep with world’s only instant-tutoring, available 24x7 Found 8 tutors discussing this question Discuss this question LIVE 7 mins ago Practice more questions on Integration View more Students who ask this question also asked View more Question Text a. b. c. d. Updated On Apr 23, 2024 Topic Integration Subject Mathematics Class Class 12
{"url":"https://askfilo.com/user-question-answers-mathematics/a-b-c-d-3130313238323837","timestamp":"2024-11-04T00:40:42Z","content_type":"text/html","content_length":"238716","record_id":"<urn:uuid:4b0adfa6-cc14-4df1-8688-92fba90672cf>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00494.warc.gz"}
Limits (An Introduction) - A Plus Topper Limits (An Introduction) Limit of a function Let y = f(x) be a function of x. If at x = a, f(x) takes indeterminate form, then we consider the values of the function which are very near to ‘a’. If these values tend to a definite unique number as x tends to ‘a’, then the unique number so obtained is called the limit of f(x) at x = a and we write it as Left hand and right hand limit Consider the values of the functions at the points which are very near to a on the left of a. If these values tend to a definite unique number as x tends to a, then the unique number so obtained is called left-hand limit of f(x) at x = a and symbolically we write it as Similarly we can define right-hand limit of f(x) at x = a which is expressed as Method for finding L.H.L. and R.H.L. (i) For finding right hand limit (R.H.L.) of the function, we write x + h in place of x, while for left hand limit (L.H.L.) we write x – h in place of x. (ii) Then we replace x by ‘a’ in the function so obtained. (iii) Lastly we find limit h → 0. Existence of limit Fundamental theorems on limits Limits Problems with Solutions
{"url":"https://www.aplustopper.com/limits-introduction/","timestamp":"2024-11-11T14:47:44Z","content_type":"text/html","content_length":"48108","record_id":"<urn:uuid:93f92caa-d329-471e-95f5-29a96cbfcdb1>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00324.warc.gz"}
How is damage calculated? Hey guys, I have a problem understanding how damage is calculated in this game. Let’s say my naked char (no gear, no auras, devotion grant some minor things) put on a mace, here are the screenshots: Why the heck do I get 71-132 value? How is it calculated? I cannot understand it. Hi cel3b0rn and welcome to the forums. Your question is a fairly complex one if you want to find out how much damage an enemy will take, but I’ll take a stab at it. Damage output can be basically understood as flat damage multiplied by damage modifiers. Your scepter gives a base damage of 18-33 fire damage, your searing ember gives 2-4 fire damage, for a total of 20-37 fire damage. This is then modified by your fire damage modifier of 90% and your spirit modifier of 167% for a total of +257%. 257% of 20-37 is 51-95 approximately so you add 51-95 to your original 20-35 for a total of 71-130. There is some rounding in that value so the leftovers probably account for the few extra missing points. Once you have your damage output, you can multiply that value by your attacks per second for a rough DPS value (in this case ~126-223 fire damage). However, enemy resists and defenses often modify the ACTUAL damage you deal. Even if you deal 10000 damage per hit, if your enemy has 80% resistance to that damage type, you’re only dealing 2000 actual damage per hit. Hope that helped clarify!
{"url":"https://forums.crateentertainment.com/t/how-is-damage-calculated/39053","timestamp":"2024-11-04T08:19:36Z","content_type":"text/html","content_length":"17471","record_id":"<urn:uuid:eda8e7a3-a314-4ae6-8ce0-8993955b331e>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00581.warc.gz"}
Trading Rules on Stock Markets Using Genetic Network Programming with Sarsa Learning Yan Chen, Shingo Mabu, Kaoru Shimada, and Kotaro Hirasawa Graduate school of Information, Production and Systems, Waseda University, 2-7 Hibikino, Wakamatsu-ku, Kitakyushu, Fukuoka October 23, 2007 March 11, 2008 July 20, 2008 genetic network programming, reinforcement learning, stock trading model, technical index, candlestick chart In this paper, the Genetic Network Programming (GNP) for creating trading rules on stocks is described. GNP is an evolutionary computation, which represents its solutions using graph structures and has some useful features inherently. It has been clarified that GNP works well especially in dynamic environments since GNP can create quite compact programs and has an implicit memory function. In this paper, GNP is applied to creating a stock trading model. There are three important points: The first important point is to combine GNP with Sarsa Learning which is one of the reinforcement learning algorithms. Evolution-based methods evolve their programs after task execution because they must calculate fitness values, while reinforcement learning can change programs during task execution, therefore the programs can be created efficiently. The second important point is that GNP uses candlestick chart and selects appropriate technical indices to judge the timing of the buying and selling stocks. The third important point is that sub-nodes are used in each node to determine appropriate actions (buying/selling) and to select appropriate stock price information depending on the situation. In the simulations, the trading model is trained using the stock prices of 16 brands in 2001, 2002 and 2003. Then the generalization ability is tested using the stock prices in 2004. From the simulation results, it is clarified that the trading rules of the proposed method obtain much higher profits than Buy&Hold method and its effectiveness has been confirmed. Cite this article as: Y. Chen, S. Mabu, K. Shimada, and K. Hirasawa, “Trading Rules on Stock Markets Using Genetic Network Programming with Sarsa Learning,” J. Adv. Comput. Intell. Intell. Inform., Vol.12 No.4, pp. 383-392, 2008. Data files: 1. [1] S. Mabu, K. Hirasawa, and J. Hu, “A graph-based evolutionary algorithm: Genetic network programming and its extension using reinforcement learning,” Evolutionary Computation, MIT Press, Vol.15, No.3, pp. 369-398, 2007. 2. [2] T. Eguchi, K. Hirasawa, J. Hu, and N. Ota, “Study of evolutionary multiagent models based on symbiosis,” IEEE Trans. Syst., Man and Cybern. B, Vol.36, No.1, pp. 179-193, 2006. 3. [3] J. H. Holland, “Adaptation in Natural and Artificial Systems,” Ann Arbor, University of Michigan Press, 1975. 4. [4] D. E. Goldberg, “Genetic Algorithm in search, optimization and machine learning,” Addison-Wesley, 1989. 5. [5] J. R. Koza, “Genetic Programming, on the programming of computers by means of natural selection,” Cambridge, Mass., MIT Press, 1992. 6. [6] J. R. Koza, “Genetic Programming II, Automatic Discovery of Reusable Programs,” Cambridge, Mass., MIT Press, 1994. 7. [7] R. S. Sutton and A. G. Barto, “Reinforcement Learning -An Introduction,” Cambridge, Massachusetts, London, England, MIT Press, 1998. 8. [8] N. Baba, N. Inoue, and Y. Yanjun, “ Utilization of soft computing techniques for constructing reliable decision support systems for dealing stocks,” in Proc. of Int. Joint Conf. on Neural Networks, 2002. 9. [9] J.-Y. Potvin, P. Soriano, and M. Vallee, “Generating trading rules on the stock markets with genetic programming,” Computers & Operations Research, Vol.31, pp. 1033-1047, 2004. 10. [10] K. J. Oh, T. Y. Kim, S.-H. Min, and H. Y. Lee, “Portfolio algorithm based on portfolio beta using genetic algorithm,” Expert Systems with Application, Vol.30, pp. 527-534, 2006. 11. [11] S. Mabu, H. Hatakeyama, M. T. Thu, K. Hirasawa, and J. Hu, “Genetic Network Programming with Reinforcement Learning and Its Application to Making Mobile Robot Behavior,” IEEJ Trans. EIS, Vol.126, No.8, pp. 1009-1015, 2006. 12. [12] K. H. Lee and G. S. Jo, “Expert system for predicting stock market timing using a candlestick chart,” Expert Systems with Applications, Vol.16, pp. 357-364, 1999. 13. [13] Y. Izumi, T. Yamaguchi, S. Mabu, K. Hirasawa, and J. Hu, “Trading Rules on the Stock Market using Genetic Network Programming with Candlestick Chart,” 2006 IEEE Congress on Evolutionary Computation, Sheraton Vancouver Wall Centre Hotel, Vancouver, BC, Canada, pp. 8531-8536, July 16-21, 2006. 14. [14] S. Mabu, Y. Izumi, K. Hirasawa, and T. Furuzuki, “Trading Rules on Stock Markets Using Genetic Network Progamming with Candle Chart,” T. SICE, Vol.43, No.4, pp. 317-322, 2007 (in 15. [15] Y. Izumi, K. Hirasawa, and T. Furuzuki, “Trading Rules on the Stock Markets Using Genetic Network Progamming with Importance Index,” T. SICE, Vol.42, No.5, pp. 559-566, 2006 (in 16. [16] V. Dhar, “A Comparison of GLOWER and Other Machine Learning Methods for Investment Decision Making,” Springer Berlin Press, pp.208-220, 2001. 17. [17] S. Duerson, F. S. Khan, V. Kovalev, and A. H. Malik, “Reinforcement Learning in Online Stock Trading Systems,” 2005. 18. [18] S. Pafka, M. Potters, and I. Kondor, “Exponential Weighting and Random-Matrix-Theory-Based Filtering of Financial Covariance Matrices for Portfolio Optimization,” arXiv:cond-mat/ 0402573v1, 2004. Quantitative Finance, (to be appeared). 19. [19] N. Basalto, R. Bellotti, F. De Carlo, P. Facchi, and S. Pascazio, “Clustering stock market companies via chaotic map synchronization,” Physica A, 345, p. 196, arXiv:cond-mat/0404497v1, 20. [20] W. Huang, Y. Nakamori, and S. Y. Wang, “Forecasting stock market movement direction with support vector machine Source,” Computers and Operations Research, Vol.32, Issue 10, pp. 2513-2522, 2005. 21. [21] M. B. Porecha, P. K. Panigrahi, J. C. Parikh, C. M. Kishtawal, and S. Basu, “Forecasting non-stationary financial time series through genetic algorithm,” arXiv:nlin/0507037v1, 2005. 22. [22] M. H. Jensen, A. Johansen, F. Petroni, and I. Simonsen, “Inverse Statistics in the Foreign Exchange Market,” Physica A, 340, p. 678, arXiv:cond-mat/0402591v2, 2004. 23. [23] T. Mikosch and C. Starica, “Stock Market Risk-Return Inference. An Unconditional Non-parametric Approach,” SSRN Working Paper Series, 2004. 24. [24] H. Iba and T. Sasaki, “Using Genetic Programming to Predict Financial Data,” Proc. of the Congress of Evolutionary Computation, pp. 244-251, 2001.
{"url":"https://www.fujipress.jp/jaciii/jc/jacii001200040383/","timestamp":"2024-11-11T17:08:07Z","content_type":"text/html","content_length":"50089","record_id":"<urn:uuid:1d3c5ad2-7200-4ddd-9783-456c05f274ef>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00509.warc.gz"}
Cylinder drag at low Reynolds number From terminal velocity measurements of solitary finite length circular cylinders settling in a large rectangular tank of fluid, drag values for infinite length circular cylinders translating at constant speed through an unbounded fluid were obtained. The drag was determined as the effective weight of the cylinder in the liquid and the Reynolds number was determined from the terminal velocity measurements by the time of flight method. The effects of finite length and finite boundaries were accounted for by empirical corrections. The effects of the container endwalls and bottom were observed to be negligibly small. The sidewalls caused a variation in the terminal velocity proportional to the reciprocal of the wall separation squared. When the influence of the side walls was removed, it was found that the reciprocal of the terminal velocity was linearly proportional to the reciprocal of the length. Ph.D. Thesis Pub Date: August 1977 □ Cylindrical Bodies; □ Drag; □ Reynolds Number; □ Numerical Analysis; □ Terminal Velocity; □ Time Of Flight Spectrometers; □ Fluid Mechanics and Heat Transfer
{"url":"https://ui.adsabs.harvard.edu/abs/1977PhDT.......108H/abstract","timestamp":"2024-11-06T19:19:46Z","content_type":"text/html","content_length":"34622","record_id":"<urn:uuid:6fa89642-1a40-45cb-89ff-75b173840bbc>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00388.warc.gz"}
How Far Are Ten Blocks?How Far Are Ten Blocks? How Far Are Ten Blocks? If you’ve ever gone to a city, you might have noticed that different cities have different city blocks. These differ in size and are determined by the location of the city and the roadways that run through it. For example, one city may only have eight blocks equal to a mile, while another might have 15 city blocks per mile. This can be confusing when traveling across different cities. Distance Between Two Points A block is a measure of distance that can be used to determine the distance between two points in a row. It is usually used when measuring distance in cities and can help you understand how far you must go to reach a specific location. The block size can vary depending on your city, but it is normally about 300 to 325 feet long. For example, you can use this length to estimate the distance between your house and a mall or other large retail outlet. However, it is important to know that the distance between a street and a block does not typically count the extra 50 to 100 feet that would make up the road you are crossing. Therefore, it is a good idea to think of these distances in terms of miles instead of blocks when you are planning your trip. One of the easiest ways to calculate how far a particular place is from another is to use a distance calculator. Simply enter the coordinates of the two points you want to find the distance between and click the button that shows how far it is between them. Once you have the distance, you can look up the coordinates on a map to see where you are located. This can be useful if you have many addresses you need to map out. If you are trying to map out your entire neighborhood, it can be difficult to figure out the distance between all your points. For this reason, it may be helpful to use a program such as BatchGeo to do the calculations for you. The program creates a symmetric matrix that stores the distances between each pair of points. Each entry in this matrix contains the metric dist(u=XA[i], v=XB[j]). The metric can be a string, such as ‘braycurtis,’ or an array of strings representing various distance metrics (such as ‘city block’). You can also give an optional weight vector for metrics that support weights. This is especially useful when calculating the distance between two points close together but not necessarily the same size. Distance Between Two Streets The distance from one place to another is often the most important number for most people. But how do you measure that? The answer is surprisingly easy: just use your smartphone. In fact, a special algorithm built into Apple Maps will allow you to know how long it will take to get from A to B or wherever your current location is on the map. But that’s not all: it also allows you to customize your route based on traffic conditions, weather, and personal preferences. A well-designed, high-quality tool is the best way to learn this important equation. This guide will help you select the best – a collection of reputable apps and websites, all free to download and use on your phone or tablet. In math and technology, there is an awe-inspiring number of tools on the market to aid your students as they grow in their understanding of numbers and arithmetic. These include base ten blocks, the latest version of the classic wooden block, and other interactive educational games that will have your kids put their learning to good use. Distance Between Two Avenues New York City has a huge grid of streets and avenues that run north, south, east, and west. Each Street and avenue are numbered sequentially, as shown in the diagram below. The distance between two avenues in a row is called the block length. The average block length is 750 feet, but some blocks are longer than others. For example, the block between the First and Second avenues is 650 feet, and the blocks between the Third and Sixth are 920 feet. If you’re planning to walk in Manhattan, it’s important to remember that the distance between two avenues can vary quite a bit. In fact, many of the avenues were not built according to the original plan, allowing them to be spaced a bit farther apart than they are today. You’ll probably have to take a long way around when you’re in Manhattan. So, if you plan to walk along Fifth Avenue between 53rd and 59th streets, you’ll have to take 20 blocks south (from 53rd to 59th) and ten blocks east (from 59th to 53rd). This contrasts with walking uptown from 1st Street to 6th Street, which is only about a quarter of a mile. How Far Are Ten Blocks? Better Guide When it comes to measuring distance, different units of measurement are used depending on the context. For example, in urban areas, blocks are often used as a unit of measurement to describe distances. This guide will discuss how far ten blocks are and how they can be measured. Part 1: Understanding Blocks As A Unit Of Measurement Blocks are a unit of measurement used in urban areas to describe distances between streets or city blocks. The size of a block can vary depending on the city, but typically, it is defined as the distance between two parallel streets. The length of a block can vary from 100 feet to 900 feet depending on the city and neighborhood. In some cities, blocks are also referred to as “streets.” Part 2: Understanding The Distance Of 10 Blocks The distance of 10 blocks can vary depending on the city and the size of the blocks. For example: • In New York City, the distance of 10 blocks is approximately 0.5 miles or 800 meters. • In Chicago, the distance of 10 blocks is approximately 0.6 miles or 970 meters. • In San Francisco, the distance of 10 blocks is approximately 1 mile or 1.6 kilometers. Part 3: Measuring The Distance Of 10 Blocks Measuring the distance of 10 blocks can be done in a few different ways. Here are some examples • Walking: One of the easiest ways to measure the distance of 10 blocks is to walk. Using a pedometer or a fitness tracker can help you keep track of the distance you cover. • Driving: If you are driving, you can measure the distance using your car’s odometer. Start at one end of the ten blocks and drive to the other, noting the mileage on your odometer. • Online Mapping Tools: Several online mapping tools, such as Google Maps or MapQuest, can help you calculate the distance between two points. Part 4: Other Interesting Facts About Blocks Here Are A Few Other Interesting Facts About Blocks • In some cities, blocks are numbered instead of named. For example, in Manhattan, the blocks are numbered, with each block representing approximately 1/20th of a mile. • In Washington, D.C., blocks are measured in a grid pattern with lettered streets running east-west and numbered streets running north-south. Each block is approximately 400 feet long. • In some cities, blocks can be irregularly shaped. For example, in Boston, blocks are often irregularly shaped due to the city’s history and development. In conclusion, the distance of 10 blocks can vary depending on the city and the size of the blocks. The distance can be measured in several ways, including walking, driving, or online mapping tools. Understanding blocks as a unit of measurement can be helpful when navigating through urban areas and provide insight into a city’s history and development. What is the standard size of a block when we refer to ten blocks distance? A block’s size varies based on location, although it is often estimated as a distance of 100 to 200 metres. Are the blocks measured in terms of length, width or height? Blocks are commonly measured in terms of length, which is the distance between one end of the block and the other. How long does it typically take to walk or drive ten blocks? Walking ten blocks may take 10-20 minutes depending on your pace, whereas driving may take 5-10 minutes depending on traffic and speed limitations. Are the blocks in a straight line, or do they follow a curved path? The course of the blocks varies depending on the form of the city, although it is commonly thought that they follow a straight path. Can the distance of ten blocks vary depending on the location, city, or country? Absolutely, depending on the region, city, or nation, the distance of 10 blocks might vary. Blocks might be substantially smaller in certain places and much larger in others. Is there a convenient way to measure ten blocks without a measuring tool? Counting the number of street junctions you pass is one approach to determine the distance of ten blocks without a measuring gadget. A block is often defined in most cities as the distance between two crossing streets, therefore counting the number of intersections can provide an approximation of the distance travelled. How Far Are Ten Blocks? If you’ve ever gone to a city, you might have noticed that different cities have different city blocks. These differ in size and are determined by the location of the city and the roadways that run through it. For example, one city may only have eight blocks equal to a mile, while another might have 15 city blocks per mile. This can be confusing when traveling across different cities. Distance Between Two Points A block is a measure of distance that can be used to determine the distance between two points in a row. It is usually used when measuring distance in cities and can help you understand how far you must go to reach a specific location. The block size can vary depending on your city, but it is normally about 300 to 325 feet long. For example, you can use this length to estimate the distance between your house and a mall or other large retail outlet. However, it is important to know that the distance between a street and a block does not typically count the extra 50 to 100 feet that would make up the road you are crossing. Therefore, it is a good idea to think of these distances in terms of miles instead of blocks when you are planning your trip. One of the easiest ways to calculate how far a particular place is from another is to use a distance calculator. Simply enter the coordinates of the two points you want to find the distance between and click the button that shows how far it is between them. Once you have the distance, you can look up the coordinates on a map to see where you are located. This can be useful if you have many addresses you need to map out. If you are trying to map out your entire neighborhood, it can be difficult to figure out the distance between all your points. For this reason, it may be helpful to use a program such as BatchGeo to do the calculations for you. The program creates a symmetric matrix that stores the distances between each pair of points. Each entry in this matrix contains the metric dist(u=XA[i], v=XB[j]). The metric can be a string, such as ‘braycurtis,’ or an array of strings representing various distance metrics (such as ‘city block’). You can also give an optional weight vector for metrics that support weights. This is especially useful when calculating the distance between two points close together but not necessarily the same size. Distance Between Two Streets The distance from one place to another is often the most important number for most people. But how do you measure that? The answer is surprisingly easy: just use your smartphone. In fact, a special algorithm built into Apple Maps will allow you to know how long it will take to get from A to B or wherever your current location is on the map. But that’s not all: it also allows you to customize your route based on traffic conditions, weather, and personal preferences. A well-designed, high-quality tool is the best way to learn this important equation. This guide will help you select the best – a collection of reputable apps and websites, all free to download and use on your phone or tablet. In math and technology, there is an awe-inspiring number of tools on the market to aid your students as they grow in their understanding of numbers and arithmetic. These include base ten blocks, the latest version of the classic wooden block, and other interactive educational games that will have your kids put their learning to good use. Distance Between Two Avenues New York City has a huge grid of streets and avenues that run north, south, east, and west. Each Street and avenue are numbered sequentially, as shown in the diagram below. The distance between two avenues in a row is called the block length. The average block length is 750 feet, but some blocks are longer than others. For example, the block between the First and Second avenues is 650 feet, and the blocks between the Third and Sixth are 920 feet. If you’re planning to walk in Manhattan, it’s important to remember that the distance between two avenues can vary quite a bit. In fact, many of the avenues were not built according to the original plan, allowing them to be spaced a bit farther apart than they are today. You’ll probably have to take a long way around when you’re in Manhattan. So, if you plan to walk along Fifth Avenue between 53rd and 59th streets, you’ll have to take 20 blocks south (from 53rd to 59th) and ten blocks east (from 59th to 53rd). This contrasts with walking uptown from 1st Street to 6th Street, which is only about a quarter of a mile. How Far Are Ten Blocks? Better Guide When it comes to measuring distance, different units of measurement are used depending on the context. For example, in urban areas, blocks are often used as a unit of measurement to describe distances. This guide will discuss how far ten blocks are and how they can be measured. Part 1: Understanding Blocks As A Unit Of Measurement Blocks are a unit of measurement used in urban areas to describe distances between streets or city blocks. The size of a block can vary depending on the city, but typically, it is defined as the distance between two parallel streets. The length of a block can vary from 100 feet to 900 feet depending on the city and neighborhood. In some cities, blocks are also referred to as “streets.” Part 2: Understanding The Distance Of 10 Blocks The distance of 10 blocks can vary depending on the city and the size of the blocks. For example: • In New York City, the distance of 10 blocks is approximately 0.5 miles or 800 meters. • In Chicago, the distance of 10 blocks is approximately 0.6 miles or 970 meters. • In San Francisco, the distance of 10 blocks is approximately 1 mile or 1.6 kilometers. Part 3: Measuring The Distance Of 10 Blocks Measuring the distance of 10 blocks can be done in a few different ways. Here are some examples • Walking: One of the easiest ways to measure the distance of 10 blocks is to walk. Using a pedometer or a fitness tracker can help you keep track of the distance you cover. • Driving: If you are driving, you can measure the distance using your car’s odometer. Start at one end of the ten blocks and drive to the other, noting the mileage on your odometer. • Online Mapping Tools: Several online mapping tools, such as Google Maps or MapQuest, can help you calculate the distance between two points. Part 4: Other Interesting Facts About Blocks Here Are A Few Other Interesting Facts About Blocks • In some cities, blocks are numbered instead of named. For example, in Manhattan, the blocks are numbered, with each block representing approximately 1/20th of a mile. • In Washington, D.C., blocks are measured in a grid pattern with lettered streets running east-west and numbered streets running north-south. Each block is approximately 400 feet long. • In some cities, blocks can be irregularly shaped. For example, in Boston, blocks are often irregularly shaped due to the city’s history and development. In conclusion, the distance of 10 blocks can vary depending on the city and the size of the blocks. The distance can be measured in several ways, including walking, driving, or online mapping tools. Understanding blocks as a unit of measurement can be helpful when navigating through urban areas and provide insight into a city’s history and development. What is the standard size of a block when we refer to ten blocks distance? A block’s size varies based on location, although it is often estimated as a distance of 100 to 200 metres. Are the blocks measured in terms of length, width or height? Blocks are commonly measured in terms of length, which is the distance between one end of the block and the other. How long does it typically take to walk or drive ten blocks? Walking ten blocks may take 10-20 minutes depending on your pace, whereas driving may take 5-10 minutes depending on traffic and speed limitations. Are the blocks in a straight line, or do they follow a curved path? The course of the blocks varies depending on the form of the city, although it is commonly thought that they follow a straight path. Can the distance of ten blocks vary depending on the location, city, or country? Absolutely, depending on the region, city, or nation, the distance of 10 blocks might vary. Blocks might be substantially smaller in certain places and much larger in others. Is there a convenient way to measure ten blocks without a measuring tool? Counting the number of street junctions you pass is one approach to determine the distance of ten blocks without a measuring gadget. A block is often defined in most cities as the distance between two crossing streets, therefore counting the number of intersections can provide an approximation of the distance travelled. • Trending • Comments • Latest
{"url":"https://rochaksafar.com/how-far-are-ten-blocks/","timestamp":"2024-11-09T17:18:33Z","content_type":"text/html","content_length":"248735","record_id":"<urn:uuid:d8f19664-c79c-4d79-9cd9-a7d36c8d4609>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00582.warc.gz"}
The ideal programming environment for computational mathematics enjoys the following characteristics: • It must be based on a computer language that allows the user to work quickly and integrate systems effectively. Ideally, the computer language should be portable to all platforms: Windows, Mac OS X, Linux, Unix, Android, and so on. This is key to fostering cooperation among scientists with different resources and accessibilities. It must contain a powerful set of libraries that allow the acquisition, storing, and handling of large datasets in a simple and effective manner. This is central—allowing simulation and the employment of numerical computations at a large scale. • Smooth integration with other computer languages, as well as third-party software. • Besides running the compiled code, the programming environment should allow the possibility of interactive sessions as well as scripting capabilities for quick experimentation. • Different coding paradigms should be supported—imperative, object-oriented, and/or functional coding styles. • It should be an open source software, that allows user access to the raw data code, and allows the user to modify basic algorithms if so desired. With commercial software, the inclusion of the improved algorithms is applied at the discretion of the seller, and it usually comes at a cost of the end user. In the open source universe, the community usually performs these improvements and releases new versions as they are published—at no cost. • The set of applications should not be restricted to mere numerical computations; it should be powerful enough to allow symbolic computations as well. Among the best-known environments for numerical computations used by the scientific community is MATLAB, which is commercial, expensive, and which does not allow any tampering with the code. Maple and Mathematica are more geared towards symbolic computation, although they can match many of the numerical computations from MATLAB. These are, however, also commercial, expensive, and closed to modifications. A decent alternative to MATLAB and based on a similar mathematical engine is the GNU Octave system. Most of the MATLAB code is easily portable to Octave, which is open source. Unfortunately, the accompanying programming environment is not very user friendly, it is also very much restricted to numerical computations. One environment that combines the best of all worlds is Python with the open source libraries NumPy and SciPy for numerical operations. The first property that attracts users to Python is, without a doubt, its code readability. The syntax is extremely clear and expressive. It has the advantage of supporting code written in different paradigms: object oriented, functional, or old school imperative. It allows packing of Python codes and to run them as standalone executable programs through the py2exe, pyinstaller, and cx_Freeze libraries, but it can also be used interactively or as a scripting language. This is a great advantage when developing tools for symbolic computation. Python has therefore been a firm competitor to Maple and Mathematica: the open source mathematics software Sage (System for Algebra and Geometry Experimentation). NumPy is an open source extension to Python that adds support for multidimensional arrays of large sizes. This support allows the desired acquisition, storage, and complex manipulation of data mentioned previously. NumPy alone is a great tool to solve many numerical computations. On top of NumPy, we have yet another open source library, SciPy. This library contains algorithms and mathematical tools to manipulate NumPy objects with very definite scientific and engineering The combination of Python, NumPy, and SciPy (which henceforth are coined as "SciPy" for brevity) has been the environment of choice of many applied mathematicians for years; we work on a daily basis with both pure mathematicians and with hardcore engineers. One of the challenges of this trade is to bring about the scientific production of professionals with different visions, techniques, tools, and software to a single workstation. SciPy is the perfect solution to coordinate computations in a smooth, reliable, and coherent manner. Constantly, we are required to produce scripts with, for example, combinations of experiments written and performed in SciPy itself, C/C++, Fortran, and/or MATLAB. Often, we receive large amounts of data from some signal acquisition devices. From all this heterogeneous material, we employ Python to retrieve and manipulate the data, and once finished with the analysis, to produce high-quality documentation with professional-looking diagrams and visualization aids. SciPy allows performing all these tasks with ease. This is partly because many dedicated software tools easily extend the core features of SciPy. For example, although graphing and plotting are usually taken care of with the Python libraries of matplotlib, there are also other packages available, such as Biggles (http://biggles.sourceforge.net/), Chaco (https://pypi.python.org/pypi/chaco), HippoDraw (https://github.com/plasmodic/hippodraw), MayaVi for 3D rendering (http://mayavi.sourceforge.net/), the Python Imaging Library or PIL (http://pythonware.com/products/pil/), and the online analytics and data visualization tool Plotly (https:/ Interfacing with non-Python packages is also possible. For example, the interaction of SciPy with the R statistical package can be done with RPy (http://rpy.sourceforge.net/rpy2.html). This allows for much more robust data analysis.
{"url":"https://subscription.packtpub.com/book/data/9781783987702/1/ch01lvl1sec02/what-is-scipy","timestamp":"2024-11-02T10:51:09Z","content_type":"text/html","content_length":"249244","record_id":"<urn:uuid:9a84983e-8e04-47ed-a8b6-1a78bc97a06f>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00734.warc.gz"}
Frequency Encoding Let us consider a one-dimensional case, i.e. let the sample be completely homogeneous along the y and z axes. Let the high external field continue to point in the direction of the z axis. Let us excite the system so that the magnetization gets in the x-y plane; after that, let us change the magnitude of the magnetic field so that it changes linearly as a function of position instead of being homogeneous. Let the changed field be Figure 2. shows the draft of the spin echo sequence: time Let us measure the induced signal. Due to the gradient, the magnetization will precess with different frequencies in every position. The measured signal is the sum of the many oscillating signals with different frequencies. These sums can evidently be resolved into their components using the Fourier transform, i.e. the position-dependence of the magnetization can be restored. Let us write the frequency encoding formally as well. The phase will be position and time-dependent because of the gradient: where it is made possible for the gradient to change over time after the second equals sign. Let us introduce a new variable instead of time: By writing the signal using the new variable, an important relation can be discovered: That is, the signal is the Fourier transform of the effective spin density, the introduced quantity
{"url":"http://oftankonyv.reak.bme.hu/tiki-index.php?page=Frequency+Encoding","timestamp":"2024-11-11T16:31:10Z","content_type":"application/xhtml+xml","content_length":"22765","record_id":"<urn:uuid:925b42d0-c54b-46e1-bc48-0bfe3b14f169>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00736.warc.gz"}
Lin.ear th.inking There is an elegant mathematical theory of binary relations. Homogeneous relations are an important subclass of binary relations in which both domains are the same. A homogeneous relation R is a subset of all ordered pairs (x,y) with x and y elements of the domain. This can be thought of as a boolean-valued function R(x,y), which is true if the pair has the relationship and false if not. The restricted structure of homogeneous relations allows describing them by various properties, including: Reflexive - R(x, x) is always true Irreflexive - R(x, x) is never true Symmetric - if R(x, y), then R(y, x) Antisymmetric - if R(x, y) and R(y, x), then x = y Transitive - if R(x, y) and R(y, z), then R(x, z) The Dimensionally Extended 9-Intersection Model (DE-9IM) represents the topological relationship between two geometries. Various useful subsets of spatial relationships are specified by named spatial predicates. These are the basis for spatial querying in many spatial systems including the JTS Topology Suite, GEOS and The spatial predicates are homogeneous binary relations over the Geometry domain They can thus be categorized in terms of the relational properties above. The following table shows the properties of the standard predicates: │Predicate │Reflexive / │ Symmetric / │Transitive│ │ │Irreflexive │Antisymmetric │ │ │ Equals │ R │ S │ T │ │Intersects│ R │ S │ - │ │ Disjoint │ R │ S │ - │ │ Contains │ R │ A │ - │ │ Within │ R │ A │ - │ │ Covers │ R │ A │ T │ │CoveredBy │ R │ A │ T │ │ Crosses │ I │ S │ - │ │ Overlaps │ I │ S │ - │ │ Touches │ I │ S │ - │ • Contains and Within are not Transitive because of the quirk that "Polygons do not contain their Boundary" (explained in this post). A counterexample is a Polygon that contains a LineString lying in its boundary and interior, with the LineString containing a Point that lies in the Polygon boundary. The Polygon does not contain the Point. So Contains(poly, line) = true and Contains(line, pt) = true, but Contains(poly, pt) = false. The predicates Covers and CoveredBy do not have this idiosyncrasy, and thus are transitive. The previous post discussed polygonal coverages and outlined the plan to support them in the JTS Topology Suite. This post presents the first step of the plan: algorithms to validate polygonal coverages. This capability is essential, since coverage algorithms rely on valid input to provide correct results. And as will be seen below, coverage validity is usually not obvious, and cannot be taken for granted. As described previously, a polygonal coverage is a set of polygons which fulfils a specific set of geometric conditions. Specifically, a set of polygons is coverage-valid if and only if it satisfies the following conditions: • Non-Overlapping - polygon interiors do not intersect • Edge-Matched (also called Vector-Clean and Fully-Noded) - the shared boundaries of adjacent polygons has the same set of vertices in both polygons The Non-Overlapping condition ensures that no point is covered by more than one polygon. The Edge-Matched condition ensures that coverage topology is stable under transformations such as reprojection, simplification and precision reduction (since even if vertices are coincident with a line segment in the original dataset, this is very unlikely to be the case when the data is An invalid coverage which violates both (L) Non-Overlapping and (R) Edge-Matched conditions (note the different vertices in the shared boundary of the right-hand pair) Note that these rules allow a polygonal coverage to cover disjoint areas. They also allow internal gaps to occur between polygons. Gaps may be intentional holes, or unwanted narrow gaps caused by mismatched boundaries of otherwise adjacent polygons. The difference is purely one of size. In the same way, unwanted narrow "gores" may occur in valid coverages. Detecting undesirable gaps and gores will be discussed further in a subsequent post. Computing Coverage Validation Coverage validity is a global property of a set of polygons, but it can be evaluated in a local and piecewise fashion. To confirm a coverage is valid, it is sufficient to check every polygon against each adjacent (intersecting) polygon to determine if any of the following invalid situations occur: • Interiors Overlap: □ the polygon linework crosses the boundary of the adjacent polygon □ a polygon vertex lies within the adjacent polygon □ the polygon is a duplicate of the adjacent polygon • Edges do not Match: □ two segments in the boundaries of the polygons intersect and are collinear, but are not equal If neither of these situations are present, then the target polygon is coverage-valid with respect to the adjacent polygon. If all polygons are coverage-valid against every adjacent polygon then the coverage as a whole is valid. For a given polygon it is more efficient to check all adjacent polygons together, since this allows faster checking of valid polygon boundary segments. When validation is used on datasets which are already clean, or mostly so, this improves the overall performance of the algorithm. Evaluating coverage validity in a piecewise way allows the validation process to be parallelized easily, and executed incrementally if required. JTS Coverage Validation Validation of a single coverage polygon is provided by the JTS CoveragePolygonValidator class. If a polygon is coverage-invalid due to one or more of the above situations, the class computes the portion(s) of the polygon boundary which cause the failure(s). This allows the locations and number of invalidities to be determined and visualized. The class CoverageValidator computes coverage-validity for an entire set of polygons. It reports the invalid locations for all polygons which are not coverage-valid (if any). Using spatial indexing makes checking coverage validity quite performant. For example, a coverage containing 91,384 polygons with 10,474,336 vertices took only 6.4 seconds to validate. In this case the coverage is nearly valid, since only one invalid polygon was found. The invalid boundary linework returned by CoverageValidator allows easily visualizing the location of the issue. A polygonal dataset of 91,384 polygons, containing a single coverage-invalid polygon The invalid polygon is a tiny sliver, with a single vertex lying a very small distance inside an adjacent polygon. The discrepancy is only visible using the JTS TestBuilder Reveal Topology mode. The size of the discrepancy is very small. The vertex causing the overlap is only 0.0000000001 units away from being valid: [921] POLYGON(632) [922:4] POLYGON(5) Ring-CW Vert[921:0 514] POINT ( 960703.3910000008 884733.1892000008 ) Ring-CW Vert[922:4:0 3] POINT ( 960703.3910000008 884733.1893000007 ) This illustrates the importance of having fast, robust automated validity checking for polygonal coverages, and providing information about the exact location of errors. Real-world testing With coverage validation now available in JTS, it's interesting to apply it to publicly available datasets which (should) have coverage topology. It is surprising how many contain validity errors. Here are a few examples: Source City of Vancouver Dataset Zoning Districts This dataset contains 1,498 polygons with 57,632 vertices. There are 379 errors identified, which mainly consist of very small discrepancies between vertices of adjacent polygons. Source British Ordnance Survey OpenData Dataset Boundary-Line File unitary_electoral_division_region.shp This dataset contains 1,178 polygons with 2,174,787 vertices. There are 51 errors identified, which mainly consist of slight discrepancies between vertices of adjacent polygons. (Note that this does not include gaps, which are not detected by CoverageValidator. There are about 100 gaps in the dataset as well.) An example of overlapping polygons in the Electoral Division dataset Source Hamburg Open Data Platform Dataset VerwaltungsEinheit (Administrative Units) The dataset (slightly reduced) contains 7 polygons with 18,254 vertices. Coverage validation produces 64 error locations. The errors are generally small vertex discrepancies producing overlaps. Gaps exist as well, but are not detected by the default CoverageValidator usage. An example of overlapping polygons (and a gap) in the VerwaltungsEinheit dataset As always, this code will be ported to GEOS. A further goal is to provide this capability in PostGIS, since there are likely many datasets which could benefit from this checking. The piecewise implementation of the algorithm should mesh well with the nature of SQL query execution. And of course the next logical step is to provide the ability to fix errors detected by coverage validation. This is a research project for the near term. UPDATE: my colleague Paul Ramsey pointed out that he has already ported this code to GEOS. Now for some performance testing! An important concept in spatial data modelling is that of a coverage. A coverage models a two-dimensional region in which every point has a value out of a range (which may be defined over one or a set of attributes). Coverages can be represented in both of the main physical spatial data models: raster and vector. In the raster data model a coverage is represented by a grid of cells with varying values. In the vector data model a coverage is a set of non-overlapping polygons (which usually, but not always, cover a contiguous area). This post is about the vector data coverage model, which is termed (naturally) a polygonal coverage. These are used to model regions which are occupied by discrete sub-regions with varying sets of attribute values. The sub-regions are modelled by simple polygons. The coverage may contain gaps between polygons, and may cover multiple disjoint areas. The essential characteristics are: • polygon interiors do not overlap • the common boundary of adjacent polygons has the same set of vertices in both polygons. There are many types of data which are naturally modelled by polygonal coverages. Classic examples include: • Man-made boundaries □ parcel fabrics □ political jurisdictions • Natural boundaries □ vegetation cover □ land use A polygonal coverage of regions of France Topological and Discrete Polygonal Coverages There are two ways to represent polygonal coverages: as a topological data structure, or as a set of discrete polygons. A coverage topology consists of linked faces, edges and nodes. The edges between two nodes form the shared boundary between two faces. The coverage polygons can be reconstituted from the edges delimiting each face. The discrete polygon representation is simpler, and aligns better with the OGC Simple Features model. It is simply a collection of polygons which satisfy the coverage validity criteria given above. Most common spatial data formats support only a discrete polygon model, and many coverage datasets are provided in this form. However, the lack of inherent topology means that datasets must be carefully constructed to ensure they have valid coverage topology. In fact, many available datasets contain coverage invalidities. A current focus of JTS development is to provide algorithms to detect this situation and provide the locations where the polygons fail to form a valid coverage. Polygonal Coverage Operations Operations which can be performed on polygonal coverages include: • Validation - check that a set of discrete polygons forms a valid coverage • Gap Detection - check if a polygonal coverage contains narrow gaps (using a given distance tolerance) • Cleaning - fix errors such as gaps, overlaps and slivers in a polygonal dataset to ensure that it forms a clean, valid coverage • Simplification - simplify (generalize) polygon boundary linework, ensuring coverage topology is preserved • Precision Reduction - reduce precision of polygon coordinates, ensuring coverage topology is preserved • Union - merge all or portions of the coverage polygons into a single polygon (or multipolygon, if the input contains disjoint regions) • Overlay - compute the intersection of two coverages, producing a coverage of resultant polygons Implementing polygonal coverage operations is a current focus for development in the JTS Topology Suite. Since most operations require a valid coverage as input, the first goal is to provide Coverage Validation. Cleaning and Simplification are priority targets as well. Coverage Union is already available, as is Overlay (in a slightly sub-optimal way). In addition, a Topology data structure will be provided to support the edge-node representation. (Yes, the Topology Suite will provide topology at last!). Stay tuned for further blog posts as functionality is rolled out. As usual, the coverage algorithms developed in JTS will be ported to GEOS, and will thus be available to downstream projects like PostGIS. JTS 1.19 has just been released! There is a great deal of new, improved and fixed functionality in this release - see the GitHub release page or the Version History for full details. This blog has several posts describing new functionality in JTS 1.19: New Functionality Many of these improvements have been ported to GEOS, and will appear in the soon-to-appear version 3.11. In turn this has provided the basis for new and enhanced functions in the next PostGIS release, and will likely be available in other platforms via the many GEOS bindings and applications. The previous post introduced the new ConcaveHullOfPolygons class in the JTS Topology Suite. This allows computing a concave hull which is constrained by a set of polygonal geometries. This supports use cases including: • generalization of groups of polygon • joining polygons • filling gaps between polygons A concave hull of complex polygons The algorithm developed for ConcaveHullOfPolygons is a novel one (as far as I know). It uses several features recently developed for JTS, including a neat trick for constrained triangulation. This post describes the algorithm in detail. The construction of a concave hull for a set of polygons uses the same approach as the existing JTS ConcaveHull implementation. The space to be filled by a concave hull is triangulated with a Delaunay triangulation. Triangles are then "eroded" from the outside of the triangulation, until a criteria for termination is achieved. A useful termination criteria is that of maximum outside edge length, specified as either an absolute length or a fraction of the range of edge lengths. For a concave hull of points, the underlying triangulation is easily obtained via the Delaunay Triangulation of the point set. However, for a concave hull of polygons the triangulation required is for the space between the constraint polygons. A simple Delaunay triangulation of the polygon vertices will not suffice, because the triangulation may not respect the edges of the polygons. Delaunay Triangulation of polygon vertices crosses polygon edges What is needed is a Constrained Delaunay Triangulation, with the edge segments of the polygons as constraints (i.e. the polygon edge segments are present as triangle edges, which ensures that other edges in the triangulation do not cross them). There are several algorithms for Constrained Delaunay Triangulations - but a simpler alternative presented itself. JTS recently added an algorithm for computing Delaunay Triangulations for polygons. This algorithm supports triangulating polygons with holes (via hole joining). So to generate a triangulation of the space between the input polygons, they can be inserted as holes in a larger "frame" polygon. This can be triangulated, and then the frame triangles removed. Given a sufficiently large frame, this leaves the triangulation of the "fill" space between the polygons, out to their convex hull. Triangulation of frame with polygons as holes The triangulation can then be eroded using similar logic to the non-constrained Concave Hull algorithm. The implementations all use the JTS Tri data structure, so it is easy and efficient to share the triangulation model between them. Triangulation after removing frame and eroding triangles The triangles that remain after erosion can be combined with the input polygons to provide the result concave hull. The triangulation and the input polygons form a polygonal coverage, so the union can be computed very efficiently using the JTS CoverageUnion class. If required, the fill area alone can be returned as a result, simply by omitting the input polygons from the union. Concave Hull and Concave Fill A useful option is to compute a "tight" concave hull to the outer boundary of the input polygons. This is easily accomplished by removing triangles which touch only a single polygon. Concave Hull tight to outer edges Concave Hull of complex polygons, tight to outer edges. Like the Concave Hull of Points algorithm, holes are easily supported by allowing erosion of interior triangles. Concave Hull of Polygons, allowing holes The algorithm performance is determined by the cost of the initial polygon triangulation. This is quite efficient, so the overall performance is very good. As mentioned, this seems to be a new approach to this geometric problem. The only comparable implementation I have found is the ArcGIS tool called Aggregate Polygons, which appears to provide similar functionality (including producing a tight outer boundary). But of course algorithm details are not published and the code is not available. It's much better to have an open source implementation, so it can be used in spatial tools like PostGIS, Shapely and QGIS (based on the port to GEOS). Also, this provides the ability to add options and enhanced functionality for use cases which may emerge once this gets some real-world use. A common spatial need is to compute a polygon which contains another set of polygons. There are numerous use cases for this; for example: This post describes a new approach for solving these problems, via an algorithm for computing a concave hull with polygonal constraints. The algorithm builds on recent work on polygon triangulation in JTS, and uses a neat trick which I'll describe in a subsequent post. Approach: Convex Hull The simplest way to compute an area enclosing a set of geometries is to compute their convex hull. But the convex hull is a fairly coarse approximation of the area occupied by the polygons, and in most cases a better representation is required. Here's an example of gap removal between two polygons. Obviously, the convex hull does not provide anything close to the desired result: Approach: Buffer and Unbuffer A popular suggestion is to buffer the polygon set by a distance sufficient to "bridge the gaps", and then "un-buffer" the result inwards by the same (negative) distance. But the buffer computation can "round off" corners, which usually produces a poor match to the input polygons. It also fills in the outer boundary of the original polygons. Approach: Concave Hull of Points A more sophisticated approach is to use a concave hull algorithm. But most (or all?) available concave hull algorithms use points as the input constraints. The vertices of the polygons could be used as the constraint points, but since the polygon boundaries are not respected, the computed hull may cross the polygon edges and hence not cover the polygons. Densifying the polygon boundaries helps, but introduces another problem - the computed hull can extend beyond the outer boundaries of individual polygons. And it introduces new vertices not present in the original data. Solution: Concave Hull of Polygons What is needed is a concave hull algorithm that accepts polygons as constraints, and thus respects their boundaries. The JTS Topology Suite now provides this capability in a class called ConcaveHullOfPolygons (not a cute name, but descriptive). It provides exactly the solution desired for the gap removal example: The Concave Hull of Polygons API Like concave hulls of point sets, concave hulls of polygons form a sequence of hulls, with the amount of concaveness determined by a numeric parameter. ConcaveHullOfPolygons uses the same parameters as the JTS ConcaveHull algorithm. The control parameter determines the maximum line length in the triangulation underlying the hull. This can be specified as an absolute length, or as a ratio between the longest and shortest lines. Further options are: • The computed hull can be kept "tight" to the outer boundaries of the individual polygons. This allows filling gaps between polygons without distorting their original outer boundaries. Otherwise, the concaveness of the outer boundary will be decreased to match the distance parameter specified (which may be desirable in some situations). • Holes can be allowed to be present in the computed hull • Instead of the hull, the fill area between the input polygons can be computed. As usual, this code will be ported to GEOS, and from there it can be exposed in the downstream libraries and projects. Examples of Concave Hulls of Polygons Here are examples of using ConcaveHullOfPolygons for the use cases above: Example: Generalizing Building Groups Using the "tight" option allows following the outer building outlines. Example: Aggregating Block Polygons The concave hull of a set of block polygons for an oceanside suburb. Note how the "tight" option allows the hull to follow the convoluted, fine-grained coastline on the right side. Example: Removing Gaps to Merge Polygons Polygons separated by a narrow gap can be merged by computing their concave hull using a small distance and keeping the boundary tight. Example: Fill Area Between Polygons The "fill area" portion of the hull between two polygons can be computed as a separate polygon. This could be used to provide an "Extend to Meet" construction by unioning the fill polygon with one of the input polygons. It can also be used to determine the "visible boundary", provided by the intersection of the fill polygon with the input polygon(s). The electrons were hardly dry on the JTS Outer and Inner Polygon Hull post when another interesting use case popped up on GIS StackExchange. The question was how to remove aliasing artifacts (AKA " jaggies") from polygons created by vectorizing raster data, with the condition that the result should contain the original polygon. A polygon for Vancouver Island vectorized from a coarse raster dataset. Aliasing artifacts are obvious. This problem is often handled by applying a simplification or smoothing process to the "jaggy" polygon boundary. This works, as long as the process preserves polygonal topology (e.g. such as the JTS TopologyPreservingSimplifier). But generally this output of this process does not contain the input polygon, since the simplification/smoothing can alter the boundary inwards as well as outwards. Simplification using TopologyPreservingSimplifier with distance = 0.1. Artifacts are removed, but the simplified polygon does not fully contain the original. In contrast, the JTS Polygon Outer Hull algorithm is designed to do exactly what is required: it reduces the number of vertices, while guaranteeing that the input polygon is contained in the result. It is essentially a simplification method which also preserves polygonal topology (using an area-based approach similar to the Visvalingham-Whyatt algorithm). Outer Hull using vertex ratio = 0.25. Artifacts are removed, and the original polygon is contained in the hull polygon. Here's a real-world example, taken from the GADM dataset for administrative areas of Germany. The coastline of the state of Mecklenburg-Vorpommern appears to have been derived from a raster, and thus exhibits aliasing artifacts. Computing the outer hull with a fairly conservative parameter eliminates most of the artifacts, and ensures polygonal topology is preserved. A portion of the coastline of Mecklenburg-Vorpommern showing aliasing artifacts. The Outer Hull computed with vertex ratio = 0.25 eliminates most artifacts, and preserves the coastline topology. Future Work A potential issue for using Outer Hull as a smoothing technique is the choice of parameter value controlling the amount of change. The algorithm provides two options: the ratio of reduction in the number of vertices, or the fraction of change in area allowed. Both of these are scale-independent, and reflect natural goals for controlling simplification. But neither relate directly to the goal of removing "stairstep" artifacts along the boundary. This might be better specified via a distance-based parameter. The parameter value could then be determined based on the known artifact size (i.e. the resolution of the underlying grid). Since the algorithm for Outer Hull is quite flexible, this should be feasible to implement. The JTS Topology Suite recently gained the ability to compute concave hulls. The Concave Hull algorithm computes a polygon enclosing a set of points using a parameter to determine the "tightness". However, for polygonal inputs the computed concave hull is built only using the polygon vertices, and so does not always respect the polygon boundaries. This means the concave hull may not contain the input polygon. It would be useful to be able to compute the "outer hull" of a polygon. This is a valid polygon formed by a subset of the vertices of the input polygon which fully contains the input polygon. Vertices can be eliminated as long as the resulting boundary does not self-intersect, and does not cross into the original polygon. An outer hull of a polygon representing Switzerland As with point-set concave hulls, the vertex reduction is controlled by a numeric parameter. This creates a sequence of hulls of increasingly larger area with smaller vertex counts. At an extreme value of the parameter, the outer hull is the same as the convex hull of the input. A sequence of outer hulls of New Zealand's North Island The outer hull concept extends to handle holes and MultiPolygons. In all cases the hull boundaries are constructed so that they do not cross each other, thus ensuring the validity of the result. An outer hull of a MultiPolygon for the coast of Denmark. The hull polygons do not cross. It's also possible to construct inner hulls of polygons, where the constructed hull is fully within the original polygon. An inner hull of Switzerland Inner hulls also support holes and MultiPolygons. At an extreme value of the control parameter, holes become convex hulls, and a polygon shell reduces to a triangle (unless blocked by the presence of An inner hull of a lake with islands. The island holes become convex hulls, and prevent the outer shell from reducing fully to a triangle A hull can provide a significant reduction in the vertex size of a polygon for a minimal change in area. This could allow faster evaluation of spatial predicates, by pre-filtering with smaller hulls of polygons. An outer hull of Brazil provides a 10x reduction in vertex size, with only ~1% change in area. This has been on the JTS To-Do list for a while (I first proposed it back in 2009). At that time it was presented as a way of simplifying polygonal geometry. Of course JTS has had the TopologyPreservingSimplifier for many years. But it doesn't compute a strictly outer hull. Also, it's based on Douglas-Peucker simplification, which isn't ideal for polygons. It seems there's quite a need for this functionality, as shown in these GIS-StackExchange posts (1, 2, 3, 4). There's even existing implementations on Github: rdp-expansion-only and simplipy (both in Python) - but both of these sound like they have some significant issues. Recent JTS R&D (on concave hulls and polygon triangulation) has provided the basis for an effective, performant polygonal concave hull algorithm. This is now released as the PolygonHullSimplifier class in JTS. The PolygonHullSimplifier API Polygon hulls have the following characteristics: • Hulls can be constructed for Polygons and MultiPolygons, including holes. • Hull geometries have the same structure as the input. There is a one-to-one correspondence for elements, shells and holes. • Hulls are valid polygonal geometries. • The hull vertices are a subset of the input vertices. The PolygonHullSimplifier algorithm supports computing both outer and inner hulls. • Outer hulls contain the input geometry. Vertices forming concave corners (convex for holes) are removed. The maximum outer hull is the convex hull(s) of the input polygon(s), with holes reduced to triangles. • Inner hulls are contained within the input geometry. Vertices forming convex corners (concave for holes) are removed. The minimum inner hull is a triangle contained in (each) polygon, with holes expanded to their convex hulls. The number of vertices removed is controlled by a numeric parameter. Two different parameters are provided: • the Vertex Number Fraction specifies the desired result vertex count as a fraction of the number of input vertices. The value 1 produces the original geometry. Smaller values produce simpler hulls. The value 0 produces the maximum outer or minimum inner hull. • the Area Delta Ratio specifies the desired maximum change in the ratio of the result area to the input area. The value 0 produces the original geometry. Larger values produce simpler hulls. Defining the parameters as ratios means they are independent of the size of the input geometry, and thus easier to specify for a range of inputs. Both parameters are targets rather than absolutes; the validity constraint means the result hull may not attain the specified value in some cases. Algorithm Description The algorithm removes vertices via "corner clipping". Corners are triangles formed by three consecutive vertices in a (current) boundary ring of a polygon. Corners are removed when they meet certain criteria. For an outer hull, a corner can be removed if it is concave (for shell rings) or convex (for hole rings). For an inner hull the removable corner orientations are reversed. In both variants, corners are removed only if the triangle they form does not contain other vertices of the (current) boundary rings. This condition prevents self-intersections from occurring within or between rings. This ensures the resulting hull geometry is topologically valid. Detecting triangle-vertex intersections is made performant by maintaining a spatial index on the vertices in the rings. This is supported by an index structure called a VertexSequencePackedRtree. This is a semi-static R-tree built on the list of vertices of each polygon boundary ring. Vertex lists typically have a high degree of spatial coherency, so the constructed R-tree generally provides good space utilization. It provides fast bounding-box search, and supports item removal (allowing the index to stay consistent as ring vertices are removed). Corners that are candidates for removal are kept in a priority queue ordered by area. Corners are removed in order of smallest area first. This minimizes the amount of change for a given vertex count, and produces a better quality result. Removing a corner may create new corners, which are inserted in the priority queue for processing. Corners in the queue may be invalidated if one of the corner side vertices has previously been removed; invalid corners are discarded. This algorithm uses techniques originated for the Ear-Clipping approach used in the JTS PolgyonTriangulator implementation. It also has a similarity to the Visvalingham-Whyatt simplification algorithm. But as far as I know using this approach for computing outer and inner hulls is novel. (After the fact I found a recent paper about a similar construction called a Shortcut Hull [Bonerath et al 2020], but it uses a different approach). Further Work It should be straightforward to use this same approach to implement a variant of Topology-Preserving Simplifier using the corner-area-removal approach (as in Visvalingham-Whyatt simplification). The result would be a simplified, topologically-valid polygonal geometry. The simplification parameter limits the number of result vertices, or the net change in area. The resulting shape would be a good approximation of the input, but is not necessarily be either wholly inside or outside. As the title of this blog indicates, I'm a fan of linearity. But sometimes a little non-linearity makes things more interesting. A convenient way to generate non-linear curved lines is to use Bezier Curves. Bezier Curves are curves defined by polynomials. Bezier curves can be defined for polynomials of any degree, but a popular choice is to use cubic Bezier curves defined by polynomials of degree 3. These are relatively easy to implement, visually pleasing, and versatile since they can model ogee or sigmoid ("S"-shaped) curves. A single cubic Bezier curve is specified by four points: two endpoints forming the baseline, and two control points. The curve shape lies within the quadrilateral convex hull of these points. Note: the images in this post are created using the JTS TestBuilder. Cubic Bezier Curves, showing endpoints and control points A sequence of Bezier curves can be chained together to form a curved path of any required shape. There are several ways to join composite Bezier curves. The simplest join constraint is C0-continuity: the curves touch at endpoints, but the join may be a sharp angle. C1-continuity (differentiable) makes the join smooth. This requires the control vectors at a join point to be collinear and opposite. If the control vectors are of different lengths there will be a different radius of curvature on either side. The most visually appealing join is provided by C2-continuity (twice-differentiable), where the curvature is identical on both sides of the join. To provide this the control vectors at a vertex must be collinear, opposite and have the same length. Bezier Curve with C2-continuity A recent addition to the JTS Topology Suite is the CubicBezierCurve class, which supports constructing Bezier Curves from LineStrings and Polygons. JTS only supports representing linear geometries, so curves must be approximated by sequences of line segments. (The buffer algorithm uses this technique to approximate the circular arcs required by round joins.) Bezier Curve approximated by line segments Bezier curves can be generated on both lines and polygons (including holes): Bezier Curve on a polygon There are two ways of specifying the control points needed to define the curve: Alpha (Curvedness) Parameter The easiest way to define the shape of a curve is via the parameter alpha, which indicates the "curvedness". This value is used to automatically generate the control points at each vertex of the baseline. A value of 1 creates a roughly circular curve at right angles. Higher values of alpha make the result more curved; lower values (down to 0) make the curve flatter. Alpha is used to determine the length of the control vectors at each vertex. The control vectors on either side of the vertex are collinear and of equal length, which provides C2-continuity. The angle of the control vectors is perpendicular to the bisector of the vertex angle, to make the curve symmetrical. Bezier Curve for alpha = 1 Explicit Control Points Alternatively, the Bezier curve control points can be provided explicitly. This gives complete control over the shape of the generated curve. Two control points are required for each line segment of the baseline geometry, in the same order. A convenient way to provide these is as a LineString (or MultiLineString for composite geometries) containing the required number of vertices. Bezier Curve defined by control points, with C2 continuity When using this approach only C0-continuity is provided automatically. The caller must enforce C1 or C2-continuity via suitable positioning of the control points. Bezier Curve defined by control points showing C0 and C1 continuity Further Ideas • Allow specifying the number of vertices used to approximate each curve • Add a function to return the constructed control vectors (e.g. for display and analysis purposes) • Make specifying explicit control points easier by generating C2-continuous control vectors from a single control point at each vertex A common spatial need is to find a polygon that accurately represents a set of points. The convex hull of the points often does not provide this, since it can enclose large areas which contain no points. What is required is a non-convex hull, often termed the concave hull. The Convex Hull and a Concave Hull of a point set A concave hull is generally considered to have some or all of the following properties: • The hull is a simply connected polygon • It contains all the input points • The vertices in the hull polygon boundary are all input points • The hull may or may not contain holes For a typical point set there are many polygons which meet these criteria, with varying degrees of concaveness. Concave Hull algorithms provide a numeric parameter which controls the amount of concaveness in the result. The nature of this parameter is particularly important, since it affects the ease-of-use in practical scenarios. Ideally it has the following characteristics: • Simple geometric basis: this allows the user to understand the effect of the parameter and aids in determining an effective value • Scale-free (dimensionless): this allows a single parameter value to be effective on varying sizes of geometry, which is essential for batch or automated processing • Local (as opposed to global): A local property (such as edge length) gives the algorithm latitude to determine the concave shape of the points. A global property (such as area) over-constrains the possible result. • Monotonic area: larger (or smaller) values produce a sequence of more concave areas • Monotonic containment :the sequence of hulls produced are topologically nested • Convex-bounded: an extremal value produces the convex hull This is a well-studied problem, and many different approaches have been proposed. Some notable ones are: Of these, Delaunay Erosion (Chi-shapes) offers the best set of features. It is straightforward to code and is performant. It uses the control parameter of Edge Length Ratio, a fraction of the difference between the longest and shortest edges in the underlying Delaunay triangulation. This is easy to reason about, since it is scale-free and corresponds to a simple property of the point set (that of distance between vertices). It can be extended to support holes. And it has a track record of use, notably in Oracle Spatial. ConcaveHull generated by Delaunay Erosion with Edge Length Ratio = 0.3 Recently the Park-Oh algorithm has become popular, thanks to a high-quality implementation in Concaveman project (which has spawned numerous ports). However, it has some drawbacks. It can't support holes (and likely not disconnected regions and discarding outlier points). As the paper points out and experiment confirms, it produces rougher outlines than the Delaunay-based algorithm. Finally, the control parameter for Delaunay Erosion has a simpler geometrical basis which makes it easier to use. Given these considerations, the new JTS ConcaveHull algorithm utilizes Delaunay Erosion. The algorithm ensures that the computed hull is simply connected, and contains all the input points. The Edge Length Ratio is used as the control parameter. A value of 1 produces the convex hull; 0 produces a concave hull of minimal size. Alternatively the maximum edge length can be specified directly. This allows alternative strategies to determine an appropriate length value; for instance, another possibility is to use a fraction of the longest edge in the Minimum Spanning Tree of the input points. The recently-added Tri data structure provides a convenient basis for the implementation,. It operates as follows: 1. The Delaunay Triangulation of the input points is computed and represented as a set of of Tris 2. The Tris on the border of the triangulation are inserted in a priority queue, sorted by longest boundary edge 3. While the queue is non-empty, the head Tri is popped from the queue. It is removed from the triangulation if it does not disconnect the area. Insert new border Tris into the queue if they have a boundary edge length than the target length 4. The Tris left in the triangulation form the area of the Concave Hull Thanks to the efficiency of the JTS Delaunay Triangulation the implementation is quite performant, approaching the performance of a Java port of Concaveman. Concave Hull of Ukraine dataset; Edge Length Ratio = 0.1 Optionally holes can be allowed to be present in the hull polygon (while maintaining a simply connected result). A classic demonstration of this is to generate hulls for text font glyphs: This algorithm is in the process of being ported to GEOS. The intention is to use it to enhance the PostGIS ST_ConcaveHull function, which has known issues and has proven difficult to use. Further Ideas • Disconnected Result - It is straightforward to extend the algorithm to allow a disconnected result (i.e. a MultiPolygon). This could be provided as an option. • Outlier Points - It is also straightforward to support discarding "outlier" points. • Polygon Concave Hull - computing a concave "outer hull" for a polygon can be used to simplify the polygon while guaranteeing the hull contains the original polygon. Additionally, an "inner hull" can be computed which is fully contained in the original. The implementation of a Polygon Concave Hull algorithm is well under way and will be released in JTS soon.
{"url":"https://lin-ear-th-inking.blogspot.com/2022/","timestamp":"2024-11-12T13:34:18Z","content_type":"text/html","content_length":"222391","record_id":"<urn:uuid:0d3000cb-0fcb-490e-9d92-f2be524f429c>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00009.warc.gz"}
Implementation of an implicit scheme for solving gas dynamics equations using the discontinuous Galerkin method on unstructured grids CitationMasyagin V. F. ''Implementation of an implicit scheme for solving gas dynamics equations using the discontinuous Galerkin method on unstructured grids'' [Electronic resource]. Proceedings of the International Scientific Youth School-Seminar "Mathematical Modeling, Numerical Methods and Software complexes" named after E.V. Voskresensky (Saransk, October 8-11, 2020). Saransk: SVMO Publ, 2020. - pp. 99-101. Available at: https://conf.svmo.ru/files/2020/papers/paper29.pdf. - Date of access: 14.11.2024.
{"url":"https://conf.svmo.ru/en/archive/article?id=274","timestamp":"2024-11-14T01:48:11Z","content_type":"text/html","content_length":"10905","record_id":"<urn:uuid:2489fbc5-0d05-4ee6-8ef4-815a919ebd1f>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00440.warc.gz"}
Properties of number 24218 24218 has 4 divisors (see below), whose sum is σ = 36330. Its totient is φ = 12108. The previous prime is 24203. The next prime is 24223. The reversal of 24218 is 81242. It is a semiprime because it is the product of two primes. It can be written as a sum of positive squares in only one way, i.e., 12769 + 11449 = 113^2 + 107^2 . It is a Curzon number. It is a plaindrome in base 5. It is a junction number, because it is equal to n+sod(n) for n = 24196 and 24205. It is an unprimeable number. It is a polite number, since it can be written as a sum of consecutive naturals, namely, 6053 + ... + 6056. 2^24218 is an apocalyptic number. 24218 is a deficient number, since it is larger than the sum of its proper divisors (12112). 24218 is a wasteful number, since it uses less digits than its factorization. 24218 is an odious number, because the sum of its binary digits is odd. The sum of its prime factors is 12111. The product of its digits is 128, while the sum is 17. The square root of 24218 is about 155.6213352982. The cubic root of 24218 is about 28.9320645218. Subtracting from 24218 its product of digits (128), we obtain a triangular number (24090 = T[219]). It can be divided in two parts, 242 and 18, that multiplied together give a square (4356 = 66^2). The spelling of 24218 in words is "twenty-four thousand, two hundred eighteen".
{"url":"https://www.numbersaplenty.com/24218","timestamp":"2024-11-14T18:57:13Z","content_type":"text/html","content_length":"8002","record_id":"<urn:uuid:77be9e2a-ec0d-4cdb-a3ee-6b2109475931>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00451.warc.gz"}
The procedure to find the square roots of the numbers like 11,15,13 by division method Yahoo visitors came to this page yesterday by using these keywords : │nonlinear ODE Runge Kutta matlab │fraction calculator with variable │ │least common multiple practice test │solve lcd alegebra │ │free 4th grade algebra worksheets │shortcuts for solving math matrix problems │ │online free paper grader │coordinate plane printouts │ │radical expressions calculator │slope worksheet │ │6th grade decimal worksheet dividing │ti calculators that convert hex to binary │ │how to find the square root │Density worksheets for middle school │ │online factoring │balancing equation answers │ │Y=MX+B CONVERT TO AX+BY=C FORM CALCULATOR │calculator free fraction key │ │Algebra formula Chart │pre algebra homework │ │rules for adding negatives and positives │adding fraction integers worksheet │ │emulator for TI-84 │ │ │java code sum equation │TO THE POWER OF A FRACTION │ │Using one graph to solve equations │making 8 grade algebra worksheets problems │ │answers for chapter test b worksheet │algebra help graphing linear equalities with three variables │ │example of poems test online │adding and subtracing unlike fractions │ │expanding & factoring worksheets │simplying exponents │ │ti 83 cube root │liner graph example │ │differential solving non linear systems │Algebra Solutions │ │descartes chart (and) college algebra │Online Chemistry Equation solver │ │ELEMENTARY MATH TRIVIA │multiplacation squares │ │prentice hall algebra 1,layout examples │addison wesley chemistry textbook download │ │converting second order equations to first order systems │simplifying program ti 83 │ │heaviside function ti 89 │algebra 1 questions and answers │ │how to order fractions from least to greatest │Hard maths expressions │ │entering accounting formulas in ti-83 │"relating graphs to events" math │ │solving algebraic equations in matlab │solve second order ode differential equation equal to constant │ │how to solve algebraic equations │Algebra Word Problem Answers Free │ │answers to homework the text book college algebra concepts and models fourth edition│algebra problem help onlin │ │online graphing calculator conic equations │per-algebra 3 glencoe answer textbook │ │how to factor trinomials with tic tac toe boxes │simplifying square roots worksheet │ │convert functions to vertex form │MATHEMATICS/INTEGERS PRACTISE PAPER │ │difficult fractions equations subtracting │y intercept online calculator │ │8th grade algebra one step equations distributive properties │algerbra 1 answer │ │factorise machine │algebra 2 help sites │ │integers review worksheet │Help passing Intermediate Algebra │ │cubed root │answers for mcdougal littell algebra textbook │ │cubed root tI 83 │grade 10 math and substitution eliminatin │ │base conversion program TI84 │elementary algebra tutorial │ │free aptitude test papers │quadratic equations square method powerpoint │ │calculator divide polynomial │exponent + worksheet │ │solving like terms equations with fraction in front ( ) │simplifying expressions calculator │ │Tips Do Algebra Factoring │free algebraic expressions calculator │ │algebra factor and expansion for 7 grade │long divison equation solver │ │find lcd worksheet │graphing cubes radical │ │matlab spline programs interpolation │software │ │calculator for radical equations │real number system │ │understanding decimels │"trigonometry project" story project │ │free physics problem solved eBooks │info on multiplying integers │ │linear algebra help free │algerbra notes │ │lattice multiplication lesson plan │WHOLE NUMBER WITH FRACTION TO DECIMAL │ │year 7 + freee maths worksheets │5th grade sceince compound engine │ │the history of the square root function │find slope calculator │ │ti 86 calculate log base 2 │saxon algebra 2 answers key │ │linear equations with three variables │solving basic equation worksheets │ │algebra equations variables and addition worksheets │5th grade algebra │ │proportion worksheet │Cheats implicit differentiation │ │using the distributive property to solve algebraic equations │algebra year 10 │ │algebra math trivia │solving nonhomogeneous equations using the coordinate change methods │ │ti-83 roots polynomial │permutation math test │ │math test generator │yr 8 math worksheet │ │diamond equation maths │table for converting time fractions into decimals │ │subtracting polynomials lcd │Pre Algebra Inequalitiy puzzle │ │interval notation calculator │TI 83 graphing calculator online │ │ti-83 equation solver program │change mixed fraction into a decimal │ │Exponets and radicals │show me the square root of 89 │ │how to teach functions to grade seven │free math work, 11th grade algebra │ │Calculating roots of a 3rd order polynomial │elementary worksheets for cubic volume │ │6th grade math poems │Factoring Sums and differences of cubes worksheets │ │two variable equation solver │subtract positive fraction from negative fraction │ │how to manually program the quadratic formula into TI-83? │TAKS release tests "Houston, Texas" │ │"mathematical book"+"free download" │what is the lowest common multiple of 7 & 28 │ │distributive property using fractions │mathematical formula for permutation and combination │ │NEGATIVE EXPONETS AND POLYNOMIALS │dividing decimals math "work sheet" │ │howto solve irrational inequality equation in pdf │algebra 2 test answers │ │(free "fractions lesson plan" + third grade) │adding and subtracting decimals fifth grade lesson │ │lesson plan+radicals │third grade algebra │ │year 8 fraction worksheet │algebra answers online │ │uses of radical expressions" │Detailed lesson plan on adding and subtracting polynomials │ │polynomial poem │online polynomial solver │ │holt handbook online grade 7 │solving for a specified variable calculator │ │I need the answers to the mcdougal littell middle school math course 2 │solve second order differential equations in matlab │ │2004 amatyc test answer │glencoe algebra 1 workbook answers │ │multiplying and dividing integers worksheet │lesson plans for translating simple english sentences into math equations│ │free online general aptitude test with answers │free college algebra worksheets online │ │exponent problem sheets 7th grade │ti 89 and systems with 3 variables │ │saxon algebra 2 lesson answers │grade 10 math algebra and exponents tests │ │elementary algebra topics review sheet │prentice hall physics problems │ │Algebra 1 saxon answers │square root + grade 6 │ │free excel algebra factor quadratic │foerster precalculus powerpoints │ │how to solve algebra homework │answers to all high school math workbook │ │maths gcse past papers download │inputting exponents into TI-84 plus │ │coordinates picture worksheet │real world examples of square root function │ │java check if number is prime boolean flag │gcd formula │ │simplify algebra fractions power │Glencoe Physics textbook answers │ │algebra1 prentice hall │free statistics practice fourth grade │ │Yr 8 maths test │Practice ERB Test │ │online calculator to solve expressions under radicals │math lessons + combining like terms │ │answers for dividing polynomials │mcdougal littell workbook │ │solving a radical sum │free online sats test │ │TI-84 plus puzzle pack cheat │multiplying permutations ti83 │ │ti online graphers │complete trig values chart │ │diferential equation solving basics │college algebra training │ │ordering fractions greatest to least │free multi number integer worksheets │ │free algebra calculator │algebra workbook for sixth and seventh graders │ │ti89 LCM │Divison Problems for third graders │ │Graphing Ellipse with Graphing Calculator │java aptitute question │ Bing users found our website yesterday by typing in these math terms : Free maths 11+ papers, SOLVING OPERATION FRACTION EQUATIONS, trigonometry answers, 4-6 Multiplying Polynomials, 8th grade math 101 visual matrices, solving second order differential equations, online matrix solver. Worksheets of linear equation of one variable of grade 6, how do you multiply and divide radical expressions, "expression problem" program, formula for ratio.. What is the square root method, abstract algebra homework solution, math trivia 2007, subtracting fractions from integers, Writing linear equations worksheets, precalculus mixtures. Adding integer worksheet, worksheets + percent as a proportion, solving radical equations on line, geometry(for people who didn't pass algebra 1), online equation calculator with fractions, examples of quadratic problems, "algebra ks3". Free download of algebra textbook, finding cube roots ti 89, worksheets algebraic equation writing, third grade polynomials, prentice hall biology workbook answers, C++ Program three numbers least to greatest, online problem solver geometry. How do you type a 5th root into a scientific calculator, convert percentage to fraction, worksheet on adding and subtracting integers. Multiplying and dividing positive and negative numbers mental worksheet, slopes with fractions algebra tutorial, Free Printable Sat Practice Tests. Balance chemical equations on ti calculator, Simplifying Algebraic Expressions , free math papers, difference quotient logs, combing like terms worksheet, factorising eqautions, calculate slope of a Dividing by decimals free workpage, "middle school math with pizzazz book c answers", how to solve factor expressions containing rational exponents, solving problems with expanding decimals into standard form, Who Invented Synthetic Division. Maths exponential operations, divide and multiplication error problem, solving multiple equations, "challenge questions" + logic + "4th grade", do my factoring equations, multiplying binomial by a monomial worksheet. Free problem solver quadratic formula, trigonomic, free math trivia question and answer, mathmatic symbols, free finite math 2 help on-line, how to pass pre-algebra. Simultaneous quadratic equations solver, how to cheat and get answers to algebric equation problems, 5th grade multiplication/division table printouts. Rational expressions calculator, get common factor of a number, year 5 fraction worksheets, how to optimize the area under the curve, rules for inequalities in algebra worksheet. How do you solve nonlinear inequality, online free time clock +caculators, how to store formulas ti-84 plus, logarithm practice. Solving decimal equations: addition and subtraction, math problems on algebra/substitution, worlds hardest algebra problem. Free algebra practice problems and answers, division variable expression problems, expression and equations powerpoint, saxon algebra 1 answers. Maths worksheets simple addition yr.9, free online 2007 task test preparation 3rd grade, solving quadratic system. Solve syntax on TI89, excel triangle solving, partial differences subtraction kids math. Practice sheets for high school, factoring trigonometry, compound inequalities solver, 7th grade math tutor unit ratio. Prentice hall mathematics pre-algebra, probability excel equation sheet, factoring third-order polynomials, how to change standard quad equations to vertex graphing equation, pre-al 8th grade rational numbers, Boolean Algebra Simplification Software. Permutations and combination for gre, reducing radicals to the simplest form calculator, how do you solve an exponential problem that has a variable for the exponent, free ks3 maths test papers, numerical radical expression chart, online grapher with slope. 2nd order ode solver, worksheets on integers, mix numbers, ti 83 emulator, test of genius worksheet, free calculator for rational expressions, glenco algegra. Finding answers to solving algebra questions, math help" linear programing, aptitude question papers, Java aptitude Question, multiply a polynominal', convert decimals to fractions calculator, quiz "SATS 2". Factor equation program, printable grade 8 math test, ti84 plus imaginary number matrix, Free Algebra Math Problem Solver. Free 4th grade homework sheets, easy way to multiply elementry, boolean logic reducer. Aptitude paper in mathematics, TI-83 Plus clearing memory, yr 8 fractions test, biology mcqs+9th grade, multiplying and dividing mixed numbers ppt. Free ucsmp geometry test solutions, hands-on equations printables, square root charts, whole number fractions to decimal converter. Calculator Program for Ellipses, What is the difference between evaluation and simplification of an expression in math?, Free answers for saxon math. Online graphing inequalities calculator with table, grade nine algebra math help, linear equation simple tasks, how to input logarithmic functions with a ti-89, Prentice Hall Mathematics Algebra 1 answer key, Simplify, add, and/or subtract each radical expression, matrix math formulas. "basic algebra problems", Algebra Equation Calculator, simple maths test. Lowest common multiple calculator, Calculating range with negative numbers and positive numbers, accounting books free india. Answers to Vocabulary power plus for the new sat level 3 unit 4, how do i get a third square on a TI 89, pritable math sheet, ordering odd fractions from least to greatest, using Gaussian elimination for dummies, Commutative Property free worksheets, Methods solving trinomials. Statistical techniques in business & economics 12th edition - answer keys, trinomial factorer, exponents, square roots, variables squared, "Ged maths". Solving exponential equation excel, rational algebraic expressions quiz, ti 83 rom image pocket pc, "7th grade" "square root". Simplifying algebraic expressions games, bbc algebra and fractions introduction g.c.s.e, factorising faction, factor polynomials online. Adding and subtracting integrals, answers for algebra3 online class, exponential equation with real number as base and variable as power, find free ucsmp geometry chapter test, math formulas matricies, decimals in linear equations with three variables. Chemistry of life games online 9th grade biology, printable online graphing calculator, solve for cubed, algebra-taking out common factors ks4, adding and subtracting integers, formulas with variables math worksheets middle school. Printable worksheets on factors/least common factors, square root worksheets, algebraic expression solver, pictograph printables, mcdougallittell math worksheets, aptitude question. Worksheet physics quiz, worksheet class 9th, adding positive and negative numbers worksheets. Matlab Nonlinear ODE, lowest common denominator calculator, Saxon Algebra 1 Answer Guide, Factorising Linear equations, ti 83 plus java, sample a first grade aptitude test. Algebra help solve prob;ems, 1st order pde how to solve, free download model question of Test of Reasoning., +cube root tables, Integer worksheets, example math trivia on geometry. Free intermediate algebra solutions, how to convert percent to decimal, aptitude questions with solutions, solving equations with determinants ti83, Adding Integers GAMES, Balance both half-reactions and cell reactions involving redox animation. David lay linear algebra homework solution, base exponent ppt simplify, solving systems of non linear equations, MATLAB, synthetic division calculator, calculator that solves problems, algebra teacher, easy way dividing decimals. 4th grade fraction problems, denominator calculator, FREE FIFTH GRADE MATH WORK SHEETS, everyday math text book online 6th grade. Free radicals worksheets, CLEP college mathematics help, 7th math poems, multiplying permutations ti 83, how to do algebra, tables with fraction, percentage and decimal equivalent, online graphing calculators polynomial degree 4. Graphing rational expression worksheet, Mcdougal Littell passport to algebra and geometry answer sheet for chapter 4, algebrator, poems that include math phrases. Multiplying and dividing integers test, mcdougal littell answer key online, subtracting negative fractions, fractions, year 10 revision sheets- maths stem leaf, online calculators + exponential expressions with variables, scott foresman 1st grade math worksheets. Differential definition and how to slove, trigonometry old maths papers, middle school math unit conversion table. Hard algebra problems, cheater phönix ti-84 plus, ordering fractions from least to greatest, online algebraic calculator. Code Orange pre quiz, free math simplifier, algebriac formulae, algebra expression calculator, understanding quadratic equations, permutation and combination worksheets. Reviewer for entrance exam, California Prentice Hall Answer Keys, equations with decimals-worksheets, Math Cheats, Rudin W. Principles of Mathematical Analysis + free ebook + pdf, excel secondary school exercise, examples addition of radical expressions. Free download algebra solver, Elementary Algebra-Notes on factoring, order of operations worksheet 6th grade, factorization solver. Simplified radical form, algebra with pizazz, examples of absolute value equations and inequalities in two variables, parabolic interpolation vb, algebra quizes and answer sheets. Russian ks3 year seven maths worksheets, solve for the variable calculator, solving simultaneous differential equations matlab, answers to mcdougal littell algebra 2 book, logic questions printables, 5th grade solving rational numbers, liner algebra 1. I dont understand algebra, answers to mcdougal math worksheets, using java to solve a quadratic question, how to divide adding subtracting and multiplying. Extension activity for slope 9th grade, division of exponents worksheets, college algebra calculators, algebra help vertex, domain, and range, free Algebra online for year 11. CONVERT DECIMAL TO FRACTION, mcdouglas littell science, intercept formula, mcdougal littell geometry book answers, how to factor third polynomial, fourth root, answers to polynomial problems. Ti-83 lagrange multiplier, online ti-83, boolean algebra solver, glencoe test booklet prealgebra, cubed root excel, fraction flowchart pictures. On line program to solve rational expressions, logarithms for dummies, answers for algebra 1, word problems in trigonometry with answers, free online algebra, solving radicals, explorations in college algebra answer key. Factorial multiplication, free download texas TI 83 graphical calculator, glencoe+geometry+integration+applications+connection+1998+practice+masters+booklet, quadratic root, algebra solved exercises, online factoring calculator , how to solve aptitude test papers. Simultaneous equation solver instructions for TI-89, worksheets proportions, permutations charts, simplify variable fraction, simultaneous equations in excel. Algebra simplify expressions, fraction conversion, algebra, "square root game", "How to Solve Quadratic Equations", finding scale worksheets for math, mcdougal littell algebra 2 test resource book. Free math printable sheet, sideways cubic function, "fourth order polynomial" "how to solve", symbolic method math, Like Terms Worksheet, fraction solver. Worksheet operations on algebraic expressions, solving homogeneous simultaneous equations in matrix form, convert .30 into a fraction. How to solve mixed review questions, formula to determine area of a quadrant, quadratic calculator, FACTORING ON TI 83, graphing quadratic equations cubed. Glencoe Algebra 2 ch 4 cumulative review answers, numerical easy questions for beginner, simplifying complex equations, math trivia puzzles, adding square roots of variances. Solving 3 simultaneous equations with 3 unknowns, what are mix numbers, Computer Based Testing systems AND High School Algebra, prentice hall algebra 1 quizzes. Functions worksheet ti calculator coordinate plane, algebra help for 7th grade algebra online, variable worksheets, transforming formulas practice test. Multivariable equation calculator, binary scale factor log "base 2", Algebrator, algibra, nth term. Mathpower 7 worksheets, math printouts schoolwork, "ratios and proportions" lesson plan free. Thirds grade chemistry worksheets, permutations and combinations basics, How do you solve Radicals?. Graphing the unit step function ti89, algebra cheater, scale math worksheets, y-intercepts.com, "pizzazz" +"Math worksheets". Factoring online applet, abstract algebra solutions Gallian, fun activity related to solving one-variable inequalities, easy absolute value worksheet. 6th grade Algebra practice online, Ti 84 graphing points, 2nd order differential equation calculator, revision math tests for ks3. Simplifying algebraic equations, algebra slope worksheets fun, ways to use exponents. Convert percetage to decimal, www.love caculater.com, adding rational expressions with different denominators need help, least to greatest fractions, algebra exercises parentheses, high school physics ebook + pdf. Algebra worksheets for k-2, Algebra Solver Free, Using charts to solve problems + algebra, math problems solver. Gauss-jordan elimination method on ti-83 plus, Glencoe Science- Biology "the Dynamics of Life" Cheat sheets, online math games including square roots, parabola graphing exponents, Flowchart for calculating the permutation and combination. Solving equations for y games, gratest common factors, convert fraction to decimal form, science worksheets for ninth grade, grade 5 math teaching algebra with substitution, worksheets, "download maths books", Elementry Algebra Equations. Mixed numbers to decimals, simplify fractions fun sheet, worksheet maths about factor, algebra with pizzazz! answers, pre algebra factor trees, math poems. Algebra canceling worksheets, algebra calculator with two variables, ladder method. McDougal Littell Earth Science Answers, fraction and decimal word problems, how to calculate standard deviation on ti83 plus, Pre-algebraic expressions worksheet, Analytical trigonometry 8th Cubed root 8 squared, dividing, algebra, exponents, ontario grade 8 math tests. Reciprocal polynomial equations from excel, online graphing calculator degrees, Formula in programming for L.C.M, algebra importance, TI-83+ rom image, writing exponents ti-89. Trivia questions algebra 2, college maths worksheets, mix fractions worksheet, "Fourth grade math worksheets". Hard 10th grade algebra math problems, free printable fraction number line, how to solve one step equation .ppt, polynomial equation matlab, converting fractions with whole numbers to decimals. Finding Common Denominators Practice, how to understand probability yr 11, add or subtract Rational Expressions Calculator, rational expression ti 83 programs, Algrebra least common dinominator, steps to solve an algebraic equation, quadratic formula program for Ti-84. Answers for mcdougal littell worksheets, Free Online Calculator+square roots, coverting celsius to farenheit, houghton mifflin "rules of division", basic algebra worksheets, GCF LCM math worksheets, multiplying and dividing square roots. 6th grade taks answer grid, 9th grade scientific notation, basic math percentage formulas, INDIA APTITUDE QUESTIONS PDF, technologies used in algebraic expression, how do you subtract integers, Prentice Hall Algebra 1 help. Math poems and trivia, TI 85 log base change, solving polynomial fractions, printable Square root Table, third grade multiplication math problems.com, circle equations, free printable trivia Algebra 2 HRW online textbook, Math Problems/Algebra 8th grade - 7th grade, latest math trivia. Put summation in ti-84, free online math solver, working lowest common multiple program, polynomial algebra solver. Factor decomposition excel, Impact Mathematics Course 2 answer key, online formula: finding zeros of 3rd degree polynomial, multiplying 2 quadratic equations matlab, like terms pre algebra work sheet, how do you put a logarithm in to a graphing calculater, multiplying decimals free worksheets (6th grade). Factoring quadratic equations calculator, Edhelper Worksheet dividin polynomials, buy college algebra calculators online, grade 10 algebra. Worksheets for physics year 8, free poems about math, second order equation graphs, free pass papers with questions and answers advanced level biology, How Do You Convert a Decimal into a Mixed Number?, free relations and functions worksheet, algebra calculator - lowest common multiple. Texas edition algebra 2 textbooks online, free 7th grade integers math worksheet, Geometry and trigonometry year 11 cheat sheet, algebra 9th grade, finding vertex of absolute value equations, Algebra 1 Test on Polynomials. Math poems algebra, TI 83 programming modular arithmetics, practice questions on adding, subtracting, multiplying and dividing fractions. Www.Middle School Math,Course 1 worksheet answer .com, Prentice Hall Pre-algebra california education book help in Exponents and division, math formulas and theory, passport to algebra and geometry chapter 4 answers, math trivias trigonometry, McDougal Littell Algebra book selected answers, factor polynomials online free d cubed plus 64. Writing a expression in radical form, pizzazz for algebra II, balancing chemical equation calculator, fraction cubes for addition and subtraction of fractions, answers to mcdougal littell course 2 math book, download of SAT math paper, 6th grade free math properties worksheet. Easy online graphing calculator, simplifying roots calculator online, gr 11 accounting past exams, "free maths font" fractions set theory logic, lay linear algebra solutions manual, simplify math functions online, algebra helper. 10 hardest math problems, t1-83 calculator emulator download, flowchart of quadratic equation. Glencoe algebra 1 worksheet answers, apptitute questions download, real life example of linear equations, finding the vertex, howto program ti-83 basic, word problems in mathematics for highest common factor. Help with algebra, middle school algerba, square worksheets, worksheets on substitution, how to integral on t-89 calculator, Adding And Subtracting Fractions for Dummies. Maths Grade 10 exam papers, pocket pc casio calculator, math test ks3, solving addition and subtraction equations (no integers). Algebra simplification calculator, imaginary roots in trinomials, solve a given quadratic equation in c. Graph basics equations, english grammer note.pdf, math test year 11, online polar coordinate. Latest Math Trivia, converting numbers in base 8, algebra for college students. Examples of how solve clearing an equation of fractions or decimals, factor equations online calculator, algbra solving, grade nine math worksheets, 4th root on calculator, JAva math worksheet sample program, Factoring calculator. Casio fx-83 emulator, mastering physics answers cracked, Easy Balancing Chemical Equations Worksheets, why learn algebra, solving first order nonlinear differential equation, free saxon second edition algebra 2 lesson 38 answers. Decimal notation worksheets, maths world text book answers, ti-86 graphing error, prentice hall mathematics 1. Summation java, Essentials of College Algebra with Modeling and Visualization, "ti-83 online". Basic Maths Formula for electricity, free algebra ii answers, ading fractions, method, calculator T83 solving quadratic equations, dummies guide to learning fractions, complex radicals. Study guide worksheets for long division, ordering fractions from least to greatest calculator, easy algebra worksheet, calculate fractions powers, ontario grade 10 math test. "math quiz generator", ti-83 plus multiplying powers, free math problem solvers, free practice solving literal equations. Games combining like terms, adding worksheets, how to cheat at college algebra. "mathematics" + "logic" + "aptitude test" + "demo", math trivia with answers enter, Pascal's Triangle, newtons method of solving roots of linear equation+java program. Online Derivative Calculator, equation calculator online, ti83 integral by parts, adding and subtracting positive and negative integers, RATIONAL EXPRESSIONS CALCULATOR, Year 9 Mathematics Exercise Boolean algebra practices, on line step by step tutorials and calculations on calulus integration techniques and applications, Solution each interger n ≥ 0, 3^2n - 1 is divisible by 8. Kumon downloading, evaluating integrals of radical expressions, square root and answer in excel, glencoe/mcGraw-hill study guide answers to chapter 12 study guide to absolute ages of rocks, Activities on the book Go Hang A Salami! I'M A Lasagna Hog!, mcdougal littell algebra 2 workbook answers, year 8 maths test online. Convert decimal to fraction worksheets, worksheets graphs "line plot" "stem and leaf" "4th grade", algerbra slover, implicit differentiation calculator online, trigonometry equation solver. Algebra formula plotting, Algebra online for year 11, maths SATs questions year 9, simplify irrational expressions, print a free practice sheet expanded notation. Converting general parabola formula to standard form, exponents worksheet college, Free Algebra 2 Worksheets, exercises on adding and subtracting algebraic expressions. 7th grade math commutative properties worksheets, rational EXPRESSION calculator, pre-algebra worksheet generator, permutation+combination+chapter, free worksheets on greatest common factors and least common multiples, equations polar linear. Adding subtracting expression different terms, answers to problem sets in algebra 2, online factoring polynomials calculator, celcius equasion, everyday science mcqs free, sats exam paper grade 6. Dividing decimals by decimals calculator, Thompson Learning Algerbra, linear congruences ti 89 calculator, ti 83 complex numbers program. Common Entrance past Papers, highest common factor of 39 and 104, w to divide on a calculator, free algebra homework answers, 8th grade pre-algebra word problems. Answer keys mcdougal littell, online ti-89 graphing calculator free, solve values of expressions online, complex fraction simplify calculator. Finding the coordinates of a square in MATLAB, radical expressions in everyday life, trigonometry practice problems identity, Prentice Hall Algebra 1 online, programing a calculator, mcdougal littell algebra 1 answers, first grade phoenix dittos f. Online calculator-subtracting binomials, online slope calculator, importance of word problems. What is a symbolic method for solving a linear equation?, free printalbe worksheets on exponents, insert linear equation in graphing calculator, algebra homework help, radical in simplified form, Free Equation Solver, power point presentation of linear and metrix algebra. Algebra british method, learning algabra free, 6th grade quiz and games printouts, FREE BOOKS ON DATA INTERPRETATION FOR CAT EXAM, all work sheet for gr school, standard grade maths trigonometry graphs and equaisons, ti 83 solve three variable system. Solving equation gcse, difficult algebra, problems of linear equation, excel vba combinations permutations order. Physics answers for free, mathimatical poems, Online Formula Calculator, easy factoring gcf, solving fraction equations tools, answers to algebra problems. Lesson plans surds, "algebra structure and method" solution key, fraction questions for algebra with answer key, EXPONENTIAL function on TI-83 Plus, how to show asymptotes on ti84, what is the greatest comon factor of 96 and 108, pizzaz sheets pre algebra. Quadratic factoring solver, algebra worksheets year 7, examples math trivia, algebra rules cheat sheet, 9th grade prentice hall world history chapter 5 worksheets, multiplicity of zeros and the Balancing Equation Calculator, Radical Math Charts, fractional coefficients, exponent involving variables. Prentice hall grade sheets, Linear Programing on TI 83, ti 89 fraction/decimal conversion. Algebra 2 problem solver, standard algebra equations, three term ratio worksheet, sixth grade decimal word problems and answers. Saxon algebra 2, factoring third order polynomials, gcse accounting exercises, absolute value worksheets. Ti 83 apps vector, glencoe answers accounting, Practice Worksheets for Divisibility. Formula for root, calculator to change fraction to decimal, how to convert file time to decimal, daughter, Linear combination methods for dummies, answers to glencoe algebra 1, factor polynomials online free. Graphing polar coordinates online, alzebra level one free sample, algebra tx glencoe, holt (algebra1). Ti 89 completing the square, what is a rational expression, check answers for Glencoe MAC 2 3-7 Skills Practice Scientific notation, MATLAB 2nd Order ODE, Radicals on TI-84 Plus Calculator, Factor Math Problems, answers to algebra 1 worksheets by Prentice-Hall, Inc.. Gcse higher algebra questions ppt, spelling tests or worksheet, decimals to mixed numbers. Free algebraic calculating, algebra dividing, inverse parabola definition. Trigonometric addition and subtraction formula questions, sixth grade pre algebra lessons, basic technics in math, symmetry printable, matlab function for second order ode, I need a summary of scott foresman science book fifth grade chapter 14, factoring (algerbra). Free,printable,coordinate,plane, 6th grade pre-algebra worksheets download, "real analysis with real applications" : homework solutions, what is mutiplying decimal numbers?, printable year 7 maths Least common multiple, calculator, monomials, how to solve literal fractional eguation, maths-fraction pyramids, adding 1-step equations printable worksheets, quadratic formula converter, prentice hall conceptual physics review question answers. Check for repeated characters in a given string+java source code, algebraic factoring calculator, solving systems of equations circle within substitution, slope lesson grade nine. Free adding integer worksheet, quadrilateral problem solving KS2 lesson plan, HELP WITH ALGEBRA LEAST COMMON DINOMINATOR, the mcgraw hill companies inc. world history worksheet teachers guide LEAST COMMON DENOMINATOR CALCULATOR, simultaneous equations solver, how to download KS3. Ti 84 plus downloads, ebook kumon pdf, graph calculator AND quadratic equations, teach yourself math free, first order differential equation green's function, free math worksheets on combining like terms, year 7 math sheet. Algebra 2 problems factor quad, solve 6.37 to fraction or mixed number, mcdougal littell algebra books free answer keys, free ti 84 calc applications download, how to algebra graph, algebra problems in fractions and solving for x online, gcse mathematical formulas. Algebraic fraction equations solving, simplify integers in radical form, solving quadratic equations in matlab, directed numbers worksheets ks3 free, java convert string to time, online 7th mathbook texas, find answers for algebra equations. "estimation" "pre-algebra", 9th grade algebra, vba code for calculating eigen values. Free middle school word problem printable worksheets, t183 calculator, solutions for Holt Algebra 2 texas, matlab transforming initial value problem with couple differential equations. Poems about math mathematics algebra, find square program, contemporary abstract algebra chapter 7 answers, shrinking a square root function, using graph to find the value of the discriminant, nested do while java examples, mixed numbers as a decimal. Modern biology hrw class notes, Prentice Hall Physics answer key, Math slope problems. Free algebra answers checkers, Exponentiation calc, ti-83 equation solver program text, teach "solving inequalities" algebra, free pre algebra charts, operations with polynomials worksheet, square root finder radical. How to do math homework using the Algebra 2: Prentice Hall Mathematics, solve quadratic equation using matlab, free algebra help games, simplifying square roots with algebraic equations, abstract algebra gallian chapter 6 solutions, adding positive and negative integers, excel + Multiplying binomial. Variable exponent calculator, linear sum of digits in java, adding and multiplying roots, Instructor's Solutions Manual. A First Course in Abstract Algebra. Seventh Edition. Gmat permutation example, Greatest common divisor of three integers, maths papers grade 11. How to solve parabolas, multiple equation matlab, dividing fractions by whole numbers worksheet, graph calculator derivative function, steps to make quadratic formula program on calculator, adding fractions with integers. Solving basic algebraic equations quiz, math fact print outs level 3rd grade, convert decimal fraction to a standard fraction, Texas Instruments T1-89 user manual. Free online 11+ mathematics, Calculating roots of polynomials with ti-83, MULTIPLY INTEGERS AND VARIABLES, factorise quadratic equation calculator, solving quadratic equations +samples. Gcf worksheet, slving trinomials, investigatory problem in geometry, answer rational expression, T-83 plus sequences. Algebra 2 easy Quadratic Equations calculators, easy 7th grade probability problems, printable worksheets operation with integers and solving equations, free algebra worksheets online. Fortran linear equation 6 variables, go.hrw test solutions algebra 2, how to turn decimals into fractions, free +algerbra problem solver. Pictograph worksheet for second grade, algebra definitions lessons, Answers to Conceptual physics, trivia about math, College Algebra worksheets w/ answer sheet, multivariable algebra, equations woorksheets grades 7, free. Automatic algebra 2 solver, ks3 science game free online, Calculating Algebra equations, trivia in trigonometry. Java code converter, ti-83 eigenvalue, how to cheat on gmat, "divisor calculator", Fraction worksheets for ninth grade, activities for quadratic functions. Nonlinear equations solving in MATLAB, find greatest common factor of large numbers, summation calculator online, decimals adding subtracting poem -dewey, ti-89 converting from hex to binary. Trinomials calculator, algebra function worksheet, how to solve scale factor problems, answer key to middle school math with pizzazz book b, pre-algebra solved 2007. Free coin word problem solver, calculator for radicals, free Pizzazz worksheets, Solutions to Dummit and Foote Algebra. Holt pre algebra answers, 8th grade algebra free worksheets, "advanced engineering mathematics 8th edition solutions". Trial for TI-83 calculator online, excel solve quadratic, parabolic Free Worksheets. Mcdougal Littel Math Worksheets, algebra power solve for x, 6th grade math homework help circumference, teaching algebraic expressions 4th grade. How to program graphing calculator to factorize, "negative numbers worksheet", proportions worksheet, quadratic equation solution program in calculator, linear programming ti-89, arithematic. How to factor out polynomials cubed, cubed quadratic formulas, how to cheat on math homework, factor plus greatest factor calculator, math help for dummies. Calculator for mixed fractions, free online graphing calculator ti-83, square route solver, solve for cubic equation with two points, adding and subtracting finding missing numbers, square root factor finder. Learn accounting logical approuch grade12 for free on line, how to convert decimal to fraction, square root logarithm solver, simplify square calculator, intermediate algebra worksheets, simplifying complex radicals, lattice math worksheets. Statistics software for homework on cd, linear functions solved by substitution, substitution method quadratic formula, simplifying square root algebraic expressions, simple equation worksheets, simplify polynomials online calculator, Apptitude Questions & Answers. Algebrator free download, solutions of differential equation by excel, step by step elementary algebra help, solving for a variable in a rational equation calculator, bittinger: elementary and intermediate algebra, concepts and applications problems for chapter four, Algebra 1 Linear Equation Worksheets. 7TH GRADE free maths worksheets, fractional programing-pdf, alegbra problems, perfect square roots 5th grade activity. How to convert a fraction into a decimal, order of operation homework sheets, beginning algebra worksheets, simultaneous equations matlab, glencoe books for alegrbra 2, quadratic graphs worksheets. Algebra with pizzazz worksheets, HOW TO FACTOR ON TI-83, convert fraction to decimal using bar notation, decomposition math exponents, significant figures worksheet adding subtracting multiply Australian year 8 maths test printout, algebra for GED, Algebra 2 problem solver, algebraic factoring practice sheets. Equations solving online calculator, factor four term polynomial, free online textbooks for 8th grade texas math, factor cube worksheet, solutions manual for math power 11 western addition, quadratic equation solver fraction. Fractions least to greatest, 6th class primary maths homework, online modern algebra resource, funtions statistics and trignometry tutorial, nonlinear matlab, ti-86 to find discriminant. Dividing by decimals worksheets for sixth grade free, factoring completely worksheet, adding and subtracting integers + game, combination permutation calculate matlab. Gcse algebra exercises, convert mixed fractions to decimals, inequalities, factoring using the square method. Mcdougal littell printouts, Algebra calculator for finding the Slope, using +ti84 calculator online. Ti-83 log base 2, sideway parabolas, fun 7th grade TAKS worksheets. Solving equation with the domain, free maths work sheet, free pre-algebra worksheets, algebraic reasoning + problem solving + worksheets 5th. How o solve biology formulas, worksheets on ordering decimal numbers, accounting math equations. Finding least common denominator advanced algebra, how to solve y intercept, download for free and algebra calculator online, square roots chart, lesson plan solve quadratic factoring. Factoring algebra problem creator, best way to learn algebra 6th grade, Middle School Math with Pizzazz!book D Answers, fraction with variable simplifier, THE LANGUAGE OF ALGEBRA ANSWER CHAPTER 5 HOMEWORK ANSWER, converting double to time JAVA. Pearson prentice hall literacy worksheet answers, free download excel intermediate gcse paper, Variables Worksheet, printable elementary grade sheets. Graphing Coordinate Pairs Worksheets, math help algebra 1, homogeneous linear second order partial differential equations, who invented the order of operations, Balancing Chemical Equations calculator, decimal square worksheet, Algebra Problem Solver. Quick maths tricks ks3, free printable addition integer worksheets, how multiplication and division of rational expressions can be done., Solving equation with integer worksheets negatives and positives, math trivia with question, mathematical codes for ks2 children. Binary division applet, vba code for calculating eigenvalues, "Scott foresman Science Study Guides", pre algebra with pizzazz answers free. Teach yourself Algebra, practice and quiz on Math on Scientific , Exponental and standard Notation for 6th grade, balancing equation solver. Math inequalities, grade 10 linear equations tests and answers, Solve Nonhomogeneous Equations, how to square route on ti 83, factorial division. Where can i find free printables that shows you how to divide fractions using a different denominator in math, WRITING BALANCED CHEMICAL EQUATIONS TO REPRESENT CHEMICAL REACTIONS, dividing polynomials online. "how to" convert fraction texas ti 86, Printable Multiplication Facts Practice Test advanced, exponential slope formulas, free worksheets for algebra II, "free e book mathematic". Divisibility poem, ti 83 emulator download, solving systems of linear equations worksheets, answers to summation problems. Factor equations GCF, the secret to passing Intermediate Algebra, pre algebra math worksheets 8th grade, ebooks on cost accounting, matrix math grade nine. Power points using saxon math, simplify radical expression, fraction, java program to read values and calculate sum. 72327013997241, math trivia, adding and multiplying variable formulas, ODE23 SOLVER MATLAB, worksheets on solving algebraic equations with the distributive property. Ontario math worksheet area, free algebra problem solver, mathmatical.pdf, online fraction solver, sq root formula. Learn algerbra, Free Printable Graph Paper x y, simplifying factors calculator, rudin chapter 7 homework solutions, 6th grade math "test bank" free, least common multiple game. ANSWER TO QUESTION IN PEARSON /PRENTICE HALL ALGEBRA 1 PRACTICE WORKBOOK, tutor slope-intercept, free 4th grade worksheets with answers, fraction formula, "Boolen algebra". Free physics for dummies online, matlab solve system of nonlinear equations, merrill algebra 1, algebra II help, square root arithmetic, slope problems printable, maths question paper grade 10th, adding whole numbers and money. Fractions for dummies online, java nested do-while examples, matlab solve two nonlinear equation two unknowns simultaneous, program quadratic equation ti 84. Quadratic formula solver for graphing calculator program, free algebra, program for roots of quadratic equation in java, grade ten algebra, rules to solve addition and subtraction equations, systems of equations simple word problems worksheet. Basic algibra problem solver, scott foresman ucsmp free geometry Quizzes, Chapter Tests,, integer worksheets for free to do on the computer, Evaluating radicals for idiots. Factoring a trinomial with two variables, science question & answers free download, pre algebra sixth. Teach me trigonometry, Phoenix ti89 Cheat, simplify expressions under radicals online calculator, easy way to solve algebra questions. Year 8 online maths test, grade 9 math for dummies, adding and subtracting 3 and 4 digit equations, multiplying decimals with decimals worksheets, converting decimals into fractions . Caculator, maths fraction solver. Math + FACTORIZATION algebraic expression + test paper, foil method lesson plan, free online graphing calculator TI-84, conceptual physics prentice hall, second order differential equations, check algebra 2 answers. Books of cost accounting, math investigatory project, rational algebraic solving problems, complex equation solver, Scott Foresman math worksheets print 6th grade, square root calculator base 4, decimal word problems 6th grade. Algebra word problems and 5th grade, Algebra 2 tutor, quick answers to algebra problems. Factoring trinomials online, how to calculate quickly arithmatic free exercises, Algebra 1 answers, free math motion problem solver, calculator steps to finding residuals, www.tx.algebra.com, biology final answers matric 2004. Algebra for dummies online learning, Topic 7-b Test of Genius, impact math .com/ self- check- quiz, math practice sheets for the college student. Scott Foresman powerpoints, meaning of greatest common factor and example, simplifying equations exponent, trivias+fraction, difference of squares calculator. Help with eighth grade algebra 1, Online radical expressions calculator, holt online algebra 1 answers, free solve my algebra problem. Addison- wesley chemistry workbook answers, free online graphing calculator, trig identity solver. Balance equations on ti calculator, steps and procedure involved adding and subtracting polynomials, Printable reflection drawing worksheets in maths, quizzes on modern world history text book for 9th graders in ca, logarithmic exponentials and calculator, algebra equations calculator for fraction and equations, fraction formula 7th grade. A calculator that converts decimals into fractions, suare root calculator, fraction equation negative, rules adding exponents lessons. Holt algebra 1 answers equations, cardinal directions worksheets for first grade, ucsmp geometry lesson master tests, stats california disabled, algebra 1 exponents answers, online boolean simplify, math poems about fractions. Chemical equation balancer program ti 84 plus, ti-84 plus trigonometrical identities, solving differential equations ti-89 dx/dt, Geometry answers for scottforesman integrated mathematics, subtracting mixed missing fractions, excel three variable equation solver, Kumon Math estimating fractions. Online mATH WORKSHEETS removing brackets, TI 83 plus Rom download, expanded form worksheets using exponents, polynomial square root, simultaneous equations+powerpoint, free online algerbra math cheats, algebra 2 worksheets on the square root method. Worksheets for textbook comparison, y = sin(X) graphing calculator help, download graphing calculator T1-83. How to do inequalities in math 9th grade, algebra answers on expressions, on-line graphic caculator, grade 11 exampler papers mathematics, what is the flow chart method to a distributive problem, text programs for ti-83 plus, Polynomials; Using the Laws of Exponents worksheet. Scott foresman third grade life science worksheets and study guides, 'free intermediate test online', unit plan, prentice hall math A chapter 5, problem solver- inequalities, casio calculator to solve numerical, ti-83 quadratic formula solver. Standard form graphing calculator, code multiplication decimals worksheets, solutions algebra gallian. Free worksheets for solving literal equations, Grade seven math printable worksheets, how to do the highest common factor. Free symmetry worksheets, basic algebra absolute value worksheets, simplify fractions into decimals for dummies, advanced algebra work sheets, root finder c++ newton's method, finding LCMs by factoring, prime+number+poem. Ordered pair picture worksheet printable, dividing polynomials solutions, beginner algebra worksheets, solving 3rd order equations, dividing large numbers without calculator, how to enter multiple equqtion in ti89. How to solve chemical equations with molecule diagrams, modern chemistry holt rinehart winston chapter 6 practice quiz, transformations worksheets elementary, online calculator for adding subtracting multiplying and dividing fraction, pictograph worksheet free, solving 3rd order polynomial, free math review intermedial. Convert decimals to fractions using ti-89, equation solver for finding the reciprocal, real yr 9 sats papers + online, online holt algebra 1 workbooks, simplifying variable expressions online game, 8th grade prealgebra crypto. Real life quadratic equations, storing formulas ti-84, math multiply divide fractions word problems free, GRE permutation and combination problems, step by step to algebra, Algebra Properties Physics a-level textbook statics calculations answers, dividing polynomials calculator exponents, functional notation worksheet, solving gr 10 quadratic equations, college algebra software. Answer to Worksheet Glencoe Math, how to pass college math, 72280838572693, ti 89 binomial pdf, grade nine algebra lessons, least common multiple calculator for more than three numbers, differential equations charts. Solve my algebra problem, prentice hall math answers, online graphing calculator, equation solver 4 unknowns, improper integrals substitution, College Preparatory Mathematics 1, 2nd edition, answers to page 83, dividing rational exponents. Trigonometric equations worksheet, quadratic equations by factoring calculator, graphing calculater, radical multiplication calculator, glencoe/mcgraw-hill worksheet answers, algebra problems and get the answer. Math worksheets ratios, samples of some math trivias, maths+circle+KS2. Rearrange an algebraic equation ks3, absolute value sample GMAT problems, online statistics tutorial mcgraw hill, worksheets,percentages, factors, worksheets. Algebra 2 tutoring, 6th grade multipication sample test, investigaory problem in math. Saxon algebra book worksheets, word problems, slope, grade 7, 6th grade practice test and quizes, hyperbola year 10 questions, log in TI-89. "sixth+grade+mathworksheets", year 11 maths cheat sheet, algebra real numbers, second order non homogeneous, finding greatest common factor with square roots, solve a problem in algebra, interactive Completing square calculator, simplify radicals solver, percent of equations. Cubed polynomials, cat exam+find remainder, dividing fractions, "Worksheets" AND "balancing chemical equations" AND "free", example problems parabola, reading vocabulary for third grade TAKS. Worksheet fraction to decimal to percents conversion, algebra trivia, advanced algebra help, matrix subtraction calculator applet, subtracting trig functions, "8th grade math" "compounded interest". Yr 10 mathematics practise exam, Complex integer worksheets, introductory algebra an applied approach answers. Writing and evaluating expressions, writing algebraic expressions worksheet, how to solve fractions algebra. Free printable worksheets on changing fractions to percents, non linear simultaneous equation solver, algebra calculator simplify, Math Problem Solver. Foundations for Algebra year 2 cheats, writing mixed number as a decimal, year 8 physics revision games, how to simplify the square root of x^3. Coordinate point/ integers/ worksheets, order the fractions from least to greatest, florida clep algebra exam practice. Ti calculator negative number squared, 9th grade free aptitude test, solving algebra questions. Aptitude Question Paper, Math for Dummies free, interactive cube factorization, math games adding and subtracting integers. Algebra mixtures, radical finder, factor trinomials with 2 variables, free printable worksheets for math solving equations, calculate negative exponents, Aleks answers. Lowest common multiple of 9 and 7 and how you got it, rationalizing expressions worksheet, free solve exponents online, printouts for G.E.D pretests, aptitude question paper, past papers maths grade "online equation solver", simple math trivia, non algebraic variable in expression. Algebra tiles lesson plan factoring, beginner algebra, how to solve polynomial equations with a TI-83 plus, how to solve square roots. Quadratic equation factoring calculator, "Heath Biology" Computer Test Bank, formula summation of i cubed, simplify the cube root calculator, balancing equations worksheet for third grade. Free 8th Grade Math Worksheets, printable prime factorization chart, multi step equation worksheets, simple maths exercises adding and subtracting. Greastest Common Factor Power point, free ALGEBRA CHEATING, square root method. KS3 How to do variations in maths, identify and order fractions worksheets, prentice hall pre algebra, multiplying 7-9 worksheets. Mcdougal Littell Answer key 13.1, math trivia with answers, fun worksheets solving one step equations, symbolic method algebra. Sample math trivia with answers, probability worksheets gmat, college pre algebra rates need help. Rational expression application problem, how to convert second order differential equations to first order, polynomial factoring calculator. Math scale, problem Math PowerPoint Presentations arabic, algebraanswer, Sove probelems involving linear functions, fractions from least to greatest, finding and graphing slope easy, math printouts for first graders. Multiplying adding subtracting and dividing fractions, Math Scale Factors, algebra 2 mcdougal littell answers, tksolver finding formulas variables, worksheets on solving equations, what are radical expressions and their applications, math trivia with answer. Algebra activities for first graders, aptitude question and answer book free, comparing like terms, evaluating expressions worksheet, Boolean algebra Calculator Program, Prentice Hall Math Book Answers, nonlinear differential equations matlab. Math investigatory problem, algebra games for 9th grade, calculate log base 10 change base. Algebra with pizzazz objective 6-answer, printable 6th grade ratio, answers of scott foresman lessonb review sixth grade assesment unit 2 chapter 3 lesson 4, arithmetic reasoning worksheets, algebra rose problem, TI equation Apps. Subracting Integers Worksheet, how to factor polynomials using a "TI-83" calculator, cubed roots in math, distributive property printable worksheets. Steps to convert a decimal into a fraction or mixed number, middle school simple quadratic equations lesson plan, free online maths practice papers ks3, apptitude questions + download, pre algebra order numbers least to greatest, fusing algebra in addition of fractions. Binary point calculator, Algebra formula Chart, converting decimal to a mixed number, AP physics worksheets fluids, pre-algebra polynomials, free english lesson for 2n grade, how to do binomial equations on TI 83. Maths year 7 problem solving worksheet, square root simplified radical form, factoring using "master product", Free Algebra Problem Solver, Algebrator for Algebra 2. "how to" program "graphing calculator" to "factor trinomials", lesson plan/first grade math, how do i change the function to vertex form by completing the square. How to get the inverse of quadratic functions, how to find higher roots on TI-83 scientific calculator, synthetic division calculator online, positive and negative integers worksheets, ti 83 variable exponent, GMAT Aptitude questions. Evaluate expression worksheets, Least Common Multiple of 15 and 27, square root of 5 as a fraction, T1-84 plus; guide to fractions, section reviews modern chemistry teachers guide. Free Algebra Solver, Maths Paper two Grade 10, maple to foil polynomial. Multiplying and dividing equations, free math worksheets with answers, implicit differentiation calculator. Binomial theory, Balancing equations calculator, calculator to find slope, learn algebra easy, examples of exponential equations, parabola calculator freeware, why cant radicals be put in the Star test sample 2nd grade, unknowns and algebra work sheets, find the line of a parabola through the graph, free download calculator ti 83, Glencoe Algebra 2 answer cheats, factorising quadratics calculator, Pre Algebra PHS Answers. Working out a denominator for 25, 48, 81, How to put formulas in the TI-84 Calculator, fifth grade easy worksheets, How do I find linear regression model on a TI-86 calculator, free adding and subtracting integers worksheet. Free year 9 maths worksheets, applications for algebra, Solving the Distributive property, Prentice hall mathematics pre algebra answers. Solving polynomial equations with TI 89 calculator, 2005 prentice hall chemistry textbook answers, ppt.math+high school. Irrational radical 2 use, 2nd order nonhomogeneous differential equations, company accounting free books. Mathamatics conversion, completing the square calculator, ti 83 plus rational expression factoring program, order fractions least to greatest, If you subtract a 3-digit whole number from a 4-digit whole number, what is the least number of digits?. Sample test 6th grade math, solving linear systems in three variables answers, download College algebra 8th edition book, to simplify radical the easy way, math tests yr9, simplifying radical expression with fraction purplemath. HOLT PRE-ALGEBRA ANSWERS, Glencoe accounting workbook answers, algebra 9th grade inequalities, holt mathematics answers, mcdougal littell modern world history answers. Factoring cubic equation, radical expressions chart, solving addition equations, prentice hall pre algebra california edition work book, lattice multiolication worksheets. Yahoo visitors found our website yesterday by using these keyword phrases : • ti-83 plus factor polynomials • simplifying like terms worksheet • worksheet multiplying and dividing integers • calculator with simplifying • "ti 83" "integration by parts" • mathcad physics worksheets • complex cube root calculator • common denominators calculator • chapter five test in merrill advanced mathematical concepts • free graphic calculator online that shows the work • write 3.54 as a mixed number • Algebra 1 solutions • Multiplying Matrices • quadratic formula ti-84 plus • 6th grade prealgebra teach • Free Online Algebra Solver • Factor quadratic equations calculator • mathmatical percentages • free calculator that does fractions • denominators with cube roots • lenear and nonlinear equations • finding slope with a ti-83 • ppt.math+high school+limits • zeros of a polynomial poem • kumon work sheet • absolute value complex constant • Type in Algebra Problem Get Answer • worksheet adding subtracting multiplying and dividing integers • heaviside function laplace transform ti-89 • rules for square roots • Solving a System of Linear Equations using Graphing Calculators • displaying decimals as fractions • math worksheet mcgraw hill 5th grade • compare and contrast polynomials and rational functions • varibles as exponents • free third grade geomety worksheets • Teach Me the Pythagorean Theory • free school worksheets yr 9 • Rational expression calculators • worksheets on prime and composite numbers free printable with answers • examples of mathtrivia • free excel algebra solver • maths yr 8 activities • TI 84 emulator • The GCF of 2 numbers is 871 • math poems • solve my factoring • how to convert decimal number into fractions • Glencoe pre-algebra answers to worksheets • polar equation online calculator • kumon answer book online • decimals and fractions chart • subtraction fact to 12,13,14,15 worksheets • free algebra solvers online • Ti 89 log • solving equations with cubed variables • examples of math trivia and question • usable online TI graphing calculators • algebra structure and method book 1 online tutor • free algebra 1 warm ups • algebra: decimal expressions and equations • dummit foote algebra • download free ebook accountant/bookkeeper • steps for pre algebra • ti89 rom image • algebra 2 solving by elimination • how to find the domain of a square root function • solve lcd for fractions online • square root rules • free ordered pair worksheets • find the intersect of a polynomial and a linear equation • how to teach 4th grade fractions • CHEMISTRY past exam papers +pdf • U.S. History 1 chapter 14 test form A prentice hall • online graphing calculator, inverse matrix • Cramer's rule for dummies • simplify algebra calculator • write seven different equations • ti 83 apps • exponential functions ti-89 brendan kelly • distance time graphs ks2 travel graphs • kumon cheats • trigonometry formula parabola • real world math problems-multiplying decimals • online factorization • algebraeic • solving nonlinear simultaneous equations with bounds matlab • factoring binomials calculator • math trivia all about decimals • equatioN CALCULATOR with fractions • volume +cone+excel formula=download • free math for kids 7 to 8 years of age • binary polynomial division on matlab • algebra with pizzazz book • extracting square roots for kids • finding the slope of an equation with fractions • radical expressions • online calculator to solve exponential exponents • how to do vertex form • balance chemical equation calculator • solvings quadratic equations and functions • exponential expressions • solved sample paper of tenth class • solving radical inequalities on TI-89 • second order differential equation solver ti89 • MATH INVESTIGATORY • dividing decimals answered • math trivia on fraction • linear programming programs for TI-84 plus • understanding algabra • +Graphing Coordinate Pairs Worksheets • online leats common denomnator solver -tv • binomial worksheets • ratio solutions in algebra • math scale factor • printable grade 1maths worksheet • how to convert a +sqaure root to numerical value • • Presentation of Radicals, to work with simplified radicals and to rationalize the denominator. • online T-83 • primary school "physics workbook" • glencoe algebra 2 workbook • free saxon math answer keys • math 208 final exam aleks • finding gcf algebra • Logarithmic equations + Worksheet • solving equations with integers worksheet • how to do factorials on ti84 calculator • Free printables adding subtracting integers • Solving three variable system of quadratic equations • exponent lesson plans • TI 83 plus function log to base e • completing the square worksheet • math absolute power worksheets • Free help with learning beginners algebra • graph equation from formula • Free Study Guide for Basic Algebra • grade 7 3.3 integer review answer key page 35 • check to see if a polynomial is factorable • 5th grade inequality examples • Program the quadratic equation into your calculator • permutation and combination ebook free download • online graphic calculater • laplace transform ti 89 • on a map scale,1 cm equals 1 km what distanceis represented by 10 cm on the map? • SOLVE LINEAR EQUATIONS ON TI-84 • online free math trivia questions grade 6 • nonlinear simultaneous equation solver • 11+ maths paper free • explain algebraic fraction equations • simultaneous first order linear equation • alegebra 2 work sheets • algebra cubic square formula • math problem solver online • ks3 year 8 maths algebra • investigatory problem in math • integration by substitution calculator • simultaneous equation problems interactive • hex to decimal conversion java base 16 • online graphing calculator ti-84 instructions • test fractions GCSE level • 6th grade math square root excerises • java calculator emulator • abstract algebra - homework solutions • solving equations power point • nonlinear equation solver • problem solvers for expanding decimals • hands on equations solver • free polynomial factoring calculator • prove algebraic equation of ellipse • algebra structure and method answers • factor trinomial dividing • algebra lessons exponents lesson plans • real numbers • automatic solving fractional expressions • Slope Intercept Equations worksheets • free online algebra calculator • solve rational expressions calculators • Saxon Math Homework Answers • simultaneous equations and quadratics • maple calculate permutations • sum 3 numbers java • adding and subtracting up to 10 worksheets • math problems.com • math quiz variables • heath geometry integrated approach answers • how to factor using a TI-83 • simplyfy fractions • adding, subtracting, dividing and multiplying decimals • Mathematic lesson plans for first or secong grade • power point presentation on linear equations • boolean algebra software • Harcourt 5th Grade Math text book Florida Edition • algebra question sheets • solving non linear second order differential equation • free math problem answers • saxon math compatible numbers • Download Ti 83 Applications • glencoe mathematics answers • college+algebra+glossary • Calculater with radical sign • glencoe algebra 1 workbook teachers edition, -amazon • SOLVING THIRD order equations • Greatest Common Factors For all the numbers through 200 • answer key to work and power science sheet by holt pusblishing • online calculator simplify • harcourt math area • McDougal Littell Middle School Math workbook • Algebra Expressions Percentages • printable math fact sheets for third grade • free math worksheets for ninth grade • simple variables worksheets • How Do You Change a Mixed Fraction into a Decimal • what is laplace for dummies • algebra calculator radicals • order of operations math worksheets • algebra 2 calculator downloads • famous math poems • greatest common factor of 12 and 24 • basic algebra 1 problem solving answer • math solver online factoring • 10th grade math practice worksheets • 72316124212855 • glencoe algebra 2 answer key • multiply and divide rational expressions • math free online textbooks by McDougal Littell • second order differential equations with constant coefficients + powerpoint • how to graph sideways parabola on a graphing calculator • cubed functions • wronskian differential equation • mcqs of mechanics • KS3 SCIENCE SATs PAST PAPER QUESTIONS • Algebra Problem Solvers for Free • explicit Euler Method calculation examples • Using Domain and Range on TI-84 Plus Calculator • factor complex polynomial ti 89 • simple math explanations for adding and subtracting negative numbers • free worksheet on algebraic expressions • give examples of polynomial functions solving it • combination permutation worksheet • solving systems algebraically quadratics and lines • Texas Instruments T1-89 • linear programing-pdf • mixed number conversion to decimal • mathematics formula chart for 5th grade • 4th and harcourt and math and florida and "chapter 7" and test and graph • 7th grade math problems and answers • covariance matrix ti83 • inequalities and fourth grade • pre-algebra 2 concept and skills • simplify radical calculator • free balancing equations • PV=nRT calculator • step by step math tutoring for first graders • elementary integers worksheets • simplify equations online • variables TI 83 • algebra parabola formula • prentice hall pre-algebra answers • free worksheet measure perimeter of shapes for third grade • pic regression math • Roads Extraction matlab • balancing chemical equations using matricies on ti 83 • ucsmp geometry lesson master tests by scotts foresman • multiplying with integers printable worksheets for free • reducing radicals worksheet puzzle • cubed root function excel • adding rational expression calculator • mathtype + laplace • ratios worksheets 6th grade • Pre-algebra problem worksheets • divide polynomials calculator • sample year 11 methods mathematics exams • algebraic formulaes • Solving Frcation Equations Power point • answers to masteringphysics • 3rd order polynomial root solver • year 9 sats past exam papers free online • worksheets for adding 13 and 14 • what does 89 square meters convert to in square feet? • online interactive algebra tiles • finding common factors worksheet • how to enter quadratic formula into ti-84 • IQ test printable free sample • cost accounting book • free printable physics notes • algebra with pizzazz • answers to Algebra Connections • more simplifying radicals • ks2 math revision games and learning is fun • quadratic equation solver variables • calculators completing the square online • Prentice Hall Charles pre-algebra math textbook • free intermediate algebra practice problems and answers • balance equations online • grade 10 Math I Q exercise • Squares and square roots worksheets • antiderivative solver • least common multiple calculator algebra • free 7th grade fractions worksheets • revise maths vector equations year 10 • prentice hall of india free book download • online 7th grage life science prentice hall workbook • gcse science free test sheets • examples of algebra problems for beginners • "percent worksheet" "middle school" • decimal order • prime factored form • solving exponents calculator • 8 yr maths • how to program ti-83 basic • multiplying and adding variables • question bank on Linear equation for high school students • Common denominator table • boolean foil • online algebra calculator • printable coordinate plane pictures • automatic factorer for trinomials • math test printout • Algebra Concept and Skills Mcdougal Littell download • easy variable expression worksheets • translating algebraic expressions practice worksheet • MATH TRIVIA WITH QUESTION AND ANSWERS • Chemistry Chapter 6 Prep-Test Answers • radical expression simplifier • intercept calculations • quadratic 3rd order • explanation descartes rule of signs for dummies • least common uses for oxygen element • what are cumulative and associative math problems • multiplying and adding variables • Algebra games - solving equations • solving exponential equations on mathcad • Finding Scale Factor • simple algebraic expressions with one variable • a caculator • algebra baldor on line • monomials for dummies • how do you determine the slope for a mathmatical equation • cpt for entrance exam objective model material free • factoring interactive adding subtracting factors • trigonomic solver • pictures of prentice hall pre-algebra • conceptual physics worksheet answers • cramer's rule template • solving equations using the least common denominator • algerbra 2 • 8th grade history free qworksheets • convert decimal to fraction on ti-83 • algebra, problems, 8th grade • equation solver with fractions • "how to" fraction texas ti 86 • 3 simultaneous equation solver • solve my algebra equation • ti-84 algebra 2 Quadratic Equations • TI 83 using value for y • free online math calculator fractions • lessons on the discriminant math 11 • prentice hall physics answers in the 2007 edition workbook • Rules of simplifying radical expressions • Definition applicationmath problems • slope and y-intercept worksheet for high school • multiplying and dividing integers real life • java divisible • integers worksheet • Algebra 2 for slow learners • math tutorials polynominals • plato math problem solving + free download • free worksheets for "division with remainders" • solving systems of equations on ti 83 plus • least common multiple prime factorization worksheets • least common multiple algebra calculator • online calculator for solving algebra • distributive property worksheet for fourth graders • common denominators algebra x • simplify square root fractions with exponents • revise arabic online ks3 • quadratic graphs worksheets gcse • "Glencoe Algebra 1 Answers" • Cube Roots in Algebra • 6th grade symmetry • mathematics for dummies • pre-algebra with pizzazz book D'D online book • hardest math problem • teach me algebra • math rules for finding slope • Solutions Rudin Chapter 4 • What Occupation Uses Logarithms • calculator cheet sheets for statistics ti-84 • find quadratic maximum algebraically • example of investigatory problem about math • fractions greatest to least • software solve mathematical • factoring program ti-84 • lesson plan on multiplying and dividing polynomials • Understanding Intermediate Algebra practice problems • radical expression worksheets • ti-83 change base log • how to do exponents on a TI 81 • solving third order ordinary differentail equations • free basic algebra [problems • dividing decimals with remainders worksheet • ti-89 discrete • wronskian calculator • finding roots of third order polynomial • Algebra word problems substitution method motion • Algebra homework • multiple variables math worksheets • basic+algebra.pdf • Multiplication equation that compares numbers • "group theory" "solution""homework""pdf" " • method for solving the integration and its formulas • proportions worksheet+free • how to solve cubic quadratic equations • free 9th grade english worksheets • excel algebra factor quadratic • free printables for 6th grade • Holt Science Technology Section Review Answers • kumon answer book of group c • problems on permutation with answers • slope calculator algebra • square root to the 4th • top algebra software lessons • square cube root chart • factoring help • algebra 1 workbook littell mcdougal • College Algebra Sample • help with permutations on excel • greatest common factor worksheets 5th grade • free MAT sample papers • distributive law worksheets • arranging quadratic equation • "Quadratic Expressions" calculator • parametric equation for non linear motion tutorial • Cramer's Rule made easy to understand • permutations into calculator • substitution method solver • indefinite integral calculator • Basic geometry for 7th grade free worksheets • How do I add, subtract and multiply integers • orders of operation for multiplying exponents and whole numbers • algebra factorising, yr 8 • grade 10 algebra ontario • online florida algebra 1 book • algebra homework helper • qudratic equations • Algebra Dummies Free • practice tests 6th grade decimals • math test ks3 area • sample mathematics trivia • graphing the equation using 1st and 2nd derivative • Steps to Dividing Decimals • pre algebra/evaluating expressions • glencoe algebra 1 online text passwod • worksheets for eighth grade • fraction expression • 5th grade geography worksheets • mcdougal littell algebra 2 teachers book • math investigatory projects in geometry • quadratic inequation root • what is the least common multiple of 6, 11 and 18? • tutorials monomials adding • inequalities ppt • MATH TRIVIA • solving mixed log equations • what is simplified radical from • exponential and radical simplifying • x-y tables for algebra 1 explained • easy way to learn celsius • answers to page 77 in prentice hall algebra 1 math book • subtract fractions kids year six • Algebra question 0 minus 3 • algebra monomials homework help • homogenous second order differential equations • software algebra • Venn Diagram for algebraic equations, linear equations, quadratic equations, and cubic equations. • free and basic algebraic problems • Ohio Java Vector Calculator • online math tests yr 9 • free algebra fonts • online math simplifier • Radical solver • online algebra solvers • converting mixed numbers to decimals calculator • convert "differential equation" to "standard form" • general aptitude questions • how to change ti-83 to answer in radical • Where can i buy the answer key for Mcdougal littell geometry? • vital statistics logic worksheet answers • finite math for dummies • equation worksheet 6th grade • exponential equations with like bases and practice problems • casio algabra 2plus + programming • glencoe practice work books • factorising quadratic calculator • cheat sheet for multiple problems math • advanced mathematics precalculus mcdougall littell answer key • homework help show examples of poems • fractional exponents worksheet • algebra (2b)=(2b)c • grade 8 maths sample examination paper • algebra homework cheat • what's the least common denominator for the numbers: 9, 15, and 10? • steps in balancing large chemical equations • fun way to teach distributive property • adding and subtracting rational expressions calculator • Lagrange Optimization Function ti-83 plus • how to find roots on matlab • using factorization in real life • algebra homework on simplifying and expand • rotation worksheets middle school • algebra with pizzazz answers • factor polynomials calculator • solving inequalities ti-83 plus calculator • lesson plans, algebra, exponents • advanced factoring dividing polynomial • real world examples of subtracting negative numbers • distributive property online calculator • log base 2 on ti calculator • statistics maths worksheets • prentice hall algebra 2 exponential notation • free cost accounting software download • 3rd order polynomial • online calculator for expressions • complex analysis exam papers and solutions • prentice hall book answers • free worksheets on postive and negative numbers • download ks2 maths worksheets • free printable worksheets on function tables • algebra formulas transform homework help • free online calculator that does fractions • algebra quadratic equations regression • daily applications of square roots • greatest common factor • prealgebra practice • second edition college algebra answer sheet • free accounting books for download • units of conversation chart passport book Sixth grade • dividing integers worksheet • kumon worksheets • ontario Grade 8 math test • mathematics-bearings • how is algebra 2 important • fun math worksheets for first graders • free downloads 6th grade math • math worksheets calculating slope • algebra 1 chapter 4 form B worksheet • online math problem solver • Grade 8 exam papers? • kumon exercise • factor +ti-83 • holt algebra 2 books • cramers rule ti 89 • Least common denominator calculator • quadratic formula for dummies • Sixth Grade Math Division Worksheets • Mastering physics key • how do i store formulas on a ti-82 • find slope using given points calculator • solving three variable combination method • cvt, SAL, float • algebra calculator online free • how to solve subtracting of fractions with whole numbers • squre roots worksheet • history of math polynomial division • year 11 probability maths study sites • grade 9 math algebra and exponents tests • grade 10 math elimination • printable square root chart • rules of algebra calculator • free 8th grade math book (pre-algebra) • instant online algebra solver • prentice hall 6th grade math • expanding & factoring online examples • free online maths tests ks3 • first order linear differential equation solver • what is the least common multiple between 2,5, and 11? • Ascending Decimals • "multi choice" "discrete math" • y-intercept calculator • heath algebra 2 book • answers texas algebra 1 • Chapter 5 Modern Chemistry holt Rinehart Winston Summary • gedworksheets • 33`s common factors • completing the square + solving • algebra 2 calculator • aptitude book free downloads • finding the slope using a graph • graphing inequalities "number line" worksheets • multiple variable equation • how to factor on TI-83 • java fft contour • converting mixed numbers to decimals • +"solve algebra equations" +online • "college algebra pre test" • balance equations calculator • writing mathematical expressions as a single exponent • programing equations in TI-84 • ti-83+ mortgage program • 8 class model paper • glencoe mcgraw-hill practice (Algebra workbook answers) • practice worksheets for multiply and dividing integers • solving exponents calculators • simple simultaneous equation puzzle • how to pass the algebra 2 clep • scale factors, 7th grade math • holt answer key to work and power sheet • calculate root of a quad • basic math trivia • interactive online ti-84 calculator • c++ example 3rd grade polinomial • factorials equations with answers • mathematic algebra • online boolean calculator • 9th grade math division • division worksheets 9th grade • algebra tiles simplify expressions lesson plan • simple powerpoint show on quadratic equation • walter rudin answer • examples of math trivia mathematics • Doing math test/integers online • mcdougal littell pre-algebra practice workbook • How to Factor on TI 84 • workshet for add and subtract integers • prentice hall college algebra • mathematics trivia • the hardest math puzzle • square root properties addition • second order nonhomogeneous • how to do inverse log on ti-89 • Math formulas for GMAT • math books problems algebra 1a • calculator for fifth graders • mixed number calculator online for free • free factoring trinomials online • Solving Greatest Common Divisor using exponents • Solving Radical Functions • TEXAS INSTRUMENT TI83 INSTRUCTION MANUAL • greatest common factor calculator solver • "cubic factoring" javascript • fifth grade math inequalities • calculating sqrt in vba • solving non-linear equations + matlab • fraction calculator using variables and mixed numbers • subtracting integers worksheet • answers to mcdougal littell algebra 2 • C aptitude questions • elementary math trivia • balancing algebraic equations • ti 83 factors • order of operation worksheets for free • college algebra worksheets with examples • factoring polynomial calculator graph • find asymptotes SQUARE ROOT • addition and subtraction equations worksheets • algabra • online scientific calculator cube root • kumon in keller-tx • answers to algebra integrated algebra 1 • free online TI84 calculator • beginning algebra sixth edition templates • chapter test b mcdougal • how do you do cube root in your calculator • finding the LCD of equations • vb6 code for calculator • TI-89 polar instructions • Algebra 1 techers book answers • GRE permutation problems samples • mcdougall littell Algebra II tests • reduce to lowest terms worksheets • maths worksheet decimal fraction hard • mathemaics • 7th grade pre-algebra worksheets free • Scott Foresman math worksheet fractions 5-7 printing sixth grade 5-7 • real life linear equation graphing examples • quadratic factorization calculator • mcdougal little test generator 5.0 • 6th grade decimals lesson • mathmatical, absolute value • how to solve fractional multistep equations • fundamental alegebra • factor hard math • matlab=quadratic equation • fraction using fundamental operation • free square root worksheets • online simplifier math • suare root of 8? • pre-algebra evaluating algebraic expressions worksheet • T-83 calculator • CONVERT FRACTION TO DESIMAL • math problem solver • free maths for 7th Standard • applet solve simultaneous equations • Mcdougal Littell Middle School math book answers • saxon algebra 2 third edition ebook • fractions dividing adding and multiplying • Math Trivia with Answers • MATH GED WORK SHEETS • math solver online • how to Factor the sum or difference in two perfect cubes • TI-83 Plus binary hex calculator • pdf printable algebra quizzes and problems • combinations worksheets • square root conversions • www.how to use logarithm • linear equation standard form calculator • T1 83 Online Graphing Calculator • how to solve LCM • solving linear equations+power point • square roots of 89 • Quadratic Equations using tic-tac-toe • free ti 84 download • free homeschool worksheets for 11 th grade • scale factor made easy • multiple step algebra expression worksheet • equation substitution solver demo downloads • simplifying radicals worksheet • free online multivariable calculator • free grade 3 printable homework • fraction least to greatest • free math help online for dividing fractions by whole numbers • how to solve equations involving polynomials • SQUARE ROOT OF 519 INCHES • simplifying rational expressions worksheet with answer key • solve my math problem for free • solving your own linear equations pre-algebra • matlab differential equation solving • math test - adding and subtracting decimal point • exam paper for 11 years old • solving for variable exponents in an equation using substitution • example of java program that has sum answer • summation of i cubed • long division of polynomials in excel • convert quadratic functions to vertex form • decimals and mixed numbers • calculate fractions with exponents • exponents and square roots worksheets • solving negative square roots • how to solve permutation expressions • mcdougal littell world history chapter summaries • identify triangles worksheet • Help Solving Math Problems • math trivia with answers algebra • basic techniques in solving algebra for first year high school • algebrator software for free • how to multiply worksheet • dividing intigers • monomials solver • Math Tutor Software • chemical net equation calculator • how to find the sum in java • java number time • factorising cubed • summation notation algebra 2 prentice hill • decimal length to mixed numbers • the greatest common factor of 39,138, 93 is what • when you can't use vertex form quadratics • divide radical expressions • algebra answers saxon • radical multiplying calculator • Find Number Starting With Range Zero To Three Using Java • calculator program for factoring • graph of liner equation • free foundations of accounting books • java polynomial root finder • free game puzzle negative numbers • WORKING out 3 unknowns 1 equation • divide games • Prentice hall Conceptual Physics workbook • rational exponents solver • truth table reducer using java code • algebraic calculator • examples of simple Math equations in c language • differential equation calc • prentice hall algebra 2 answers • functions 5th grade algebra online game • trinomial fraction factoring calculator • you cannot have radical in denominator • math trivia questions • get rid of square root with two variables • graphing linear equalities • adding square roots in fractions • add and subtract fractions worksheet • how can you subtract and divide fractions • equation with roots solver • calculator that make fractions from least to greatest • systems of equations 3 variable worksheet • free addition variable worksheets • solving equalities and inequalities worksheets • answer for algebra 2 work • order • algebra math poems • free anwsers to math homeword • scientific calculator cubed root • online TI 84 plus • algebra learning aids • download yr 9 workbook free • linear programming+word problem • trivia about math mathematics • middle school math with pizzazz print • solving systems of differential equations in matlab • Algebra Solving Square Roots • 9th grade plotting translations worksheets • allowed graphic calculators in y9 sats • algerbra calculator • t1 84 calculator ONLINE • calculator with integer dividing • free printables math using math scale • simplifyradical expression calculator • algebra balancing worksheets • fraction decimal and percent chart • free math problem solver online • free maths exam yr 7 • real math practice workbook.answers • an integrated course in elementary japanese workbook answer download • how to convert mixed fractions to decimals • www.mathematicalslopes • free games for adding and subtracting integers • interpreting graphs of linear equations • consecutive integer printouts • glencoe mac 3 workbook +anwsers • difference of two square • 10 problems on multiplication and division of rational algebraic expression • simultaneous equations difficult solver nonlinear • finding domain of equation • solving equations with two variables powerpoint • signed numbers in equations worksheets • ti 89 PDF files • percentage equation • subtracting decimals worksheet grade 5 • answers to conceptual physics cp workbook • two step algebra problems middle school worksheet • find the inverse on a ti-83 plus • balancing equations with non integers • download free aptitudes book • sample test algebraic patterns • Glencoe Algebra 1 answer book • poem in math algebra • tests on c language , objective questions download • roots in fractions • radical expressions simplifier • how to solve third degree equations • solve equation of third degrees • simplify using positive exponents calculator • online t-83 calculator • TI-38 Texas instrument free emulator and rom • vertex algebra • polynominal • MATH TRIVIA AT ALGEBRA • fractional exponential expressions • solve Dividion Rational Expressions • compare mixed numbers, and decimals • Long multiplication worksheets ks2 • 7th grade scale factor problems for math • trivia algebraic expression • solving equations with fractional coefficients • how to find the standard form of a quadratic equation from a table • free maths games online grade7 • linear algebra substitution method • square numbers three root fourth • pre algebra simplifying radical expressions • download calculus made easy TI-89 • "nonlinear equations" Matlab • graphing systems of equations • download ti-83 rom image • Free GCSE Practice maths papers online • calculator with numbers to the square root of a radical • Answers to Dividing Polynomials • how to figure out linear systems by linear combinations • ti 83 instructions: polynomials • worksheets on multiplying and dividing by 7 • convert sqaure root to decimal • fraction formulas • algebra how to simplify equations • solve two equations with two unknowns ti-83 • solving equations with perfect square • what is the least common multiple of 14 and 15 • 11th matric common exam chemistry question paper with answer • solving fractions with fractional exponents • holt mathematics algebra transition workbook answers • creative publications pre algebra worksheets • solutions to second order nonhomogeneous differential equations • free online Integration - Applications - Connections teachers edition algebra 1 vol 1 • algebra equations with fractions calculator • how to find where equations are equal ti-89 • simultaneous nonlinear equations in matlab • solve equations vb code • math for kids converting decimals solving problems • how to change intercept form into vertex form • ks2 maths+mode+freesheets • graphing linear equations worksheet • equations with 3 variables ti83 • standard form algebrator • y-intercept games • solving radicals worksheets • elementary ratio worksheets • balancing math equations seventh grade 2 step • Divisibility Worksheets • saxon algebra 1 answers and work for problems • Solving Quadratic Equations Involving Fractions Maths Calculator • least common multiple worksheets • TI-85 calculator rom • Rational expression calculator • how to solve fractions • non algebraic solution of first grade equations • free download exercise general ability test singapore • adding and subtracting positive and negative numbers worksheet • how to generate quadratic sequence with excel • free algebra 2 answer key for mcdougal littell • "lowest common multiple" variable expressions • ged download coordinate plane worksheets • completing the square solver • houghton mifflin, algebra 1 answers • answers for 8th grade workbook glencoe pre-algebra • doing pre-algebra texas instruments algebra calculator • line equations worksheet • free algebra worksheets linear equations • cubed polynomials • Java Boolean Simplifier • why do we use factoring • algebra help me solve this problem • activities for learning least to greatest fractions • difference of squares calculator • "matric calculator" • online t1 82 calculator • calclator for rational expression • 8th grade scale factor free worksheets • simplify roots and radicals • "divide" +"factor" +calculator +free +"online calculator" • correlation also exists between electron configuration and chemical behavior • equation chart 5th grade • permutations for dummies • free online radical simplifier • vertex form • calulator functions for addition,subraction,multiplication,trigonometric in example c programs • simplifying cubed roots • ti 84 emulator download • balancing chemical equations concepts • how graph ellipse in calculator polar • "free answers" basic college mathematics seventh edition.com • ti89 complex mode solve • evaluating expressions with exponents online caculator • ks2 science worksheets • algebra power of 4 • Sample Lesson Plan Computer Lessons in Elementary Algebra • solving a system of linear equations using TI 83 • algebra calculator root • algebra software algabrator • simplifying complex equations • trigonometry sample problems • third order polynomal • system analysis question and answer download • integers worksheet • factoring quadratic expression calculator • give 10 problems in multiplication and division of rational expression • learn to program a TI 84 • Boolean algebra solver online • free rational expression calculator • probability aptitude questions • math trivia about algebraic expression • ode45 - 2nd order system • passport to ALGEBRA AND GEOMETRY SOLUTIONS • 3rd grade Math Expressions and equations • nonhomogeneous second order differential equation • Beginner Probability math diagram and algebra I • 6th Grade Math Tests • free Algebra Sample Test • slope intercept worksheets • formula decimal to fraction • kumon technique+sample solution • rules for addition in solving equations • conceptual physics third edition book answers • free online ti 83 calculator graph • associative properties free worksheets • cheats on finding the LCM • root solver • second order differential equation two first order equations • math slope poems • algebra 2 homework answers • heaviside function calculator • combinations on ti 84 plus • integration by substitution calculator • give me algebra soluion • algebrator softmath • quadratic graphing games online • free worksheets on Permutation • how to solve 2nd order differential equation numerically matlab • polynomial factorization online • mixed fractions to decimal • Fix a glitch in cognitive tutor • expression simplification in java programs • cubed factors equations • percent discount worksheets • 6th grade ratio calculator • free algebra for 8 year olds • interpreting the quotient worksheets • algebra expressions worksheets for third graders • Least Common multiple problem solving question • how do i do third root • rudin solutions "chapter 9" 13 • free online algebrator • ti-83+ polynomial solver • online scientific calculator can turn decimals in fractions • intermediate Algebra DIctionary • prentice hall pre-algebra tutorials • logarithm equation calculator • java ignore punctuation • formulas of arithmetic and algebra required for CAT exam • multiplying and dividing fraction lessons • math problemsolver.com • download Handbook de TI • interesting property of the square root of two • "multiples games" for classroom • How to do algebra • How do I scale factor? • printable math worksheets for 10th graders • casio T83 • algebra with pizzazz 167 worksheet • ti 84 emulation • get ti84+ rom • add subtract multi divide fractions • math cheater for radicals and square roots • c aptitude questions • online differential equation calculator • adding subtracting multiplying dividing fraction practice test • practice workbook answer sheet nine grade • glencoe algebra 1 answer book • Mark Dugopolski Trigonometry Teaching manual • free download to create your own math Algebra worksheets • adding and subtracting integers • decimal converted to mixed number calculator • worksheet for quadratic sequence • use the definition of a parabola and the distance formula to find the equation • find the number of char string in java • convert decimals to mixed numbers • 9th grade algebra proportions ratios video • answers to holt science puzzles and teasers • simplify fractions beginning algebra • how to do algebra • worksheets fraction to a decimal explanation • solve systems of linear equations that contain fractions • algebra worksheets for binary operations • graphing calculater • free logarithm worksheets • numeric pattern worksheets • how to solve mixed integers
{"url":"https://www.softmath.com/math-com-calculator/adding-functions/the-procedure-to-find-the.html","timestamp":"2024-11-10T06:33:33Z","content_type":"text/html","content_length":"163842","record_id":"<urn:uuid:9d3adc2f-6625-4eb2-80ab-8eeead48f5c5>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00717.warc.gz"}
How To Create Line Chart With Multiple Lines 2024 - Multiplication Chart Printable How To Create Line Chart With Multiple Lines How To Create Line Chart With Multiple Lines – The Multiplication Graph or chart Range may help your individuals creatively symbolize different early on mathematics principles. It must be used as a teaching aid only and should not be confused with the Multiplication Table, however. The graph is available in a few types: the colored version is useful once your university student is paying attention on a single occasions kitchen table at the same time. The vertical and horizontal versions are compatible with kids who definitely are continue to discovering their instances furniture. If you prefer, in addition to the colored version, you can also purchase a blank multiplication chart. How To Create Line Chart With Multiple Lines. Multiples of 4 are 4 away from the other person The design for determining multiples of 4 is usually to add each and every number to on its own and look for its other numerous. As an illustration, the 1st 5 multiples of 4 are: 12, 8, 16 and 4 and 20. And they are four away from each other on the multiplication chart line, this trick works because all multiples of a number are even. In addition, multiples of a number of are even amounts in Multiples of 5 are even If they end in or 5, You’ll find multiples of 5 on the multiplication chart line only. Quite simply, you can’t grow a quantity by two or three to have an even quantity. You can only find a multiple of five if the number ends in five or ! Luckily, there are strategies that will make getting multiples of five even easier, like using the multiplication graph range to obtain the a number of of Multiples of 8 are 8 far from the other The routine is obvious: all multiples of 8 are two-digit figures and all multiples of a number of-digit phone numbers are two-digit figures. Every single array of 10 includes a numerous of 8. 8 is even, so all its multiples are two-digit phone numbers. Its style carries on as much as 119. The next time the truth is a amount, ensure you look for a a number of of 8-10 from the beginning. Multiples of 12 are 12 clear of the other person The quantity twelve has endless multiples, and you could flourish any complete amount by it to help make any amount, such as on its own. All multiples of a dozen are even numbers. Below are a few good examples. James likes to acquire pencils and organizes them into 8-10 packets of twelve. He presently has 96 writing instruments. James has among each kind of pen. In their workplace, he arranges them in the multiplication graph collection. Multiples of 20 are 20 from one another From the multiplication chart, multiples of fifteen are even. The multiple will be also even if you multiply one by another. If you have more than one factor, multiply both numbers by each other to find the factor. For example, if Oliver has 2000 notebooks, then he can group them equally. A similar is applicable to erasers and pencils. You could buy one in a load up of about three or perhaps a load up of six. Multiples of 30 are 30 far from each other In multiplication, the term “component combine” identifies a small group of numbers that type a definite quantity. If the number ’30’ is written as a product of five and six, that number is 30 away from each other on a multiplication chart line, for example. This is also true for a quantity from the collection ‘1’ to ’10’. To put it differently, any number could be composed as the item of 1 and on its own. Multiples of 40 are 40 from each other Do you know how to find them, though you may know that there are multiples of 40 on a multiplication chart line? To accomplish this, you could add from the outside-in. By way of example, 10 12 14 = 40, and so forth. In the same manner, twenty 8-10 = 20. In this instance, the telephone number in the remaining of 10 is definitely an even quantity, whilst the one particular about the appropriate is surely an odd quantity. Multiples of 50 are 50 away from each other While using multiplication graph or chart line to ascertain the amount of two numbers, multiples of fifty are similar distance away from each other in the multiplication graph. They have two prime factors, 80 and 50. Most of the time, each expression can vary by 50. One other element is 50 on its own. Listed below are the typical multiples of 50. A common several is definitely the several of any given quantity by 50. Multiples of 100 are 100 from the other person Listed here are the many figures which are multiples of 100. A positive combine can be a a number of of just one one hundred, when a poor set is a multiple of 10. These 2 kinds of phone numbers are not the same in many ways. The very first method is to separate the number by successive integers. In this case, the quantity of multiples is a, twenty, ten and thirty and forty. Gallery of How To Create Line Chart With Multiple Lines How To Make A Line Graph In Excel With Multiple Lines On Mac Excel 2010 Tutorial For Beginners 13 Charts Pt 4 Multi Series Line How To Make A Multiple Line Chart In Excel Chart Walls Leave a Comment
{"url":"https://www.multiplicationchartprintable.com/how-to-create-line-chart-with-multiple-lines/","timestamp":"2024-11-13T21:52:53Z","content_type":"text/html","content_length":"56389","record_id":"<urn:uuid:256073bb-3102-4e78-9443-77d3b1385c35>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00328.warc.gz"}
Create An Icy text Effect in Photoshop - DesignFestival Creating realistic effects in Photoshop is an essential skill for any designer. Creating seasonal themes and convincing imagery is something that we do every day. With the right combination of textures and filters, you can simulate just about any surface realistically. Whether you are creating a bold headline for an article or a text effect for for a winter flyer, now is a great time to create an icy text effect in Photoshop to use in your work. Let’s get started! Create a new document in Photoshop. I created a document that is 1024px by 768px. Select a light blue (#008aff) and fill the background layer with this color. Select the text tool and choose a bold typeface. I chose Frutiger LT 87 Extra Black. Center it and position the type in the middle of your document. We need to duplicate our text layer for use later. Hit Command/Ctrl + “J” to duplicate the text layer. Hide the top text layer for now and create a new layer above the bottom text layer. Hit the “D” key to restore your foreground and background colors to the default black and white. Hit Command/Ctrl + Delete to fill the new blank layer with either black or white. It really doesn’t matter which color you choose. Next Go to “Filter” > “Render” > “Fibers.” Set Variance to 15 and Strength to 5. If this doesn’t give you the effect that you want, you can always click on Randomize. This will generate a new, random variation of fibers based on a mix of the foreground and background colors. The result is shown below. Next, hit Command/Ctrl + “J” to duplicate the layer and then hit Command/Ctrl + “T” to transform the duplicate. Right-click on the transformable object and choose “Flip Horizontal.” Position this layer to overlap the other one, as shown below. With the right combination of filters and blend modes, we were able to achieve a realistic icy effect in Photoshop to make our text stand out and form a strong impression. You could use this effect for a bold headline or to illustrate the concept of extreme cold. The randomized fiber texture, combined with the right texture in our bevel and emboss layer style, enabled us to create a realistic varied effect that we needed to create a convincing icy texture. With the right approach, you can recreate any texture or surface in Photoshop. How did your icy text effect in Photoshop turn out? What application would you use this effect for? Feel free to share your thoughts in the comments section below. Frequently Asked Questions about Creating an Icy Text Effect in Photoshop How can I add more realistic ice texture to my text in Photoshop? To add a more realistic ice texture to your text, you can use the “Bevel and Emboss” feature in Photoshop. This feature allows you to add depth and dimension to your text, making it look more like real ice. You can adjust the depth, size, and direction of the bevel to achieve the desired effect. Additionally, you can use the “Texture” feature to add a specific ice texture to your text. You can find various ice textures online, or you can create your own using Photoshop’s various tools and filters. Can I animate my icy text effect in Photoshop? Yes, you can animate your icy text effect in Photoshop using the Timeline panel. You can create a new video timeline, then add keyframes to animate the properties of your text, such as its position, opacity, or style. You can also use the “Tween” feature to automatically generate frames between your keyframes, creating a smooth animation. However, keep in mind that animation in Photoshop is quite basic compared to dedicated animation software. How can I create a frosted glass effect on my icy text in Photoshop? To create a frosted glass effect on your icy text, you can use the “Glass” filter in Photoshop. This filter allows you to distort your text as if it’s seen through a glass, adding an extra layer of realism to your icy text effect. You can adjust the distortion, smoothness, and scaling of the glass effect to achieve the desired result. Additionally, you can use the “Frost” texture in the filter to make your text look like it’s covered in frost. Can I add a reflection to my icy text in Photoshop? Yes, you can add a reflection to your icy text in Photoshop using the “Transform” tool. You can duplicate your text layer, then flip it vertically to create a reflection. You can then adjust the opacity and blending mode of the reflection layer to make it look more realistic. You can also use the “Distort” feature in the Transform tool to adjust the perspective of the reflection. How can I add a glow to my icy text in Photoshop? To add a glow to your icy text, you can use the “Outer Glow” feature in Photoshop. This feature allows you to add a soft glow around your text, making it look like it’s illuminated from within. You can adjust the color, size, and spread of the glow to achieve the desired effect. Additionally, you can use the “Blend Mode” option to control how the glow interacts with the colors of your text. Can I use custom fonts for my icy text effect in Photoshop? Yes, you can use custom fonts for your icy text effect in Photoshop. You can import any TrueType or OpenType font into Photoshop, then use it for your text. You can also adjust the size, spacing, and alignment of your text using the Character panel. However, keep in mind that some fonts may not work well with the icy text effect, especially if they have thin or intricate details. How can I add a background to my icy text in Photoshop? To add a background to your icy text, you can create a new layer below your text layer, then fill it with a color, gradient, or image. You can also use the “Layer Styles” feature to add effects to your background, such as a drop shadow or a gradient overlay. Additionally, you can use the “Blend Mode” option to control how your background interacts with your text. Can I save my icy text effect as a preset in Photoshop? Yes, you can save your icy text effect as a preset in Photoshop. You can save your layer styles as a new style, then apply it to other text layers with a single click. You can also save your entire text effect as a PSD file, then import it into other projects. However, keep in mind that some effects may not transfer perfectly between different text layers or projects. How can I make my icy text effect look more 3D in Photoshop? To make your icy text effect look more 3D, you can use the “3D” feature in Photoshop. This feature allows you to extrude your text into 3D space, then adjust its perspective, lighting, and materials to create a realistic 3D effect. You can also use the “Bevel and Emboss” feature to add depth and dimension to your text. However, keep in mind that creating 3D effects in Photoshop requires a good understanding of 3D concepts and tools. Can I print my icy text effect in high resolution in Photoshop? Yes, you can print your icy text effect in high resolution in Photoshop. You can adjust the resolution of your document in the Image Size dialog box, then print it using the Print command. You can also adjust the print settings, such as the paper size, orientation, and quality, to achieve the best print results. However, keep in mind that printing in high resolution requires a high-quality printer and paper. James George is a professional web developer and graphic designer. James is an expert in design, and a professional web developer, with a special interest in WordPress. Founder of Design Crawl, James has been a professional designer since 2005. DesignPhotoshop Tutorials & Articlestutorialtypography
{"url":"https://www.sitepoint.com/create-an-icy-text-effect-in-photoshop/","timestamp":"2024-11-06T02:39:36Z","content_type":"text/html","content_length":"228833","record_id":"<urn:uuid:ee92e0e2-1717-4040-a501-a19d4de52b32>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00828.warc.gz"}
Visual Tools Dice Rolling Probabilities If you’d like to visualize the probable outcomes of rolling some number of dice of a specified type, probability_plot is here for you! You can specify the number and type of dice and the number of times to roll that group. The median outcome is indicated by a dashed vertical line. # Make a probability plot for two, six-sided dice dndR::probability_plot(dice = "2d6", roll_num = 499) Just for fun, the graph colors are decided by the type of dice you specify and correspond to the hex logo of this R package! Assessing Party Abilities It can be useful as a DM to know where your players’ strengths and weaknesses lie across the whole party. party_diagram allows DMs to visualize the ability scores of every player in a party either grouped by player or by ability score. The function supports both interactive (abilities entered via the R Console) and non-interactive (abilities given as a list) entries. Thank you to Tim Schatto-Eckrodt for contributing this function! Due to the static nature of a vignette, we’ll use the non-interactive path by assembling the party score list and then invoking this function. # Create named list of PCs and their scores party_list <- list(Vax = list(STR = "10", DEX = "13", CON = "14", INT = "15", WIS = "16", CHA = "12"), Beldra = list(STR = "20", DEX = "15", CON = "10", INT = "10", WIS = "11", CHA = "12"), Rook = list(STR = "10", DEX = "10", CON = "18", INT = "9", WIS = "11", CHA = "16")) # Create a party diagram using that list (by player) dndR::party_diagram(by = "player", pc_stats = party_list, quiet = TRUE) You can also group the diagram by ability score if that is of interest instead.
{"url":"https://cran.rstudio.org/web/packages/dndR/vignettes/dndr_05_visuals.html","timestamp":"2024-11-03T20:25:47Z","content_type":"text/html","content_length":"188601","record_id":"<urn:uuid:143aa552-3bb0-4e14-8faa-1d08075eb0d3>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00858.warc.gz"}
Markov Chain and HMM Markov models are probabilistic (or stochastic) models that were developed to model sequential processes. In a Markov process, it is usually assumed that the probability of each event (or state) depends only on the probability of the previous event. This simplifying assumption is a special case which is known as the Markovian, one-Markov and the first-order Markov assumption. The following lecture will introduce you to Markov processes more formally. Let’s summarise the theory of Markov processes and HMMs. A Markov chain is used to represent a process which performs a transition from one state to other. This transition makes an assumption that the probability of transitioning to the next state is dependent solely on the current state. Consider the figure below: Here, ‘a’, ‘p’, ‘i’, ‘t’, ‘e’, ‘h’ are the states and the numbers mentioned on the lines are transition probabilities. For e.g. the probabilities of transitioning from the state ‘t’ to the states ‘i’, ‘a’ and ‘h’ are 0.3, 0.3, 0.4 respectively. The start state is a special state which represents the initial state of the process (e.g. the start of a sentence). Markov processes are commonly used to model sequential data, such as text and speech. For e.g., say you want to build an application which predicts the next word in a sentence. You can represent each word in a sentence as a state. The transition probabilities (which can be learnt from some corpus, more on that later) would represent the probability that the process moves from the current word to the next word. For e.g. the transition probability from the state ‘San’ to ‘Franciso’ will be higher than to the state ‘Delhi’. The Hidden Markov Model (HMM) is an extension to the Markov process which is used to model phenomena where the states are hidden(or latent) and they emit observations. For example, in a speech recognition system (a speech-to-text converter), the states represent the actual text words which you want to predict, but you do not directly observe them (i.e. the states are hidden). Rather, you only observe the speech (audio) signals corresponding to each word, and you need to infer the states using the observations. Similarly, in POS tagging, what you observe are the words in a sentence, while the POS tags themselves are hidden. Thus, you can model the POS tagging task as an HMM with the hidden states representing POS tags which emit observations, i.e. words. The hidden states emit observations with a certain probability. Therefore, along with the transition and initial state probabilities, Hidden Markov Models also have emission probabilities which represent the probability that an observation is emitted by a particular state. The figure below illustrates the emission and transition probabilities for a hidden Markov process having three hidden states and four observations. In the previous segment, you had used the transition and the emission probabilities for finding the most probable tag sequence for the sentence “The high cost”. The probabilities P(NN|JJ), P(JJ|DT) etc. are transition probabilities, while the P(high|JJ), P(cost|NN) etc. are the emission probabilities. You’ll learn to compute these probabilities from a tagged corpus in a later segment.
{"url":"https://www.internetknowledgehub.com/markov-chain-and-hmm/","timestamp":"2024-11-08T05:13:43Z","content_type":"text/html","content_length":"81314","record_id":"<urn:uuid:fdb48434-3578-4959-b182-0a65b672fcf2>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00557.warc.gz"}
Scripting The Monty Hall Problem If like me your eyes glisten and your ears perk up when you come across deceptive math puzzles, you are sure to find some value in this piece. Here, I introduce the Monty Hall problem, discuss a useful way to think about the solution, and present a python script I wrote to validate it. The Monty Hall Problem The Monty Hall problem (named after Monty Hall, a game show host) is a rather deceptive brain teaser that became somewhat popular towards the end of the 20th Century. It is not a difficult problem to understand as it contains very simple premises but it is, nevertheless, pretty tricky to solve. Like the Ball And Bat Problem, psychologists often use the Monty Hall problem to illustrate how easily humans can fail to grasp the "math" or the "rational", opting instead for the "immediately obvious" or "intuitive" response. They call out our tendency to introduce bias and other forms of heuristics when making decisions—preferring easier, less laborious, and inaccurate approaches to more tasking, and more accurate ones. What I find most amusing and fascinating about such problems that is that we are usually very aware that they are tricky problems—tricky problems that we are probably hearing about because they've most likely conquered the minds of many people. Yet, we dive in to answer them, optimistic that we'd give a correct response. But of course, we are usually wrong—like I and most people were with the Monty Hall Problem. And even when we are presented with the correct answer, it could be pretty difficult to wrap our heads around the right answer. The Monty Hall problem tests our understanding of probabilities in some sort of illusory manner. Here is my variant of the problem: Say you are Luffy, a pirate searching for the grandest treasure—The One Piece. Say there are three identical maps, one of which leads you to the treasure. The other two maps lead you to a wasteland filled with sand. Of course you want the treasure, but you don't know which of the three maps has the treasure. You can't differentiate one from the other. Say the maps have a custodian, Monty Hall, and he knows what map leads where. In other words, Monty Hall knows which maps lead you to a wasteland and the one that can guide you to the treasure. He decides to play a game with you as follows; • You are to pick any one of the three maps (at this point, you don't know if your chosen map leads you to the treasure or to a wasteland) • Next, of the other two maps, Monty Hall picks one map that leads to a wasteland. Remember, he knows where each map leads to and he always picks a wasteland map. • At this point you have picked one map (that may or may not lead to the treasure) and Monty Hall has picked a map that definitely leads to the wasteland. Monty's map needn't be considered anymore, but there is a third map that neither of you chose. • Next, Monty Hall asks you; do you want to stick with your initial selection or do you want to switch your selection to the third map? That's the problem statement. You are given a choice to stick with your initial map selection or switch to the third map that neither of you chose. Will you stick or switch? The Answer With little or no deliberation, we are tempted to choose to stick with our original decision. This is because we believe that switching to the third map would not change the odds that we chose the right map. Our mind is heavily focused on the probability that our first choice was correct, and switching seems like an unnecessary risk. We could also think, perhaps, that after that after Monty Hall chooses the second map, the probability that our original selection is the treasure map is equal to the probability that the third map is the treasure map, i.e. 1/2 each since there are now two options. So there is no point switching. But that is not so! It may take some time to sink in, but here is what I consider a useful approach to analysing the problem; At the start, when you make the first selection • the probability of picking a treasure map is 1/3; • and the probability of picking a wasteland map is 2/3 In other words, you are more likely to pick a wasteland map than a treasure map. Next, regardless of your choice, Monty Hall will always pick a wasteland map. After Monty picks, there are two possible scenarios; either the third unselected map is the treasure map or it is another wasteland map. And these two scenarios play out probabilistically in relation to your first choice in the following ways; • if the third unselected map is a wasteland map, it must mean that your first selection was the treasure map. This happens one-third of the time (1/3) as stated above. And therefore, you'd win if you don't switch • if the third unselected map is the treasure map, it must mean that your first selection was a wasteland map. This happens two-thirds of the time (2/3) as stated above. And therefore, you'd win if you switch As such, since your first selection is likely to be a wasteland map than the treasure map, it makes statistical sense to always switch to the third map. Of course, winning is not guaranteed by switching. But you are twice as likely to win when you switch than when you stick to your first selection. So always switch! The Python Script: Some Evidence It is normal to still have doubts about the solution to the Monty Hall problem. A useful way to perhaps, cement your belief in and acceptance of the answer is to simulate the problem several times. This is exactly what I did with using a Python Script. You can find the gist here and the repo here. Let's walk through the code; First we import the random module. We define a function called single run that initializes our map trio as an array. We select a random map to contain our One Piece treasure and the function returns the map array. # The Monty Hall Problem import random # Instantiating a random treasure map def single_run(): maps = ['wasteland', 'wasteland', 'wasteland'] treasure_index = random.randint(0, 2) maps[treasure_index] = 'one piece' return maps Next, we define a function that returns Luffy's random first choice # Luffy's first choice def luffy(): luffy_first_choice = random.randint(0, 2) return luffy_first_choice Next, we define a function that returns Monty's choice of a wasteland map. We use a while loop to ensure that Monty's chosen map is neither Luffy's first choice nor the treasure map. # Monty's choice of a location that is neither Luffy's choice nor the treasure location def monty(maps, luffy_first_choice): monty_choice = 0 while monty_choice == luffy_first_choice or maps[monty_choice] == 'one piece': monty_choice += 1 return monty_choice Next we define a function that returns Luffy's switched choice. Note that this function is not necessary for the simulation. I only included it for the sake of completeness. # switch Luffy's choice def luffy_switch(luffy_first_choice, monty_choice): luffy_switch_choice = 0 while luffy_switch_choice == luffy_first_choice or luffy_switch_choice == monty_choice: luffy_switch_choice += 1 return luffy_switch_choice Next, we define a function that returns the output. It accepts as input, the number of times switching yielded the treasure map, the number of times sticking yielded the treasure map, and the number of trials. It returns the percentages for both decisions as part of a string. # output to be displayed def output(stick, switch, trials): stick_percent = round((stick/trials) * 100) switch_percent = round((switch/trials) * 100) print(f'Luffy found One Piece {stick_percent} % of the time when he decided to stick to his initial choice ') print(f'Luffy found One Piece {switch_percent} % of the time when he decided to switch his initial choice') And we have the body of the script. It accepts the number of trials as input, i.e. the number of simulations you want to run. We also initialize the number of times sticking and switching yield the treasure map to zero. We run our loop as many times as the number of trials. Inside the loop, we use are already defined functions to; • get the maps array • randomly make Luffy's initial map choice • make Monty choose a wasteland map • switch Luffy's initial choice • increment the stick_count if the initial choice has the One Piece treasure • increment the switch_count if the switched choice has the One Piece treasure print('The Monty Hall Problem') trials = int(input('Enter the number of trials: ')) # Luffy sticks stick_count = 0 # Luffy switches switch_count = 0 for i in range(trials): maps = single_run() luffy_first_choice = luffy() monty_choice = monty(maps, luffy_first_choice) luffy_switch_choice = luffy_switch(luffy_first_choice, monty_choice) if maps[luffy_first_choice] == 'one piece': stick_count += 1 elif maps[luffy_switch_choice] == 'one piece': switch_count += 1 output(stick_count, switch_count, trials) Note, as mentioned earlier, that we don't need the luffy_switch function, and by implication we can do without incrementing the switch_count as we have done. That's because we are sure that if Luffy's initial choice was the treasure map, then the third choice is definitely not the treasure map. In the same vein, if Luffy's initial choice wasn't the treasure map, then the third choice that could have been switched to would be the treasure map. In other words, the treasure map is either the original choice or the switched choice. The incidences of both (stick_count and switch_count ) necessarily add up to the number of trials. So we can use simple arithmetic to evaluate one if we have the other; switch_count = trials - stick_count. That's definitely a more concise way of writing this script. I only wrote it this way for the sake of The result of running the Python Script (several times) proves that Luffy gets the treasure map about two-thirds (2/3) of the time when he switches as opposed to one-third (1/3) of the time when he sticks with his original choice. And that's The Monty Hall Problem. The key thing to remember is to always switch maps after Monty Hall chooses a wasteland map because your initial choice was more likely wrong than correct. As such, switching improves the chances that you'd win the treasure map. And when you find the treasure, remember to share it with me! A comment, a like, a repost, some feedback—anything will do! Thanks for reading! Also connect with me: Top comments (0) For further actions, you may consider blocking this person and/or reporting abuse
{"url":"https://dev.to/wolemercy/scripting-the-monty-hall-problem-5d5g","timestamp":"2024-11-13T16:25:38Z","content_type":"text/html","content_length":"79265","record_id":"<urn:uuid:4664046b-46cf-4258-aec8-fdfea5b7a546>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00682.warc.gz"}
NCERT Solutions for Class 12 Maths Chapter 1 Exercise 1.4 NCERT Solutions for Class 12 Maths Chapter 1 Exercise 1.4 Relations and Functions in Hindi and English Medium for CBSE and State boards. Welcome to our comprehensive guide on Class 12 Maths Chapter 1 Exercise 1.4. This segment of the curriculum, focusing on Relations and Functions, is pivotal for students aiming to excel in their board exams and various competitive tests. Our solutions are meticulously designed to enhance your understanding and provide clarity on each problem in this exercise. Class 12 Maths Exercise 1.4 Solutions in Hindi and English Medium NCERT Solutions for Class 12 Maths Chapter 1 Exercise 1.4 Grade XII Mathematics exercise 1.4 solutions CBSE students as well as UP Board, MP Board, Bihar, Uttarakhand Board for 2024-25. NCERT Textbook solutions and Online Offline apps are for the students of boards whoever following the Updated NCERT Books for their board exams. Deep Dive into Relations and Functions The concept of Relations and Functions forms the bedrock of many advanced mathematical topics. In this chapter, you’ll explore the intricate relationship between sets of numbers and how they interact. Exercise 1.4 is crafted to test your grasp of these concepts, presenting challenges that require both theoretical knowledge and practical application. Our step-by-step solutions aim to demystify each problem, ensuring a comprehensive understanding of the methods involved. Class: 12 Mathematics Chapter 1: Exercise 1.4 Topic Name: Relations and Functions Content: Exercise and Extra Questions Content Type: Videos and Text Format Medium: English and Hindi Medium 12th Maths Exercise 1.4 Solutions NCERT Solutions for Class 12 Maths Chapter 1 Exercise 1.4 Relations and Functions in English medium free to download as well as use it online for current academic session. All NCERT solutions are updated as per the current & latest CBSE Syllabus. Download NCERT Books 2024-25 based on latest CBSE Curriculum. Join the discussion forum to ask your questions related to NIOS or CBSE Board. Effective Problem-Solving Strategies Beyond just solutions, we offer strategic insights to tackle mathematical problems efficiently. These tips are invaluable for enhancing your problem-solving speed and accuracy, crucial skills for any timed exam. Our approach is not just about finding the right answers but also about understanding the logic and reasoning behind each step. Class 12 Maths Exercise 1.4 Solution in Videos Conclusion and Encouragement Our primary goal is to make learning an enjoyable and enriching experience. With our detailed solutions and additional practice problems, we hope to build your confidence and mastery in Class 12 Maths. Remember, the journey through mathematics is about more than just passing exams; it’s about developing a deep understanding and appreciation for the subject. Stay curious, practice regularly, and embrace the challenges that come your way. This content is structured to provide an engaging and informative overview of Class 12 Maths Chapter 1 Exercise 1.4, offering both solutions and study strategies to help students succeed. About 12 Maths Exercise 1.4 Class 12 Mathematics Chapter 1 Exercise 1.4 Relations and Functions contains the questions based on binary operation. All the questions are based on concepts only. These are easier to understand or explanation. Here we have to prove whether the Binary Operation is commutative or Associative and existence of identity elements. Questions in this exercise are little bit different what you have done in Exercise 1.1 or Exercise 1.3. 1. A binary operation ‘*’ defined on set A is a function from A × A→A. *(a, b) is denoted by a * b. 2. Binary operation * defined on set A is said to be commutative iff a * b = b *a ∀ a, b ∈ A. 3. Binary operation*defined on set A is called associative iff a*(b * c) = (a * b) * c ∀ a, b, c ∈ A. 4. If * is Binary operation on A, then an element e ∈ A (if exists) is said to be the identity element iff a*e = e*a = a ∀a ∈ A. Is exercise 1.4 of class 12th Maths complicated? Exercise 1.4 of class 12th Maths is not easy and not complicated. It lies in the mid of easy and complicated because some examples and questions of this exercise are easy, and some are complex. However, the difficulty level varies from child to child. So, exercise 1.4 of class 12th Maths is easy, or tough depends on children also. Some children find it difficult, some find it easy, and some find it in the middle of easy and difficult. How many days are needed to complete exercise 1.4 of grade 12th Maths? If students can give 2 hours per day to exercise 1.4 of class 12th Maths, they need 3-4 days to complete exercise 1.4 of 12th Maths. This time is an approximate time. This time can vary because no students can have the same working speed, efficiency, capability, etc. Which problems of exercise 1.4 of 12th Maths can come in Board Exams? Exercise 1.4 of class 12th Maths has 12 examples (examples 29 to 40) and 13 questions. All the problems of exercise 1.4 are equally important from the exam point of view. Students should practice all examples and questions of this exercise for the exams because any example and any question can come from this exercise in the board exams. On which concept questions of exercise 1.4 of class 12th Maths are based? Questions of exercise 1.4 of class 12th Maths are based on the concept named Binary Operations. In exercise 1.4, students study all about binary operations. This exercise is interesting. Last Edited: November 13, 2023
{"url":"https://www.tiwariacademy.com/ncert-solutions/class-12/maths/chapter-1/exercise-1-4-old/","timestamp":"2024-11-04T12:26:14Z","content_type":"text/html","content_length":"260253","record_id":"<urn:uuid:4655579b-afaf-4ef2-b262-3fbdffb90569>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00880.warc.gz"}
8. Does The Set {(5, 1), (4,8)} {c, 1), (4, 8)} Span R"? Justify Your Answer. ?? To determine if the set {(5, 1), (4, 8)} spans R², we need to check if every vector in R² can be expressed as a linear combination of these two vectors. Let's take an arbitrary vector (a, b) in R². To express (a, b) as a linear combination of {(5, 1), (4, 8)}, we need to find scalars x and y such that x(5, 1) + y(4, 8) = (a, b). Expanding the equation, we have: (5x + 4y, x + 8y) = (a, b). This gives us the following system of equations: 5x + 4y = a, x + 8y = b. Solving this system of equations, we can find the values of x and y. If a solution exists for all (a, b) in R², then the set spans R². In this case, the system of equations is consistent and has a solution for every (a, b) in R². Therefore, the set {(5, 1), (4, 8)} does span R². To learn more about linear combination visit: As t approaches infinity, becomes very large, and the population P approaches infinity. Therefore, the limiting value of the population is infinity. Approximately after 23.61 months, the population will be equal to one third of the limiting value. To solve the initial value problem for the population model, we need to find the limiting value of the population and determine the time when the population will be equal to one third of the limiting (a) To find the limiting value of the population, we need to solve the differential equation and determine the value of P as t approaches infinity. Let's solve the differential equation: dP/dt = P(104 - 10⁻¹¹P) Separating variables: dP / P(104 - 10⁻¹¹P) = dt Integrating both sides: ∫ dP / P(104 - 10⁻¹¹)P) = ∫ dt This integral is not easily solvable by elementary methods. However, we can make an approximation to determine the limiting value of the population. When P is large, the term 10^(-11)P becomes negligible compared to 104. So we can approximate the differential equation as: dP/dt ≈ P(104 - 0) dP/dt ≈ 104P Separating variables and integrating: ∫ dP / P = ∫ 104 dt ln|P| = 104t + C Using the initial condition P(0) = 100,000: ln|100,000| = 104(0) + C C = ln|100,000| ln|P| = 104t + ln|100,000| Applying the exponential function to both sides: |P| = ([tex]e^{(104t)[/tex]+ ln|100,000|) Considering the absolute value, we have two possible solutions: P = ([tex]e^{(104t)[/tex] + ln|100,000|) P = (-[tex]e^{(104t)\\[/tex] + ln|100,000|) However, since we are dealing with a population, P cannot be negative. Therefore, we can ignore the negative solution. Simplifying the expression: P = e^(104t) * 100,000 As t approaches infinity, becomes very large, and the population P approaches infinity. Therefore, the limiting value of the population is infinity. (b) We need to determine the time when the population will be equal to one third of the limiting value. Since the limiting value is infinity, we cannot directly determine an exact time. However, we can find an approximate time when the population is very close to one third of the limiting value. Let's substitute the limiting value into the population model equation and solve for t: P = [tex]e^{(104t)[/tex] * 100,000 1/3 of the limiting value: 1/3 * infinity ≈ [tex]e^{(104t)[/tex]* 100,000 Taking the natural logarithm of both sides: ln(1/3 * infinity) ≈ ln([tex]e^{(104t)[/tex]* 100,000) ln(1/3) + ln(infinity) ≈ ln([tex]e^{(104t)[/tex]) + ln(100,000) -ln(3) + ln(infinity) ≈ 104t + ln(100,000) Since ln(infinity) is undefined, we have: -ln(3) ≈ 104t + ln(100,000) Solving for t: 104t ≈ -ln(3) - ln(100,000) t ≈ (-ln(3) - ln(100,000)) / 104 Using a calculator, we can approximate this value: t ≈ 23.61 months Therefore, approximately after 23.61 months, the population will be equal to one third of the limiting value. To know more about population check the below link: Complete question: A model for the population P(t) in a suburb of a large city is given by the initial value problem dP/dt = P(10^-1 - 10^-7 P), P(0) = 5000, where t is measured in months. What is the limiting value of the population? At what time will the pop be equal to 1/2 of this limiting value?
{"url":"https://promo.mrdiy.com.my/scholar-solver/8-does-the-set-5-1-48-ca-1-4-8-span-r-justify-your-answer-pjll","timestamp":"2024-11-04T02:05:08Z","content_type":"text/html","content_length":"109960","record_id":"<urn:uuid:567f4689-4b09-4993-9c39-7e5b7c3c450b>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00025.warc.gz"}
Ingredients 14743 - math word problem (14743) Ingredients 14743 Anna prepares for breakfast - buckwheat or millet porridge with one of three fruits flavored with honey or cocoa. How many different types of breakfast can you prepare from the listed ingredients? Correct answer: Did you find an error or inaccuracy? Feel free to write us . Thank you! You need to know the following knowledge to solve this word math problem: Grade of the word problem: Related math problems and questions:
{"url":"https://www.hackmath.net/en/math-problem/14743","timestamp":"2024-11-10T03:22:11Z","content_type":"text/html","content_length":"47671","record_id":"<urn:uuid:625c4bd1-cbaf-43de-bd19-ff9afce62269>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00432.warc.gz"}
What is Deep Learning Triplet Loss? - reason.townWhat is Deep Learning Triplet Loss? What is Deep Learning Triplet Loss? A deep learning triplet loss is a type of loss function that is used in deep learning. This loss function is used to train a model to learn to map input vectors to output vectors. Checkout this video: What is Deep Learning? Deep learning is a subset of machine learning in artificial intelligence that has networks consisting of multiple layers to learn complex patterns in data. Deep learning is usually used to classify images, recognize objects, and detect humans in self-driving cars. What is Triplet Loss? Triplet loss is a type of deep learning algorithm that is used to learn similarities and relationships between data points. The triplet loss function is typically used to train embedding models, such as encoding data points into a continuous vector space. This type of loss function is also known as a contrastive loss function. The triplet loss function works by taking three data points, known as a triplet, and minimizing the distance between the vectors representing the data points. The goal is to have the vectors for similar data points be close together, and the vectors for dissimilar data points be far apart. This can be done by either maximizing the distance between similar vectors, or minimizing the distance between dissimilar vectors. One applications of triplet loss is in face recognition. By training a model using triplet loss, it can learn to encode different faces into a vector space in such a way that similar faces are close together, and dissimilar faces are far apart. This can then be used for tasks such as facial recognition and search. How can Deep Learning and Triplet Loss be used together? Deep Learning is a neural network that tries to learn features of data automatically. It can be used for various tasks such as image classification, object detection, and speech recognition. Triplet Loss is a type of loss function that is used in training deep learning models. The goal of Triplet Loss is to find an embedding (a mapping of data points to vectors) such that similar data points are close together and dissimilar data points are far apart. This can be done by Penalizing the model if it places two similar data points too far apart or if it places two dissimilar data points too close together. What are the benefits of using Deep Learning and Triplet Loss together? Deep Learning is a neural network architecture that has been designed to learn high-level features from data. Triplet Loss is a loss function that is used to train a Deep Learning model to learn the similarity between two objects. The benefits of using Deep Learning and Triplet Loss together are that it can learn complex relationships between objects, and it can learn these relationships faster than a traditional machine learning algorithm. How does Deep Learning Triplet Loss work? Deep learning triplet loss is a loss function that is used in deep learning networks to optimize models for comparison tasks. The triplet loss function is a generalization of the cross-entropy loss function that is used in classification tasks. Triplet loss is used to train models to be able to distinguish between different classes of data points. The triplet loss function is composed of three parts: anchor, positive, and negative. The anchor is the starting point for the comparison, and the positive and negative are the two data points that are being compared. The goal of the triplet loss function is to minimize the distance between the anchor and the positive while maximizing the distance between the anchor and the negative. What are some applications of Deep Learning Triplet Loss? Deep learning triplet loss is a loss function that comes from the field of metric learning. It is used to learn embeddings from data where there are relationships between data points. For example, you might want to usedeep learning triplet loss to learn an embedding for images of people where the distance between two images of the same person is small and the distance between two images of different people is large. Some applications of deep learning triplet loss are: – Learning an embedding for images so that similar images are close together and dissimilar images are far apart – Learning an embedding for user queries so that similar queries are close together and dissimilar queries are far apart – Learning an embedding for items so that similar items are close together and dissimilar items are far apart Are there any challenges associated with Deep Learning Triplet Loss? Despite the potential benefits of using Deep Learning Triplet Loss, there are some challenges associated with this approach. For example, it can be difficult to find appropriate training data, and the models can be computationally intensive. Additionally, Deep Learning Triplet Loss may not be suitable for all types of data. How can Deep Learning Triplet Loss be improved? Deep Learning Triplet Loss is a type of loss function that is used in supervised learning. It is a loss function that tries to Find the best possible model by Junping three different models. The first model is the base model which is used to find the second model. The second model is used to find the third model. Finally, the third model is used to improve the base model. This process can be repeated many times in order to find the best possible deep learning triplet loss function. What is the future of Deep Learning Triplet Loss? Deep learning triplet loss is a type of loss function that is used in training deep neural networks. The loss function is used to minimize the error between the predicted and actual output of the network. The triplet loss function is based on the Euclidean distance between two vectors. The loss function is also known as the optimization criterion. Deep learning Triplet loss is a neural network layer that learns to map input vectors to a triplet space such that the mapping preserves the semantic relationship between vectors. The triplet space is defined by a distance metric, such as Euclidean distance or cosine similarity. The Triplet loss function encourages the mapping to preserve the relative distances between vectors in the triplet space. For example, if two input vectors are similar (e.g., they have similar features), then the distance between their mapped vectors in the triplet space should be small. Similarly, if two input vectors are dissimilar (e.g., they have different features), then the distance between their mapped vectors in the triplet space should be large.
{"url":"https://reason.town/deep-learning-triplet-loss/","timestamp":"2024-11-07T13:15:37Z","content_type":"text/html","content_length":"94802","record_id":"<urn:uuid:54354e2a-61ef-4699-9e87-6ef8ab83840d>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00432.warc.gz"}
Outline of the Method of Conducting a Trigonometrical Survey, for the Formation of Topographical Plans Outline of the Method of Conducting a Trigonometrical Survey, for the Formation of Topographical Plans: And Instructions for Filling-in the Interior Detail, Both by Measurement and Sketching : Military Reconnaissance, Leveling, &c., &c., with the Explanation and Solution of Some of the Most Useful Problems in Geodesy and Practical Astronomy, to which are Added a Few Formulę and Tables of General Utility for Facilitating Their Calculation From inside the book Results 1-5 of 46 Page vii ... Longitude are known - Convergence of Meridians - Variation of Compass - Projections of the Sphere ........ 109 CHAPTER X. PRACTICAL ASTRONOMY . Sextant and Repeating Circle - Definitions . PROBLEMS . I. To convert Sidereal Time into ... Page viii ... Longitude on the surface of the Earth ... 196 16. Corrections for Curvature and Refraction ...... .. 197 17. Reduction upon each Chain's Length for different Vertical Angles .. 198 18. Inclination of Slopes for different Vertical Angles ... Page 2 ... longitude of a number of the principal and most con- spicuous stations are determined by astronomical observations , and the distances between them calculated to enable their positions to be laid down as correctly as they can be ... Page 11 ... longitudes between two meridians , such as those of the observatories at Greenwich and Paris , and the measurement of an arc of the meridian to obtain the length of a degree in different latitudes , from whence to deduce the figure and ... Page 13 ... longitudes by signals , which will be explained hereafter . * It has been already stated , that the sides of. * It is also eminently calculated for those light - houses where powerful illumina- * This instrument was raised on a scaffold ... Popular passages Ocean, the first thing which strikes us is, that, the north-east and south-east monsoons, which are found the one on the north and the other on the south side of the... Wales," will be found all the details connected with the measurement of an arc of the meridian, extending from Dunnose in the Isle of Wight, to Clifton in Yorkshire. The calculations are resumed at page 354« of the third volume ; the length of one degree of the arc resulting from which, in latitude 52° 30... AB, aBA, the sum of the two refractions ; hence, supposing half that sum to be the true refraction, we have the following rule when the objects are reciprocally depressed. Subtract the sum of the two depressions from the contained arc, and half the remainder is the mean refraction : — If one of the points B, instead of being depressed, be elevated suppose to the point g, the angle of elevation being g AD, then * " Trigonometrical Survey," TOl. ip 175. See also, on the subject of refraction, Woodhouse's... ... indigo), till it nearly reaches to the necks of the bottles, which are then corked for the convenience of carriage. On setting the stand tolerably level by the eye, these corks are both withdrawn (-which must be done carefully and when the tube is nearly level, or the water will be ejected with violence) and the surface of the water in the bottles being necessarily on the same level, gives a horizontal line in whatever direction the tube is turned, by which the vane of the levelling-staff is... BA, the sum of the two refractions ; the rule for the mean refraction then in this case is, subtract the depression from the sum of the contained arc and the elevation, and half the remainder is the mean refraction* The refraction... Z S") when to the north below the pole. Perhaps the rule given by Professor Young for the two first cases is more simply expressed thus : — Call the zenith distance north or south, according as the zenith is north or south of the object. If it is of the same name with the declination, their sum will be the latitude; if of different names, their difference; the latitude being of the same name as the greater. EXAMPLE I. On April 25, 1838, longitude 2m 30' east, the meridional double altitude of the... When the boiling point at the upper station alone is observed, and for the lower the level of the sea, or the register of a distinct barometer is taken, then the barometric reading had better be converted into feet, by the usual method of subtracting its logarithm from 1-47712 (log. of 30 inches) and multiplying by '0006, as the differences in the column of " barometer " vary more rapidly than those in the ''''feet In the orthographic projection every point of the hemisphere is referred to its diametral plane or base, by a perpendicular let fall on it, so that its representation, thus mapped on its base, is such as it would actually appear to an eye placed at an infinite distance from it. ... correction, with its proper sign. If the sign be +, the correction must be added to the reduced altitude; but if it be — , it must be subtracted : in either case the result will give an Approximate Latitude. With the Altitude and Sidereal Time of observation, take out the second correction, and with the day of the month and the same Sidereal time, take out the third correction. When the thermometer has been boiled at the foot and at the summit of a mountain, nothing more is necessary than to deduct the number in the column of feet opposite the boiling point below, from the same of the boiling point above : this gives an approximate height, to be multiplied by the number opposite the mean temperature of the air in Table II., for the correct altitude. Bibliographic information
{"url":"https://books.google.co.ls/books?id=2FRDAAAAIAAJ&q=longitude&dq=editions:ISBN1107497817&lr=&output=html_text&source=gbs_word_cloud_r&cad=5","timestamp":"2024-11-13T01:32:07Z","content_type":"text/html","content_length":"70114","record_id":"<urn:uuid:3313b352-975a-45ce-b20e-8862692ffc74>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00350.warc.gz"}
Lattice walks at the Interface of Algebra, Analysis and Combinatorics Schedule for: 17w5090 - Lattice walks at the Interface of Algebra, Analysis and Combinatorics Beginning on Sunday, September 17 and ending Friday September 22, 2017 All times in Banff, Alberta time, MDT (UTC-6). Sunday, September 17 16:00 - 17:30 Check-in begins at 16:00 on Sunday and is open 24 hours (Front Desk - Professional Development Centre) Dinner ↓ 17:30 - 19:30 A buffet dinner is served daily between 5:30pm and 7:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building. (Vistas Dining Room) 20:00 - 22:00 Informal gathering (Corbett Hall Lounge (CH 2110)) Monday, September 18 Breakfast ↓ - Breakfast is served daily between 7 and 9am in the Vistas Dining Room, the top floor of the Sally Borden Building. (Vistas Dining Room) - Introduction and Welcome by BIRS Station Manager (TCPL 201) Manuel Kauers: Course #1: Some Lessons on Computer Algebra ↓ The field of computer algebra can be divided into several mutually related subfields. Some of these are more relevant to combinatorialists than others, but we believe that there are some which 09:00 should be known better. Therefore, for this overview talk, we have decided not only to discuss the most natural topics forming the subfield sometimes called symbolic combinatorics (featuring - algorithms for recurrences and differential equations) but also discuss some of the techniques belonging to two other subfields that may be less known: exact arithmetic (with fast 10:15 multiplication and working with homomorphic images) and Groebner basis (with techniques for reasoning about polynomial ideals). These topics are at the heart of computer algebra, and we believe that it will be handy for a computationally oriented combinatorialist to know about them. (TCPL 201) - Coffee Break (TCPL Foyer) Andrew Rechnitzer: Course #2: An introduction to the kernel method ↓ 10:45 The kernel method has become one of the standard tools for solving lattice path enumeration problems. I will start by introducing the method in the context of constrained 1-dimensional random - walks and simple polymer models. When we move up to 2 dimensions, the generating functions of random walk models can display a very broad range of analytic properties. In this context the 12:00 kernel method becomes much richer and I will demonstrate a few of the ways in which it may be applied. Along the way I will give a few "warm-up" exercises and finish with a few open problems concerning more interacting boundary conditions. (TCPL 201) - Lunch (Vistas Dining Room) Guided Tour of The Banff Centre (optional) ↓ - Meet in the Corbett Hall Lounge for a guided tour of The Banff Centre campus. (Corbett Hall Lounge (CH 2110)) Group Photo ↓ - Meet in foyer of TCPL to participate in the BIRS group photo. The photograph will be taken outdoors, so dress appropriately for the weather. Please don't be late, or you might not be in the 14:20 official group photo! (TCPL Foyer) Charlotte Hardouin: Course #3: An overview of difference Galois theory ↓ When studying special functions of the complex variable, one would like to determine whether a function is algebraic over the field of rational functions or not. To refine this classification, 14:30 one could also be interested in the differential dependence of the function. Starting from the holonomic or D-finite functions, that is the ones that satisfy a linear differential equation, we - consider a new class of complexity, called differential transcendence, that corresponds to the functions that do not satisfy a polynomial relation with their derivatives. A celebrated example 15:45 is the Gamma function, which is differentially transcendental by a result of Hölder. When the special function is given by a linear functional equation, the difference Galois theory provides powerful and systematic tools that allows to determine such kind of relations. In this talk, I will try to give an overview of this Galois theory by focusing on examples and applications. (TCPL 201) - Coffee Break (TCPL Foyer) Kilian Raschel: Course #4: Probabilistic Tools for Lattice Path Enumeration ↓ In this lecture we will focus on techniques coming from probability theory and analysis to study models of walks confined to multidimensional cones, with arbitrary big steps and possibly with weights. To give an example of the fruitful interaction between the above domains, we will restrict our attention to the computation of critical exponents which appear in the asymptotic behavior of confined walks. For instance, what is the asymptotic behavior of the number of excursions, i.e., of the number of walks starting and ending at given points, remaining in a fixed 16:15 cone, as the number of steps goes to infinity ? In a first part, using an approximation of random walks by Brownian motion, we will present the seminal work of Denisov and Wachtel providing a - solution to the above problem in the case of excursions. We shall also present partial results and conjectures related to the total number of walks confined to a cone. We will show new results 17:30 concerning the non-D-finiteness of some series counting walks in the quarter plane. In a second part we shall be interested in discrete harmonic functions in cones. The generating function of these harmonic functions satisfies a functional equation, which happens to be closely related to the well-known functional equation that appears in the context of enumeration of confined walks. We shall explain the link between these harmonic functions and a one-parameter family of conformal mappings. These harmonic functions provide a second way to compute the critical exponents. We will present several conjectures. (TCPL 201) Dinner ↓ - A buffet dinner is served daily between 5:30pm and 7:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building. (Vistas Dining Room) - Poster session (TCPL 201) Tuesday, September 19 - Breakfast (Vistas Dining Room) Bruno Salvy: Algorithmic Tools for the Asymptotics of Diagonals ↓ - TBA (TCPL 201) - Coffee Break (TCPL Foyer) Alin Bostan: Algorithmic proof for the transcendence of D-finite power series ↓ 10:30 Given a sequence represented by a linear recurrence with polynomial coefficients and sufficiently many initial terms, a natural question is whether the transcendence of its generating function - can be decided algorithmically. The question is non trivial even for sequences satisfying a recurrence of first order. An algorithm due to Michael Singer is sufficient, in principle, to answer 10:55 the general case. However, this algorithm suffers from too high a complexity to be effective in practice. We will present a recent method that we have used to treat a non-trivial combinatorial example. It reduces the question of transcendence to a (structured) linear algebra problem. (TCPL 201) Christoph Koutschan: Reduction-Based Creative Telescoping for D-Finite Functions ↓ Creative telescoping is a powerful technique to tackle summation and integration problems symbolically, but it can be computationally very costly. Many existing algorithms compute two objects, 11:00 called telescoper and certificate, but in many applications only the first one is of interest, while typically the second one is larger in size. In the past few years a new direction of - research was initiated, namely to develop creative telescoping algorithms that are based on Hermite-type reductions, which avoid the computation of the certificate and therefore can be more 11:25 efficient in practice. In our 2016 ISSAC paper, we have developed an algorithm for constructing minimal-order telescopers for algebraic functions, based on Trager's reduction and on a so-called polynomial reduction. Later we have extended this algorithm to fuchsian D-finite functions. This is joint work with Shaoshi Chen, Mark van Hoeij, and Manuel Kauers. (TCPL 201) Stephen Melczer: Lattice Path Enumeration and Analytic Combinatorics in Several Variables ↓ This talk focusses on the interaction between the kernel method, a powerful collection of techniques used extensively in the enumeration of lattice walks in restricted regions, and the 11:30 relatively new field of analytic combinatorics in several variables (ACSV). In particular, the kernel method often allows one to write the generating function for the number of lattice walks - restricted to certain regions as the diagonal of an explicit multivariate rational function, which can then be analyzed using the methods of ACSV. This pairing is powerful and flexible, 11:55 allowing for results which can be generalized to high (or even arbitrary) dimensions, weighted step sets, and the enumeration of walks returning to certain boundary regions of the domains under consideration. Several problems will be discussed, including joint work with Alin Bostan, Mireille Bousquet-Mélou, Julien Courtiel, Marni Mishna, Kilian Raschel, and Mark Wilson. (TCPL 201) - Lunch (Vistas Dining Room) Boris Adamczewski: Diagonals, congruences, and algebraic independence ↓ A very rich interplay between arithmetic, geometry, transcendence and combinatorics arises in the study of homogeneous linear differential equations and especially of those that “come from 14:00 geometry” and the related study of Siegel G-functions. A remarkable result is that, by adding variables, we can see many transcendental G-functions (and thus many generating series) as arising - in a natural way from much more elementary function, namely rational functions. This process, called diagonalization, can be thought of as a formal integration. I will discuss some properties 14:55 enjoy by diagonals of rational functions and connect them with Lucas'congruences for binomial coefficients and algebraic independence of power series. This corresponds to some joint works with Jason Bell and Eric Delaygue. (TCPL 201) - Coffee Break (TCPL Foyer) Jason Bell: S-units and D-finite power series ↓ 15:30 Let $K$ be a field of characteristic zero and let $G$ be a finitely generated subgroup of $K^*$. Given a $P$-recursive sequence $f(n)$ taking values in $K$, we study the problem of when $f(n)$ - takes values in $G$. We show that this problem can be interpreted purely dynamically and, when one does so, one can prove a much more general result about algebraic dynamical systems. Using 15:55 this framework, we can then show that the set of $n$ for which $f(n)\in G$ is a finite union of infinite arithmetic progressions along with a set of zero Banach density, which simultaneously generalizes a result of Methfessel and a separate result due to Bezivin. (TCPL 201) Shaoshi Chen: Power series with coefficients from a finite set ↓ 16:00 A D-finite power series satisfies a system of linear partial differential equations with polynomial coefficients of special type. This class of power series has been systematically investigated - by Stanley in his book Enumerative Combinatorics (Volume II). We prove that a multivariate D-finite power series with coefficients from a finite set is rational. This generalizes a rationality 16:25 theorem of van der Poorten and Shparlinski in 1996. As an application, we will show how this result can be used to study the nonnegative integer points on algebraic varieties. This is a joint work with Jason P. Bell. (TCPL 201) Torin Greenwood: Multivariate Algebraic Generating Functions: Asymptotics and Examples ↓ 16:30 We find a formula for the asymptotics of the coefficients of a generating function of the form, $H(z_1, z_1, ..., z_d)^{-\beta}$, as the indices approach infinity in a fixed ratio. Then, we - look at how this formula can be applied to generating functions that enumerate the possible structures into which RNA sequences can fold. This work relies on the techniques in multivariate 16:55 analytic combinatorics developed by Pemantle and Wilson. We combine the multivariate Cauchy integral formula with explicit contour deformations to compute the asymptotic formula. A challenge of using the formula is correctly identifying the points which contribute to asymptotics. (TCPL 201) Problem Solving Session ↓ - Participants are welcome to present open problems. (TCPL 201) - Dinner (Vistas Dining Room) Wednesday, September 20 - Breakfast (Vistas Dining Room) Timothy Budd: Winding angles of simple walks on Z^2 ↓ 09:00 A method will be described to determine generating functions for certain classes of simple walks on the square lattice, while keeping track of their winding angle around the origin. Together - with a reflection principle the method can be used to count certain simple walks in wedges of various opening angles, and this is shown to lead in particular to a new proof of the counting of 09:55 Gessel excursions. If time permits, I'll discuss a connection with the enumeration of planar maps and O(n) loop models. (TCPL 201) - Coffee Break (TCPL Foyer) Ira Gessel: Lattice walks on the half-line ↓ - I will discuss proofs of the algebraicity of generating functions for walks on the half-line with an arbitrary set of integer steps, with emphasis on Paul Monsky's approach that reduces the 10:55 problem to the case of Motzkin walks with noncommuting weights. (TCPL 201) Michael Drmota: Positive catalytic and non-catalytic polynomial systems of equations ↓ 11:00 Several combinatorial objects (including several types of random walks) have a recursive combinatorial description that leads to a (system of) functional equation(s) for the corresponding - counting generating function, where the right hand side of the equation has non-negative coefficients; sometimes there also appears a catalytic variable, for example for random walks restricted 11:25 to some region or for the enumeration of planar maps. The purpose of this talk to show that the positivity condition leads to universal asymptotic properties of the underlying counting problem. (TCPL 201) Julien Courtiel: Conjectures about central weightings ↓ Assigning weights to steps is a natural way to extend the usual questions that concern lattice walks enumeration, like finding asymptotic estimates or the D-finiteness of the generating 11:30 function. The assignments of weights we consider in this talk are not arbitrary, but still give enough generalization to cover all sorts of behavior. More precisely, we are going to introduce - what we call central weightings, namely an assignment of weights such that all paths with the same start point, end point, and length, share the same weight. After explaining why central 11:55 weighting constitute a "good" study framework for lattice walks enumeration, we present some conjectures about these central weightings, which may require your help. This is a joint work with Steve Melczer, Marni Mishna, and Kilian Raschel. (TCPL 201) - Lunch (Vistas Dining Room) - Free Afternoon (Banff National Park) - Dinner (Vistas Dining Room) Thursday, September 21 - Breakfast (Vistas Dining Room) Michael Singer: Walks, Difference Equations and Elliptic Curves ↓ - In the recent years, the nature of the generating series of the walks in the quarter plane has attracted the attention of many authors. The main questions are: are they algebraic, holonomic 09:55 (solutions of linear differential equations) or at least hyperalgebraic (solutions of algebraic differential equations)? (TCPL 201) - Coffee Break (TCPL Foyer) Lucia Di Vizio: On the direct problem in differential Galois theory ↓ - I will describe an algorithm (implemented in Maple) that produces a system of generators of the Lie algebra of an absolutely irreducible linear differential equation over the rational functions 10:55 with complex coefficients. This is a joint work with M. Barkatou, T. Cuzeau et J.-A. Weil. (TCPL 201) Amelie Trotignon: Simple walk in three quarter plane ↓ 11:00 In this talk, we consider the simple walk ($\textit{i.e.}$ walk with a set of steps {$\mathcal{S}=\{\text{W, N, E, S}\}$}) in the lattice plane. We constrain the walk to avoid the negative - quadrant. The objective is to compute the number of paths $c(i,j;n)$ of length $n$, starting at $(0,0)$ and ending at $(i,j)$, with $\left(i\geq 0 \text{ or } j\geq 0\right)$ and $n\geq 0$. A 11:25 way to achieve this goal is to cut the three quarters of the plane into two convex symmetric parts which will be three octants of the plane. (TCPL 201) Miklos Bona: Unimodality, Log-concavity, and Stack sorting ↓ - We will survey a series of intriguing conjectures about log-concavity and t-stack sortable permutations, as well as some recent enumeration results. Connections to lattice paths and labeled 11:55 plan trees will also be discussed. (TCPL 201) - Lunch (Vistas Dining Room) Aleks Owczarek: Counting shared sites of three friendly directed lattice paths and related problems ↓ 14:00 We study the enumeration of three friendly directed walks on the square lattice. We show how one can count different types of shared sites (interactions) by solving the associated functional - equations using the obstinate kernel method. However, we also highlight that the introduction of finer counting problems that introduce asymmetry can lead to as yet unsolvable functional 14:55 equations. We survey related results over the past few years counting extra features in multiple path problems. (TCPL 201) - Coffee Break (TCPL Foyer) Tony Guttmann: Counting Eulerian orientations ↓ Inspired by the paper of Bonichon, Bousquet-M\'elou, Dorbec and Pennarun, we give a system of functional equations for the ogfs for the number of planar Eulerian orientations counted by the 15:30 edges, $U(x),$ and the number of 4-valent planar Eulerian orientations counted by the number of vertices, $A(x)$. The latter problem is equivalent to the 6-vertex problem on a random lattice, - widely studied in mathematical physics. While unable to solve these functional equations, they immediately provide polynomial-time algorithms for the coefficients of the generating function. 15:55 From these algorithms we have obtained 100 terms for $U(x)$ and 90 terms for $A(x).$ Analysis of these series suggests that they both behave as $ \cdot (1 - \mu x)^2/\log(1 - \mu x),$ where we make the confident conjectures that $\mu = 4\pi$ for Eulerian orientations counted by edges and $\mu=4\sqrt{3}\pi$ for 4-valent Eulerian orientations counted by vertices. (Joint work with Andrew Elvey Price). (TCPL 201) Thomas Prellberg: Higher-order multi-critical points in two-dimensional lattice polygon models ↓ 16:00 We introduce a deformed version of Dyck paths (DDP), where additional to the steps allowed for Dyck paths, ‘jumps’ orthogonal to the preferred direction of the path are permitted. We consider - the generating function of DDP, weighted with respect to their half length, area and number of jumps. This represents the first example of a hierarchy of exactly solvable two-dimensional 16:25 lattice vesicle models showing higher-order multi-critical points with scaling functions expressible via generalized Airy functions, as conjectured by John Cardy. (TCPL 201) - update of problems (TCPL 201) - Dinner (Vistas Dining Room) Friday, September 22 07:00 - Breakfast (Vistas Dining Room) Igor Pak: The combinatorics and complexity of integer sequences ↓ 09:00 - I will give a broad review of classes of integer sequences arising in combinatorics. Beside the more traditional asymptotic results, I will also emphasize the complexity aspects. Many 09:55 examples will be presented. Some recent results will be mentioned as well. (TCPL 201) 10:00 - Coffee Break (TCPL Foyer) Christian Krattenthaler: A factorisation theorem for the number of rhombus tilings of a hexagon with triangular holes ↓ 10:30 - I shall present a curious factorisation theorem for the number of rhombus tilings of a hexagon with vertical and horizontal symmetry axis, with triangular holes along the latter axis. Its 10:55 proof is essentially based on (non-intersecting) lattice paths. This is joint work with Mihai Ciucu. (TCPL 201) 11:00 - Marni Mishna: Closing Remarks (TCPL 201) Checkout by Noon ↓ 11:30 - 5-day workshop participants are welcome to use BIRS facilities (BIRS Coffee Lounge, TCPL and Reading Room) until 3 pm on Friday, although participants are still required to checkout of the 12:00 guest rooms by 12 noon. (Front Desk - Professional Development Centre) 12:00 - Lunch from 11:30 to 13:30 (Vistas Dining Room)
{"url":"http://www.birs.ca/events/2017/5-day-workshops/17w5090/schedule","timestamp":"2024-11-08T08:38:08Z","content_type":"application/xhtml+xml","content_length":"48781","record_id":"<urn:uuid:32e6026a-ef31-4a4a-9908-53e0f0e4e74a>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00555.warc.gz"}
What is Comoving distance in physics? Comoving distance is the distance between two points measured along a path defined at the present cosmological time. For objects moving with the Hubble flow, it is deemed to remain constant in time. What is a comoving volume? The comoving volume VC is the volume measure in which number densities of non-evolving objects locked into Hubble flow are constant with redshift. It is the proper volume times three factors of the relative scale factor now to then, or (1 + z)3. How is Comoving distance calculated? Comoving distance is obtained by integrating the proper distances of nearby fundamental observers along the line of sight (LOS), whereas the proper distance is what a measurement at constant cosmic time would yield. What is Comoving distance astronomy? The comoving distance (line of sight) DC between two nearby objects in the Universe is the distance between them which remains constant with epoch if the two objects are moving with the Hubble flow. Are we comoving observers? We obtain that comoving observers observe the same light ray with the same frequency and direction, and so gravitational redshift effect is a particular case of Doppler effect. We also define a distance between an observer and the events that it observes, that coincides with the known affine distance. How do you find the correct length? Earth-bound observers measure proper length when measuring the distance between two points that are stationary relative to the Earth. Length contraction L is the shortening of the measured length of an object moving relative to the observer’s frame: L=L0√1−v2c2=L0γ L = L 0 1 − v 2 c 2 = L 0 γ . What is conformal time? Rather, the conformal time is the amount of time it would take a photon to travel from where we are located to the furthest observable distance, provided the universe ceased expanding. How do you calculate luminosity distance? with the Hubble parameter H = ˙a/a and h(z) = H(z)/H0. for the luminosity distance dL = a0(1 + z)fK(z) as a function of the redshift z. Are we Comoving observers? What is proper length and proper time? Proper distance is analogous to proper time. The difference is that the proper distance is defined between two spacelike-separated events (or along a spacelike path), while the proper time is defined between two timelike-separated events (or along a timelike path). What is proper length and correct? What is the formula of length in physics? How to convert wave number to wavelength in a calculator? How to convert wavelength to wave number. Below is a calculator to calculate wavelength from wave number. Insert wavenumber and your units and press calculate wavelength: The units will be the inverse of wavenumber, for example, if your units for wavenumber is nm-1, the units for wavelength is nm. Which is the correct formula for the wavenumber equation? Wavenumber equation is mathematically expressed as the number of the complete cycle of a wave over its wavelength, given by – \\(k=\\frac{2\\pi }{\\lambda }\\) Where, k is the wavenumber; 𝜆 is the wavelength of the wave; Measure using rad/m ; Wavenumber Definition. In theoretical physics: It is the number of radians present in the unit distance. Is the wave number the inverse of the wavelength? The units will be the inverse of wavelength, for example, if your units for wavelength is nm, the units for wavenumber is nm-1. How are wavenumbers used in spectroscopic spectroscopy? In spectroscopy. Note that the wavelength of light changes as it passes through different media, however, the spectroscopic wavenumber (i.e., frequency) remains constant. Conventionally, inverse centimeter (cm −1) units are used for , so often that such spatial frequencies are stated by some authors “in wavenumbers”,…
{"url":"https://diaridelsestudiants.com/what-is-comoving-distance-in-physics/","timestamp":"2024-11-13T02:10:49Z","content_type":"text/html","content_length":"47454","record_id":"<urn:uuid:41d6f457-bb77-4363-8bd7-279e40c9e135>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00824.warc.gz"}
A Method of Front-End Arithmetic Eddie Newton Carver Area High School l3l00 S. Doty Avenue Chicago IL 60627 Grades 3-l2 Upon completion of the Front-End Mathematics lesson, students will be able a. Transfer computational fatigue from the most significant to the least significant columns. b. Eliminate carry-overs and their errors. c. Use one or more of the fundamental laws of commutation, association and distribution, and the concepts of place value and regrouping. Multicultural Aspects: Many peoples contributed to the development of the modern system of numerals. Any society that uses Arabic numerals can use the Front-End Method of Arithmetic for calculating. Materials Needed: Notebook Paper for a class of 20 students. Pencils for the class. Twenty store receipts from supermarkets. Students will be given four shopping lists and asked to estimate the number of $20 bills needed to purchase the items on each list. Calculators are not allowed. Students will be asked to add the figures on the list to obtain the exact Students will be taught the Front-End Method of Arithmetic. Students will recalculate the shopping lists using front-end arithmetic without calculators. Example of front-end addition: The numbers are lined up as usual in their proper vertical columns. The first column is totalled and the subtotal, 3+6+9=18, written in its proper place under the line. At this stage we already know that the final sum will exceed $l80. The second column is summed, its subtotal, 7+3+7+8=25, is written in its proper place under the line. The first two subtotals add up to $205, a second approximation to the final answer. The third column subtotal, 24, is again written in its proper place under the line. The first three subtotals give the partial sum $207.4, a third approximation to the final answer. The fourth column subtotal, 22, is again written in its proper place under the line. Added to the previous partial sum, it yields the final answer $207.62. Performance Assessment: Students will be given a post-test consisting of ten front-end addition problems using supermarket receipts. Students will be expected to use this new method to compute the problems with l00% accuracy. Front-end arithmetic provides an interesting variation for and supplement to, the classical rear-end approach. Students are able to add columns of figures without carrying. This method has real-world application to consumer DeBethune, Andre J. A Method of Front-End Arithmetic, in Enrichment for the Grades, The National Council of Teachers of Mathematics, Inc. Washington, D.C., l963. Return to Mathematics Index
{"url":"https://smileprogram.info/ma9211.html","timestamp":"2024-11-07T19:28:05Z","content_type":"text/html","content_length":"4055","record_id":"<urn:uuid:027d3063-e03e-4267-b45e-a487e263e14e>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00157.warc.gz"}
Define centripetal acceleration.Define centripetal acceleration. Sorry, you do not have permission to ask a question, You must login to ask a question. Define centripetal acceleration. Centripetal acceleration is the acceleration experienced by an object moving in a circular path. It is directed towards the center of the circle and its magnitude is given by the formula: (a_c = frac{v^2}{r}), where (v) is the linear velocity and (r) is the radius of the circular path. Centripetal acceleration is the acceleration experienced by an object moving in a circular path. It is directed towards the center of the circle and its magnitude is given by the formula: (a_c = frac{v^2}{r}), where (v) is the linear velocity and (r) is the radius of the circular path. See less You must login to add an answer.
{"url":"https://expertcivil.com/question/define-centripetal-acceleration/","timestamp":"2024-11-14T03:59:03Z","content_type":"text/html","content_length":"266386","record_id":"<urn:uuid:2fb669dc-e6c2-4da4-b219-4b2e148fe952>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00505.warc.gz"}
My summer doing math (?) research I started college as a physics major, but I switched to computer science my sophomore year when I realized I didn’t want to take more lab classes. After a year as a CS major, I realized I loved the theory classes but didn’t care as much about the practical and programming portions of the curriculum. Going into my junior year, I switched—for the last time—to major in math. During the winter of my junior year, I applied to a bunch of math Research Experience for Undergraduates programs at schools around the country. I was thrilled to be accepted to the program at Rochester Institute of Technology organized by Darren Narayan for the summer of 2007. I headed to Rochester starry eyed about spending the summer doing math research. On my first day, I found out I’d be working with the inimitable mathematician Stanisław P. Radziszowski on computational combinatorics with another student researcher, Evan Heidtmann. The first question Dr. Radziszowski had for us was “How are your programming skills?” This was not what I expected–or wanted–to hear at a math research program. We spent the summer investigating the Ramsey number $R_4(C_4)$. This is the smallest number $n$ such that a complete graph with $n$ vertices where the edges are colored by four colors is guaranteed to have a monochromatic cycle with 4 vertices. At the time, the number was known to be either 18 or 19. Due to combinatorial explosion, it’s not possible to enumerate every graph 4-coloring for graphs of this size. A graph with 18 vertices has 153 edges which could be colored in $4^{153}$ ways (ignoring isomorphisms). This is a very, very big number. Since the number was one of two options, one way to prove it had to be 19 would be to find a complete graph with 18 vertices where the edges are colored by four colors while not containing a 4-cycle subgraph in one color. Evan and I spent the summer hunting for this mythical graph coloring by writing C code to generate and check graphs. Before showing up in Rochester, I had no experience writing C code. Fortunately, my brilliant and patient research partner Evan did, and he quickly got me up to speed enough to manipulate and generate graphs using nauty. I had also never used version control, but Evan was running a version control (I don’t recall which one) platform on his home server, so we were able to easily version and collaborate on our code. We also had access to fifty Unix machines in the RIT computing lab, so this summer introduced me to the Unix command line, shell scripting, and distributed computing. I wrote bash scripts that would send our processing jobs out to these machines and aggregate results back to a host node. Well, we failed to solve our problem. It turns out the solution was actually published by Chinese mathematicians while we were working on it. $R_4(C_4)=18$, so the graph we looked for didn’t even Nonetheless, that experience ended up being one of the most important of my higher education. I learned some of the most fundamental skills of my career (version control, working in terminal, writing fast code, and persevering through computer challenges), and I learned to enjoy writing code. I didn’t write much code between 2007 and starting my operations research master’s degree research in 2011, but when I started again, the skills and tenacity I developed at that REU set me up for success in grad school and joining industry as a data scientist in 2012. I wouldn’t have picked that research project had I been given a choice; I’m so glad I wasn’t asked.
{"url":"https://tdhopper.com/blog/my-summer-doing-math-research/","timestamp":"2024-11-09T15:35:03Z","content_type":"text/html","content_length":"21126","record_id":"<urn:uuid:8ebbb5e9-14a8-4589-bad1-ba32d9a3c448>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00163.warc.gz"}
Trying to understand the production numbers I'm trying to understand the production numbers, but it doesn't add up. I'm starting from the beginning, so the iron production. The thread Calculating production is asking a similar question, but there's no clear answer. In my scenario, I have 7 electric miners and a cumulated smelting speed of 14 (4 steel furnaces + 6 stone furnaces). The first issue is that the production screen doesn't seem to be consistent. Over the last 10 minutes, the consumption of iron ore and production of iron plates is perfectly flat. The numbers for 10 minutes are 228 / min, for 1 minute 250 / min, for 5 seconds it oscillates between 300 and 350 per minute. The second thing is the production speed from an electric miner. What is the meaning of the 'mining power' = 3? This with the mining speed 0.5 doesn't combine into something meaningful. However, if I assume that 228 ores / min is correct, then I get 3.8 ores / sec, divided by 7 miners it's 0.54 (ore / sec / miner), which seems in line with the mining speed of 0.5. If that's correct, then one miner outputs one ore every 2 seconds. Now the iron ore smelting process is 3.5 (I assume seconds), so I have a smelting capacity of 14 / 3.5 sec / plate = 4 plates / sec, which is also the ore consumption capacity. It would make sense, since 4 is a bit over 3.8, and I can indeed see that the last miners on the belt do not work at full capacity. If that's correct, it means that the capacity of a single steel furnace is 2 (speed) / 3.5 sec = 0.57 plate / sec. And the theoretical ratio electric miner / steel furnace would be .5 / (2 / 3.5) = 7 / 8; dimension-wise that's miner output / furnace output = 7/8, or 7 furnaces = 8 miners. It makes sense in the end, so unless I overlooked something, it seems to me that the 5s tab in the production screen is not correct, and the 1 min and 10 min screens are off by more than what would be the rounding error. Re: Trying to understand the production numbers The 5s tab in the production screen is indeed buggy and not displaying correct numbers. I don't know what's the problem but the numbers are always too high. As for the production numbers. For assmblers/furnaces it's quite transparent: Every item has a base production time and every assembler has a base speed. To calculate the seconds per item you just divide the base production time with the base speed or for items per second you do the opposite. So for iron plates the base production time is 3.5 and the base speed of stone furnaces is 1 meaning 3.5 s production time per item or 0.2857 itmes/s. If you use steel or electric furnaces the production time is halved since both furnaces have a base speed of 2. 3.5/2 = 1.75 seconds /item (and 0.5333 items/s which is very close to your calculation of 0.54). For miners it's complicated and not clear. First of all mining power is not relevant at the moment (maybe it was in prior versions). Every mineable ressource has a mining hardness (which is comparable to the base production time) and iirc it's 0.9 for all mineable materials (iron/copper/stone/coal). The electric miner has a base speed of 0.5. This gives 0.9/0.5 = 1.85 s/item or 0.555 You can change this with the use of modules. These act as a mutiplier to the base speed of the assember/furnace but don't change the base production time (can't be changed afaik). For a perfect ratio you need about 6% more miners than furnaces (given you use the faster steel or electric furnaces with a base production time of 2). Why there is this discrepancy I don't know and in fact think it should be removed. But at least that's how it's calculated at the moment. Re: Trying to understand the production numbers Ah yes, it works better when taking the hardness of 0.9 into account! In this case, we get - miner output = .5 / .9 ore/s = .55 ore/s - smelter output = 2 / 3.5 plate/s = .57 plate/s With my 7 miners and 7 steel furnaces, I get consumption = 7 * .55 = 3.9 ore/s = 233 ore/min, which is in between the 10 min production tab (228/min) and the 1 min production tab (250/min). I guess that could be some rounding error. Thanks for the clarification! By the way, it almost looks like the devs are making it complicated on purpose Re: Trying to understand the production numbers tralala wrote: For miners it's complicated and not clear. First of all mining power is not relevant at the moment (maybe it was in prior versions). Every mineable ressource has a mining hardness (which is comparable to the base production time) and iirc it's 0.9 for all mineable materials (iron/copper/stone/coal). The electric miner has a base speed of 0.5. This gives 0.9/0.5 = 1.85 s/ item or 0.555 items/s. I'm afraid this is completely incorrect. The mining rate is not speed / hardness. Rather it is: Speed * (Power - Hardness) / Mining time. An electric drill has a power of 3 and speed of 0.5. Stone has hardness 0.4 and iron 0.9, both have mining time of 2. So this gives: Stone: 0.5 * (3-0.4) / 2 = 0.65 / s Iron: 0.5 * (3-0.9) / 2 = 0.525 / s If power is high hardness doesn't matter too much. But if your power is, say, 1, iron will be 6 times harder to mine than stone. Power is like damage per hit, speed is hits per second, hardness is armor, time is health. If you mod the power to be below the hardness, you'll get a negative mining rate Re: Trying to understand the production numbers Holy-Fire wrote: tralala wrote: For miners it's complicated and not clear. First of all mining power is not relevant at the moment (maybe it was in prior versions). Every mineable ressource has a mining hardness (which is comparable to the base production time) and iirc it's 0.9 for all mineable materials (iron/copper/stone/coal). The electric miner has a base speed of 0.5. This gives 0.9/ 0.5 = 1.85 s/item or 0.555 items/s. I'm afraid this is completely incorrect. The mining rate is not speed / hardness. Rather it is: Speed * (Power - Hardness) / Mining time. An electric drill has a power of 3 and speed of 0.5. Stone has hardness 0.4 and iron 0.9, both have mining time of 2. So this gives: Stone: 0.5 * (3-0.4) / 2 = 0.65 / s Iron: 0.5 * (3-0.9) / 2 = 0.525 / s If power is high hardness doesn't matter too much. But if your power is, say, 1, iron will be 6 times harder to mine than stone. Power is like damage per hit, speed is hits per second, hardness is armor, time is health. If you mod the power to be below the hardness, you'll get a negative mining rate Thanks Holy-Fire, you are correct. I checked after I had written my post and although stone is mined faster than ore it was nowhere near my formula. So now we now for sure! I wonder what the reason is to introduce hardness. It doesn't seem to play a major role in the game. As of now it's seems to me be one of those features which add conmplexity but not depth. Maybe it's more relevant for some mods. Re: Trying to understand the production numbers tralala wrote: Holy-Fire wrote: tralala wrote: ... Thanks Holy-Fire, you are correct. I checked after I had written my post and although stone is mined faster than ore it was nowhere near my formula. So now we now for sure! I wonder what the reason is to introduce hardness. It doesn't seem to play a major role in the game. As of now it's seems to me be one of those features which add conmplexity but not depth. Maybe it's more relevant for some mods. You are right, currently it doesn't matter a lot because the mining powers are too similar. I still think it adds depth, at least at the idea level. And it would be easy to make it more meaningful - e.g., hardness of copper could be increased, so that you're barely able to scrape copper with burner drills, and have to move on to electric as soon as possible. There could also be more advanced resources with high hardness so that you'll need new, more powerful drills to mine effectively.
{"url":"https://forums.factorio.com/viewtopic.php?f=18&t=4059&p=30260","timestamp":"2024-11-08T02:39:51Z","content_type":"text/html","content_length":"63635","record_id":"<urn:uuid:60bfb01a-63b7-4674-abc1-e9cc2b6b0d28>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00415.warc.gz"}
Data + Design There are several different basic data types and it’s important to know what you can do with each of them so you can collect your data in the most appropriate form for your needs. People describe data types in many ways, but we’ll primarily be using the levels of measurement known as nominal, ordinal, interval, and ratio. Levels of Measurement Let’s say you’re on a trip to the grocery store. You move between sections of the store, placing items into your basket as you go. You grab some fresh produce, dairy, frozen foods, and canned goods. If you were to make a list that included what section of the store each item came from, this data would fall into the nominal type. The term nominal is related to the Latin word “nomen,” which means “pertaining to names;” we call this data nominal data because it consists of named categories into which the data fall. Nominal data is inherently unordered; produce as a general category isn’t mathematically greater or less than dairy. Nominal data can be counted and used to calculate percents, but you can’t take the average of nominal data. It makes sense to talk about how many items in your basket are from the dairy section or what percent is produce, but you can’t calculate the average grocery section of your basket. When there are only two categories available, the data is referred to as dichotomous. The answers to yes/no questions are dichotomous data. If, while shopping, you collected data about whether an item was on sale or not, it would be dichotomous. At last, you get to the checkout and try to decide which line will get you out of the store the quickest. Without actually counting how many people are in each queue, you roughly break them down in your mind into short lines, medium lines, and long lines. Because data like this has a natural ordering to the categories, it’s called ordinal data. Survey questions that have answer scales like “strongly disagree,” “disagree,” “neutral,” “agree,” “strongly agree” are collecting ordinal data. No category on an ordinal scale has a true mathematical value. Numbers are often assigned to the categories to make data entry or analysis easier (e.g. 1 = strongly disagree, 5 = strongly agree), but these assignments are arbitrary and you could choose any set of ordered numbers to represent the groups. For instance, you could just as easily decide to have 5 represent “strongly disagree” and 1 represent “strongly agree.” Like nominal data, you can count ordinal data and use them to calculate percents, but there is some disagreement about whether you can average ordinal data. On the one hand, you can’t average named categories like “strongly agree” and even if you assign numeric values, they don’t have a true mathematical meaning. Each numeric value represents a particular category, rather than a count of On the other hand, if the difference in degree between consecutive categories on the scale is assumed to be approximately equal (e.g. the difference between strongly disagree and disagree is the same as between disagree and neutral, and so on) and consecutive numbers are used to represent the categories, then the average of the responses can also be interpreted with regard to that same scale. Enough ordinal data for the moment… back to the store! You’ve been waiting in line for what seems like a while now, and you check your watch for the time. You got in line at 11:15am and it’s now 11:30. Time of day falls into the class of data called interval data, so named because the interval between each consecutive point of measurement is equal to every other. Because every minute is sixty seconds, the difference between 11:15 and 11:30 has the exact same value as the difference between 12:00 and 12:15. Interval data is numeric and you can do mathematical operations on it, but it doesn’t have a “meaningful” zero point – that is, the value of zero doesn’t indicate the absence of the thing you’re measuring. 0:00 am isn’t the absence of time, it just means it’s the start of a new day. Other interval data that you encounter in everyday life are calendar years and temperature. A value of zero for years doesn’t mean that time didn’t exist before that, and a temperature of zero (when measured in C or F) doesn’t mean there’s no heat. Seeing that the time is 11:30, you think to yourself, “I’ve been in line for fifteen minutes already…???” When you start thinking about the time this way, it’s considered ratio data. Ratio data is numeric and a lot like interval data, except it does have a meaningful zero point. In ratio data, a value of zero indicates an absence of whatever you’re measuring—zero minutes, zero people in line, zero dairy products in your basket. In all these cases, zero actually means you don’t have any of that thing, which differs from the data we discussed in the interval section. Some other frequently encountered variables that are often recorded as ratio data are height, weight, age, and money. Interval and ratio data can be either discrete or continuous. Discrete means that you can only have specific amounts of the thing you are measuring (typically integers) and no values in between those amounts. There have to be a whole number of people in line; there can’t be a third of a person. You can have an average of, say, 4.25 people per line, but the actual count of people has to be a whole number. Continuous means that the data can be any value along the scale. You can buy 1.25 lbs of cheese or be in line for 7.75 minutes. This doesn’t mean that the data have to be able to take all possible numerical values – only all the values within the bounds of the scale. You can’t be in line for a negative amount of time and you can’t buy negative lbs of cheese, but these are still To review, let’s take a look at a receipt from the store. Can you identify which pieces of information are measured at each level (nominal, ordinal, interval, and ratio)? Date: 06/01/2014 Time: 11:32am Item Section Aisle Quantity Cost (US$) Oranges—Lbs Produce 4 2 2.58 Apples—Lbs Produce 4 1 1.29 Mozzarella—Lbs Dairy 7 1 3.49 Milk—Skim—Gallon Dairy 8 1 4.29 Peas—Bag Frozen 15 1 0.99 Green Beans—Bag Frozen 15 3 1.77 Tomatoes Canned 2 4 3.92 Potatoes Canned 3 2 2.38 Mushrooms Canned 2 5 2.95 Variable Type Vs. Data Type If you look around the internet or in textbooks for info about data, you’ll often find variables described as being one of the data types listed above. Be aware that many variables aren’t exclusively one data type or another. What often determines the data type is how the data are collected. Consider the variable age. Age is frequently collected as ratio data, but can also be collected as ordinal data. This happens on surveys when they ask, “What age group do you fall in?” There, you wouldn’t have data on your respondent’s individual ages – you’d only know how many were between 18-24, 25-34, etc. You might collect actual cholesterol measurements from participants for a health study, or you may simply ask if their cholesterol is high. Again, this is a single variable with two different data collection methods and two different data types. There general rule is that you can go down in level of measurement but not up. If it’s possible to collect the variable as interval or ratio data, you can also collect it as nominal or ordinal data, but if the variable is inherently only nominal in nature, like grocery store section, you can’t capture it as ordinal, interval or ratio data. Variables that are naturally ordinal can’t be captured as interval or ratio data, but can be captured as nominal. However, many variables that get captured as ordinal have a similar variable that can be captured as interval or ratio data, if you so Ordinal Level Type Corresponding Interval/Ratio Level Measure Example Ranking Measurement that ranking is based on Record runners’ marathon times instead of what place they finish Grouped scale Measurement itself Record exact age instead of age category Substitute scale Original measurement the scale was created from Record exact test score instead of letter grade It’s important to remember that the general rule of “you can go down, but not up” also applies during analysis and visualization of your data. If you collect a variable as ratio data, you can always decide later to group the data for display if that makes sense for your work. If you collect it as a lower level of measurement, you can’t go back up later on without collecting more data. For example, if you do decide to collect age as ordinal data, you can’t calculate the average age later on and your visualization will be limited to displaying age by groups; you won’t have the option to display it as continuous data. When it doesn’t increase the burden of data collection, you should collect the data at the highest level of measurement that you think you might want available later on. There’s little as disappointing in data work as going to do a graph or calculation only to realize you didn’t collect the data in a way that allows you to generate what you need! Other Important Terms There are some other terms that are frequently used to talk about types of data. We are choosing not to use them here because there is some disagreement about their meanings, but you should be aware of them and what their possible definitions are in case you encounter them in other resources. Categorical Data We talked about both nominal and ordinal data above as splitting data into categories. Some texts consider both to be types of categorical data, with nominal being unordered categorical data and ordinal being ordered categorical data. Others only call nominal data categorical, and use the terms “nominal data” and “categorical data” interchangeably. These texts just call ordinal data “ordinal data” and consider it to be a separate group altogether. Qualitative and Quantitative Data Qualitative data, roughly speaking, refers to non-numeric data, while quantitative data is typically data that is numeric and hence quantifiable. There is some consensus with regard to these terms. Certain data are always considered qualitative, as they require pre-processing or different methods than quantitative data to analyze. Examples are recordings of direct observation or transcripts of interviews. In a similar way, interval and ratio data are always considered to be quantitative, as they are only ever numeric. The disagreement comes in with the nominal and ordinal data types. Some consider them to be qualitative, since their categories are descriptive and not truly numeric. However, since these data can be counted and used to calculate percentages, some consider them to be quantitative, since they are in that way quantifiable. To avoid confusion, we’ll be sticking with the level of measurement terms above throughout the rest of this book, except in our discussion of long-form qualitative data in the survey design chapter. If you come across terms “categorical,” “qualitative data,” or “quantitative data” in other resources or in your work, make sure you know which definition is being used and don’t just assume!
{"url":"https://trinachi.github.io/data-design-builds/ch01.html","timestamp":"2024-11-08T00:49:31Z","content_type":"application/xhtml+xml","content_length":"19773","record_id":"<urn:uuid:1d5922a0-71a9-44a5-b5aa-70819d355bd5>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00723.warc.gz"}
[In Depth] Principal Components Analysis: Concepts And Application | Neuraldemy In the previous post, we learned about SVD and how to use SVD for low-rank approximation. Building upon the concepts of SVD now let’s learn about principal component analysis and how to use this advanced tool for machine learning problems. Imagine you are working with a dataset with many features or a higher-dimensional dataset. The first problem you will face is figuring out what these features say about the data and how to find the important features. Additionally, It becomes really challenging to visualize such features since they exist in a higher dimension. That’s where the PCA comes in to help us. One thing you should note here since we are working with features only, not the target this makes PCA an unsupervised learning algorithm. Here are some definitions that will help you understand what it is. But before we start let’s see what you need to know before you learn about PCA. 2 Weeks to Learn Principal Component Analysis Linear Algebra Basics Matrix Operations Statistical Concepts Python & Scikit-Learn What You Will Learn • Concept of Principal Component Analysis (PCA) • How to perform PCA on datasets • Dimensionality Reduction and Feature Extraction • PCA Implementation Using Scikit-Learn Tutorial Outcomes By the end of this tutorial on Principal Component Analysis (PCA), you’ll have a solid understanding of how to reduce dimensionality of data and extract important features. You’ll be equipped to apply PCA to real-world datasets and gain insights through data visualization. This Tutorial Includes Certificate of completion Concepts & Explanation Hands-On Examples Jupyter Notebook Table of Contents What is Principal Component Analysis? Here are some definitions that will help you get an idea of what PCA is: Principal component analysis (PCA) is a popular technique for analyzing large datasets containing a high number of dimensions/features per observation, increasing the interpretability of data while preserving the maximum amount of information, and enabling the visualization of multidimensional data. Formally, PCA is a statistical technique for reducing the dimensionality of a dataset. This is accomplished by linearly transforming the data into a new coordinate system where (most of) the variation in the data can be described with fewer dimensions than the initial data. – Wikipedia^1 Principal component analysis (PCA) is a multivariate technique that analyzes a data table in which observations are described by several inter-correlated quantitative dependent variables. Its goal is to extract the important information from the table, to represent it as a set of new orthogonal variables called principal components, and to display the pattern of similarity of the observations and of the variables as points in maps. – Herve Abdi ´and Lynne J. Williams^2 Principal components analysis (PCA) is one of a family of techniques for taking high-dimensional data and using the dependencies between the variables to represent it in a more tractable, lower-dimensional form, without losing too much information. PCA is one of the simplest and most robust ways of doing such dimensionality reduction. It is also one of the oldest and has been rediscovered many times in many fields, so it is also known as the Karhunen-Loève transformation, the Hotelling transformation, the method of empirical orthogonal functions, and singular value – Cosma Shalizi^3 Principal component analysis (PCA) has been called one of the most valuable results from applied linear algebra. PCA is used abundantly in all forms of analysis – from neuroscience to computer graphics – because it is a simple, non-parametric method of extracting relevant information from confusing data sets. With minimal additional effort, PCA provides a roadmap for how to reduce a complex data set to a lower dimension to reveal the sometimes hidden, simplified structure that often underlies it. – Jonathon Shlens^4 I hope the above definitions gave you an idea of what PCA is all about. Now let’s move on to learning more about PCA starting with some basic concepts and then moving on to the complex ones and finally applying PCA to our datasets. Goals Of PCA: • Extract the most important information from the data table; • Compress the size of the data set by keeping only important information • Simplify the description of the data set; and • Analyze the structure of the observations and the variables. The Concept Of Covariance Matrix (Σ) To understand the covariance matrix you need to know these basic terms: Variance: variance measures the spread or dispersion of a set of values in a dataset. For a one-dimensional dataset, the variance is calculated as the average of the squared differences between each data point and the mean of the dataset. Var(X) = Σ (Xᵢ - X̄)² / n Code language: PHP (php) Covariance: Covariance is a measure of how much two random variables change together. The sign of the covariance shows the tendency of the linear relationship between variables. Cov(X, Y) = Σ (Xᵢ - X̄)(Yᵢ - Ȳ) / n Also, you need to note one most important thing that the covariance of one variable with itself is nothing but its variance. You can try putting it in the equation, you will get the formula of cov(x, x) = var(x) Code language: JavaScript (javascript) Additionally, you need to note this as well: cov(x, y) = cov(y, x) Correlation: Correlation on the other hand is also a measure of how two variables change together, but it’s more standardized and provides information about the strength and direction of the relationship. The correlation coefficient always ranges between -1 and 1. A correlation of 1 indicates a perfect positive linear relationship, -1 indicates a perfect negative linear relationship, and 0 indicates no linear relationship. Just know that they only talk about linear relationships, nothing else. correlation (X, Y) = covariance (X, Y) / (std_dev (X) * std_dev (Y)) Difference Between Covariance And Correlation: You need to understand the difference between these two terms before we move further: • Covariance: Covariance doesn’t have a clear interpretation of strength or direction. A positive covariance indicates a positive relationship, while a negative covariance indicates a negative relationship. However, the magnitude is not standardized, making it challenging to compare the strengths of relationships between different pairs of variables. • Correlation: The correlation coefficient provides a standardized measure of both the strength and direction of a linear relationship. A correlation of 0 means no linear relationship, 1 indicates a perfect positive linear relationship, and -1 indicates a perfect negative linear relationship. • Covariance: The units of covariance are the product of the units of the two variables. This makes it difficult to compare covariances between variables measured in different units. • Correlation: Being a dimensionless quantity, correlation is not affected by the units of measurement of the variables. This allows for more straightforward comparisons between different pairs of • Covariance: The range of covariance is unbounded. • Correlation: The range of correlation is always between -1 and 1. • Covariance: Covariance can be influenced by the scale of the variables. Therefore, it may not be a suitable measure for comparing the strength of relationships between different pairs of • Correlation: Correlation is not influenced by the scale of the variables, making it a more robust measure for comparing the strength and direction of relationships between different pairs of Correlation is often preferred in practice due to its normalization and ease of interpretation. Now that we have an idea of what these terms are let’s build our intuition for higher dimensional datasets so that we can understand the covariance matrix: Imagine a vector v[1 ]of size n * 1 ( means it has n rows and 1 column) whose elements are given by v[11] v[21] v[31] v[41] … v[n1]. Similarly, you can visualize p such vectors and their elements. Now, let’s consider a dataset (here p =3, n = 4) and see how they fit: v[1] v[2] v[3] 0 1.764052 0.400157 0.978738 1 2.240893 1.867558 -0.977278 2 0.950088 -0.151357 -0.103219 3 0.410598 0.144044 1.454274 The columns (or features) of a dataset can be imagined as vectors ( v[1], v[2] and v[3]) and their elements (v[11] v[21] v[31] v[41], v[12] v[22] v[32] v[42], v[13] v[23] v[33] v[43]) as rows for each column. When we are dealing with such a high-dimensional dataset we need a more simplified way to represent their variance and covariance, that’s where the concepts of covariance matrix appear. If we want to calculate the covariance of the above dataset we can perform the calculation for each element and represent them using a sample covariance matrix. For above dataset, v[1] V[2] v[3] v[1 ] 1.571125 0.788607 -0.609854 v[2] 0.788607 1.434861 -0.242534 v[3] -0.609854 -0.242534 1.160703 Notice that in the above covariance matrix: • The diagonal elements (e.g., 1.571125, 1.434861, 1.160703) represent the variances of individual variables. • The off-diagonal elements (e.g., 0.788607, -0.609854, etc.) represent the covariances between pairs of variables. The diagonal elements will always give you the variances of individual variables because they are nothing but covariance between the same elements. Now, if we have two variables entirely uncorrelated, in that case, we will have a covariance matrix with non-diagonal elements as 0 and diagonal elements will be their variances. This is what it will look like, you can extend it for the p variables and n rows: Symbolic Covariance Matrix for Uncorrelated Variables: σ1² 0 0 0 σ2² 0 0 0 σ3² Now that we have the basic intuition of a higher dimensional dataset and covariance matrix. Let’s define it formally: Definition of Covariance Matrix The covariance matrix or variance-covariance Matrix is a symmetric and positive semi-definite matrix that summarizes the variances and covariances between different variables in a multivariate dataset. It is often denoted by Σ and has dimensions p × p, where p is the number of variables. For a set of n observations on p variables, the elements of the covariance matrix Σ are calculated as follows: 1. Diagonal Elements (i = j): □ Σ[ii]: Represents the variance of the i^th variable. 2. Off-diagonal Elements (i ≠ j ): □ Σ[ij]: Represents the covariance between the i^th and j^th variables. • represents the variance of the ^th variable. • for represents the covariance between the ^th and ^th variables. Properties of Covariance Matrix: • Property: Σ = Σ^T (The matrix is symmetric). • Reason: The covariance between variables X[i] and X[j] is the same as the covariance between X[j] and X[i] . This symmetry arises from the definition of covariance as the average product of deviations from the means. Diagonal Elements: • Property: Diagonal elements represent variances. • Reason: The diagonal elements represent the covariance of a variable with itself, which is equivalent to its variance. Covariance with itself captures the spread or variability of each variable. Non-Negativity of Off-Diagonal Elements: • Property: Off-diagonal elements are non-negative means they are greater than or equal to zero, never negative. Invertibility (Under Certain Conditions): • Property: Σ is invertible (non-singular) if variables are linearly independent. • Reason: Invertibility ensures that there is no perfect linear relationship between variables. If the variables are linearly dependent, the matrix becomes singular and non-invertible. • Property: The correlation matrix (ρ) is obtained by normalizing the covariance matrix. • Reason: Normalizing by the standard deviations converts covariances to correlations, making the values unitless and facilitating comparison across different scales. Orthogonality Of Eigenvectors: • For a symmetric matrix (including a symmetric covariance matrix), the eigenvectors corresponding to distinct eigenvalues are orthogonal. We will use this property in PCA!! Positive Semidefinite Property: • The covariance matrix Σ is positive semidefinite, denoted as Σ ⪰ 0, which means that for any non-zero vector v, the expression v^TΣv is nonnegative. • The positive semidefinite property ensures that the covariance matrix is a non-negative, symmetric matrix. Each entry in the covariance matrix contributes positively to the overall measure of • This means it guarantees that the covariances and variances in the matrix are consistent with the inherent variability and relationships among the variables. • In the context of statistical analysis and machine learning, this property contributes to the stability of algorithms involving covariance matrices such as PCA that we are about to learn. • All eigenvalues of a positive semidefinite matrix are nonnegative, providing insights into the variability along different dimensions. Note: If you are a beginner you may ask but the elements (covariances) of the covariance matrix can be negative too so what does it mean by positive semidefinite? Just know that this property doesn’t imply that the covariances of the covariance matrix are nonnegative or can’t be negative; rather, it refers to its mathematical characteristics. A matrix Σ is positive semidefinite if and only if all its eigenvalues are nonnegative. This is equivalent to saying that, when the matrix is used in a quadratic form, the result is nonnegative for any non-zero vector v. The elements represent covariances and variances. Covariances can indeed be negative, indicating a negative linear relationship between variables. Variances (diagonal elements) are nonnegative. The term “semidefinite” is used because positive semidefinite matrices allow for the possibility of having zero eigenvalues. If all eigenvalues are strictly greater than zero, the matrix is called positive definite. If some or all eigenvalues are zero, it is positive semidefinite. So, when we say a covariance matrix is positive semidefinite, we mean that its mathematical properties guarantee certain stability and well-behavedness in statistical and machine learning contexts, and it doesn’t imply that all elements are nonnegative. I hope this was clear to you. Now, let’s see what is a correlation matrix. We use the correlation matrix in PCA as well. Difference Between Normalization And Standardization 1. Normalization is the process of scaling individual samples to have a unit norm. This is often done using the L2 norm, also known as the Euclidean norm. To calculate – simply divide the vector by its L2 norm. 2. Standardization, also known as Z-score normalization, involves scaling the features so that they have a mean of 0 and a standard deviation of 1. To calculate – first subtract the mean of the vector from the vector and then divide the output by the standard deviation of the vector. Correlation Matrix (ρ): A correlation matrix is nothing but the standardized version of the covariance matrix that provides a measure of the linear relationship between variables while accounting for differences in their Properties Of (ρ): 1. Symmetry: □ The correlation matrix is symmetric, similar to the covariance matrix. 2. Diagonal Elements: □ The diagonal elements are always equal to 1, as a variable has a perfect correlation with itself. 3. Range of Values: □ Correlation coefficients lie in the range[−1,1], just like the covariance matrix. 4. Unit Diagonal Matrix: □ The correlation matrix has ones on the diagonal and correlation coefficients off the diagonal, making it a unit diagonal matrix. 5. Positive Semidefinite: The correlation matrix is positive semidefinite. 6. Eigenvalues: All eigenvalues of a correlation matrix are non-negative. With these basic concepts in mind, we can now proceed further to understand principal component analysis. Please make sure to understand the above concepts clearly before moving ahead. You can ask your questions in the forum if any. Key Terms 💡 Semidefinite Matrix: A symmetric n×n real matrix M for which z^TMz ≥ 0 for all non-zero vectors z with real entries. Positive Semidefinite Matrix: A semidefinite matrix with non-negative eigenvalues. Negative Semidefinite Matrix: A semidefinite matrix with non-positive eigenvalues. Invertible Matrix: A square matrix A for which there exists another matrix B such that AB = BA = I, where I is the identity matrix. Singular Matrix: A square matrix that is not invertible, i.e., its determinant is zero. Positive Definite Matrix: A symmetric n×n real matrix M for which z^TMz > 0 for all non-zero vectors z with real entries. Negative Definite Matrix: A symmetric n×n real matrix M for which z^TMz < 0 for all non-zero vectors z with real entries. Scalars λ for which the equation Av = λv has a non-zero solution v, where A is a square matrix. Non-zero vectors v that satisfy the equation Av = λv, where A is a square matrix and λ is an eigenvalue. Characteristic Equation: The equation det(A – λI) = 0, used to find the eigenvalues of a matrix A. The process of finding a diagonal matrix similar to a given matrix. Linearly Independent: A set of vectors {v[1], …, v[n]} is linearly independent if the equation c[1]v[1] + … + c[n]v[n] = 0 is only satisfied when all c[i] = 0. Linearly Dependent: A set of vectors that is not linearly independent. A linearly independent set of vectors that spans a vector space. The set of all linear combinations of a set of vectors. A preprocessing technique that transforms features to have a mean (μ) of 0 and a standard deviation (σ) of 1, using the formula: z = (x – μ) / σ. Z-score Normalization: Another term for standardization, referring to the resulting standard normal distribution. A preprocessing technique that scales features to a fixed range, typically [0, 1], using the formula: x’ = (x – min(x)) / (max(x) – min(x)). Principal Component Analysis: Derivation And Concepts We use the principal component analysis technique to primarily work with higher dimensional datasets. We project the data onto a lower dimension by preserving as much information as possible. We will lose some information in the process and this difference is called reconstruction error which means if we wish to do this we need to minimize the amount of information we are losing or the reconstruction error. In other words, we need to maximize the variance of the data. So, here we have two ways to approach this problem: 1. Minimize the reconstruction error. 2. Maximize the variance. Notice that we don’t need to prove both the points above separately because both are the same thing. Additionally, we can also derive PCA using other methods that I will talk about shortly but let’s see an example image: 3D points projected to 2D by PCA^5 Notice how 3D points are projected to 2D. The first and second principal components have the maximum information or variation. Any matrix A can be expressed as a linear combination of its standard basis vectors in this way: A = a[11]⋅I[1]+a[12]⋅I[2]+a[21]⋅I[3]+a[22]⋅I[4]. The main goal behind PCA is to find another basis, which is a linear combination of the original basis that will best represent our dataset. This is what we are set to find out. Let X be an m × n matrix representing our dataset and P be a matrix that transforms X into Y (Y = PX). The rows of P, {p1, . . . , pm}, are a set of new basis vectors for expressing the columns of X. Here the rows of P are a new set of basis vectors for representing columns of X because this equation is nothing but projection transformation. The row vectors {p1, . . . , pm} in this transformation will become the principal components of X. Now, once we have transformed the dataset to the new basis, what we want here is to maximize the variation captured. PCA assumes that all basis vectors {p1, . . . , pm} are orthonormal and the direction in which variation is maximized is the most important. So, here is how it goes: 1. Choose a unit vector in an m-dimensional space that maximizes the variance in X. Save this vector as p1. 2. Find another direction in which variance is maximized but restrict the search to all directions perpendicular to all previously selected directions. 3. and keep going likewise. This is the basic algorithm we use for PCA. Principal components are the key vectors obtained through Principal Component Analysis. Each principal component is a linear combination of the original variables, capturing the maximum variance in the data. The first principal component (PC1) captures the maximum variance present in the data. Subsequent components (PC2, PC3, etc.) capture decreasing amounts of variance, ensuring an ordered representation of variability. The sum of the variances of all principal components equals the total variance in the data. This conservation property ensures that no information is lost during the transformation. Assumptions Of PCA for calculation Simplicity: PCA assumes the following things for the sake of simplicity and to utilize the linear algebra tricks best: 1. Variance and mean sufficient statistics – only hold for Gaussian distribution. 2. Linearity – can be extended to non-linearity using kernel PCA 3. Large variances have important dynamics. 4. The principal components are orthogonal. Why Orthogonality Is Preferred Between All The Principal Components? There are various reasons for choosing orthogonality: 1. Variation captured by one principal component is not duplicated by others. 2. Independence simplifies the interpretation of each principal component. 3. In the absence of orthogonality, the variance captured by one component would be redundant or correlated with the variance captured by another component. 4. We will see that PCA involves the computation of the covariance matrix of the standardized data. Diagonalizing the covariance matrix (making it diagonal) results in orthogonal eigenvectors, which become the principal components. This process ensures that the principal components are uncorrelated and orthogonal. 5. Orthogonal basis vectors simplify mathematical computations, such as the eigenvalue decomposition of the covariance matrix. 6. Orthogonality helps mitigate multicollinearity issues that can arise when variables are highly correlated. In the principal component space, each component is uncorrelated, reducing the risk of 7. Orthogonal matrices are computationally efficient. Eigen-decomposition of orthogonal matrices involves simpler algebraic operations compared to non-orthogonal matrices. Derivation Of PCA In this section, we will derive PCA and show you how you can calculate the principal components. One thing you should keep in mind is we first centre the data before doing any calculation and if the scales of data are too different then we can even standardize the dataset as well. Why Do We Centre The Data? 1. Centering the data involves subtracting the mean of each variable from the corresponding observations. This centres the data around the origin. By centring the data, we ensure that the origin (0, 0, …, 0) is the mean of the data. This is important because PCA is sensitive to the location of the data points. 2. If the data is not centred, the origin may not coincide with the mean of the data. In such cases, the principal components might be influenced more by the location of the data points than by their dispersion. 3. The covariance between two variables involves the product of deviations from their means. If the data is not centred, these deviations include the overall location of the data, affecting the 4. Centring removes translation effects, allowing the principal components to capture the intrinsic variability and relationships among variables, rather than being influenced by the overall position of the data. 5. Centring ensures that the mean of each variable is zero. If the data is not centred, the means of variables can introduce biases in the covariance matrix, affecting the orthogonality of the principal components. 6. Centering allows for meaningful comparisons across datasets. If datasets have different means, their covariance structures might differ, making it challenging to compare principal components PCA Derivation Using Covariance Matrix Or By Diagonalizing The Covariance Matrix Our goal is to find a new basis by maximizing the variance. We already know that the covariance matrix contains information about the direction of maximal variance. So, we will make a non-diagonal covariance matrix diagonal by rotating the coordinate system which is done using diagonalization. The eigenvectors of the covariance matrix align with the directions of both maximum and minimum variance. The corresponding eigenvalues represent the variances along these directions. Optimal linear dimensionality reduction is achieved by projecting the data onto the eigenvectors with the largest eigenvalues. Here is the detailed derivation using the covariance matrix. Please read and understand carefully and ask your queries in the forums. PCA Derivation Using Reconstruction or Projection Error: We can also derive PCA in another way. When we project our data our goal here is to minimize the projection error. Here is the intuition for this: Here X is the original data and X[||] is the projected. We try to minimize the reconstruction error which is the average distance of the original 2D-positions from the one-dimensional subspace or length of the projection arrows. For mathematical convenience one actually takes the average squared distance.^6 Focus just on the one point x and its projection x[||]. d is the distance of x from the origin, r is the distance of x from x[||] in the subspace, and v is the distance of x[|| ]from the origin. r and v depend on the direction of the subspace while d does not. Interestingly, since the triangles between x, x[||], and the origin are right-angled, r and v are related by Pythagoras’ theorem, i.e. r^2 + v ^2 = d^2 We know that r^2 contributes to the reconstruction error. v^2 on the other hand contributes to the variance of the projected data within the subspace. Thus we see that the sum over the reconstruction error plus the variance of the projected data is constant and equals the variance of the original data. Therefore, minimizing the reconstruction error is equivalent to maximizing the variance of the projected data. Here is the derivation: PCA Vs Regression: Least-squares linear regression vs. PCA. In linear regression, the projection direction is always vertical; whereas in PCA, the projection direction is orthogonal to the projection hyperplane. In both methods, however, we minimize the sum of the squares of the projection distances.^7 PCA Derivation Using SVD And Why Principal Components Analysis (PCA) Is A Special Case Of The SVD Here is why: If you want to know more about SVD, check out my previous post on SVD. Things To Know: 1. The total variance of the data is the sum of the eigenvalues of its covariance matrix. 2. The I^th principal component accounts for a proportion of variation, in the original data. 3. You can also perform spectral decomposition on the covariance matrix to know the contribution of respective PCs. 4. There is no one way to choose the important PCs but researchers usually suggest discarding less than one and selecting only those PCs that can account for a large proportion of the total variance. This is the serious drawback of this method. We will see how we can select important PCs in practice shortly. 5. The modern tendency is to view Principal Component Analysis as a mathematical technique with no underlying statistical mode. 6. Principal components are artificial variables and often it is not possible to assign physical meaning to them. 7. If the original variables are uncorrelated, then there is no point in carrying out the Principal Components Analysis. 8. Principal components depend on the scale of measurement. A conventional way of getting rid of this problem is to use standardized variables with unit variances. Scaling Problem In PCA: Principal components are generally changed by scaling therefore, not a unique characteristic of the data. If one variable has way more variability than others, it will dominate the first principal component in the covariance matrix, no matter the correlation structure. But, if we scale all variables to have the same variance, the first principal component becomes quite different. Because of these issues, PCA is often seen as less important unless variables have roughly similar variances, like when they are percentages or measured in the same units. To tackle scaling problems, a common approach is to look at the correlation matrix instead of the covariance matrix. Scaling in Principal Component Analysis (PCA) can still be a bit arbitrary and depends on the data. It doesn’t completely solve the scaling issue but helps avoid it. If the variables aren’t considered equally important, it’s not recommended to use the correlation matrix for analysis. Using the correlation matrix makes it harder to compare PCA results between different samples. When you transform variables back to their original form, the principal components of the correlation matrix won’t be perpendicular. This happens because changing two lines at right angles in Euclidean space doesn’t always result in two new perpendicular lines. This is part of why the scaling problem shows up in the first place. PCA Applications: What We Can Do With It? There are so many things we can do with PCA. Here are a few: 1. One can then look for outliers or groups or ‘clusters’ of individuals. This is an important use of Principal Component Analysis and often groupings of variables which would not be found by other 2. When you have more than three variables, making plots gets tricky. However, if the first two components grab a big chunk of the overall variation, it’s handy to plot the scores of these two components for each individual. This is called a Biplot. A Biplot is like a scatter plot showing both the row and column factors of two-way data on the same graph. 3. Multiple regression can be dangerous if the so-called independent variables are highly correlated. In this case, we regress on principal components which is known as principal components 4. Cutting down the number of things we’re looking at (dimensionality reduction) can be super useful in discriminant analysis, especially when we have a bunch of related things we’re measuring (p variables) but not many instances of them (n observations, where n is less than p). Having fewer observations than variables can cause a problem, but if we shrink the number of things we’re measuring a lot (thanks to Principal Component Analysis), it can fix that issue. Practical Applications: PCA is a useful tool for handling complex data across different fields. For image compression, it helps shrink image files while keeping important details, which makes them easier to store and share. In face recognition, PCA picks out the key facial features that make each face unique, speeding up and improving the accuracy of identifying people. In bioinformatics, PCA simplifies data from gene or protein studies by highlighting important patterns, helping researchers understand biological processes better. For speech recognition, PCA makes it easier for systems to process and understand spoken words by focusing on the most relevant features of the speech signal. In financial analysis, PCA helps to make sense of complex market data by identifying the main factors that drive trends, which makes financial models more effective. In chemistry, PCA is used to analyze complicated data from experiments, making it easier to understand and classify different substances. In remote sensing, PCA reduces the amount of data needed to interpret satellite or aerial images, helping to find important patterns in the data more quickly. In manufacturing, PCA helps to monitor and improve production processes by identifying key factors that affect product quality, making it easier to spot and fix problems. In brain imaging, PCA helps to simplify data from studies on brain activity, making it easier to understand how the brain works. Finally, in marketing, PCA helps businesses understand customer behavior by focusing on the most important factors, which helps in targeting marketing efforts and segmenting customers effectively. and the list goes on! PCA Implementation Using SkLearn And Numpy I have added the codes here. You will have to type these codes in your notebook to see the results and read my explanation accordingly. I would recommend writing the codes but if you want to see the outputs, you can visit my notebook below as well. Example 1 – Basic Calculation Of PCA using Numpy # create a dataset import numpy as np X = np.array([[1, 3, 5, 7, 9, 13, 20, 20, 21, 24, 26], [5, 7, 11, 14, 15, 17, 18, 19, 21, 22, 26]]) X.shapeCode language: PHP (php) # let's take the transpose to make our dataset X = X.T Code language: PHP (php) # Goal - Apply PCA and reduce the dataset from 2-D to 1-D import matplotlib.pyplot as plt plt.scatter(X[:, 0], X[:, 1]);Code language: CSS (css) # step - 1 -- Normalize the data X_normalized = X - np.mean(X, axis = 0) print(X_normalized)Code language: PHP (php) # plot the normalized data plt.scatter(X_normalized[:, 0], X_normalized[:, 1])Code language: CSS (css) # let's plot both the datasets - Notice the difference by checking the cordinates. plt.scatter(X[:, 0], X[:, 1]); plt.scatter(X_normalized[:, 0], X_normalized[:, 1]);Code language: PHP (php) # Step - 2 Calculate the covariance matrix C = np.cov(X_normalized, rowvar = False) CCode language: PHP (php) # Step - 3 - Calculate the eigenvalues and eigenvectors eigenvalues, eigenvectors = np.linalg.eig(C) print("Eigenvalues:", eigenvalues) print(eigenvectors)Code language: PHP (php) # Step - 4 - Sort the eigenvalues in desc. order sorted_index = np.argsort(eigenvalues)[::-1] sorted_eigenvalues = eigenvalues[sorted_index] sorted_eigenvaluesCode language: PHP (php) # let's check the variance explained by eigenvalues total_variance = np.sum(sorted_eigenvalues, axis = 0) # it should come out 80.87272727 + 40.69090909 = 121.56363636363636 total_varianceCode language: PHP (php) # percentage of variance explained by each eigenvalues first = sorted_eigenvalues[:1]/total_variance second = sorted_eigenvalues[1:]/total_variance print(second)Code language: PHP (php) As we can see our first or the largest eigenvalue is capturing 98% of the variance in the data and the second one is capturing less than 1. We usually avoid values less than 1 in practice when we are working with larger datasets sorted_eigenvectors = eigenvectors[:, sorted_index] # Step - 5 select the eigenvectors n = 1 # since we want to go from 2-D to 1- D eigenvectors_subset = sorted_eigenvectors[:,0:n] eigenvectors_subsetCode language: PHP (php) # Step - 6 Transform data or project the data onto a subspace give by eigenvectors_subset X_reduced = np.dot(eigenvectors_subset.transpose(), X_normalized.transpose()).transpose() X_reducedCode language: PHP (php) The above output is the one-dimensional representation of the original data. We call the outputs here factor scores along each component. Each row corresponds to an observation, and the column contains the factor score for the first principal component. Interpretation: The factor scores indicate how much each observation contributes to the first principal component. Positive scores suggest that the observation has a positive influence on the first principal component. Negative scores suggest a negative influence. Data Reconstruction: You can reconstruct the data using the factor scores and the eigenvectors. The reconstructed data in the original feature space can be obtained by multiplying the factor scores by the transpose of the eigenvectors and adding back the mean of the original data. Data Visualization: • Plot the factor scores to visualize the distribution of data along the first principal component. This can help you identify patterns or groupings in your data. • Clusters or groups of observations with similar patterns along the first principal component may suggest subpopulations or distinct patterns in your data. You can explore whether these clusters correspond to known categories or characteristics. • Observations with extreme factor scores (either very high or very low) might be considered outliers. Investigate these cases to understand if they represent unusual patterns or if there are data # The factor scores we obtained factor_scores = X_reduced # Additional analysis or visualization can be performed based on the factor scores. # For example, you can plot the factor scores to visualize the distribution. import matplotlib.pyplot as plt plt.plot(factor_scores, 'o-') plt.ylabel('Factor Score (PC1)') plt.title('Distribution of Observations along PC1') plt.show()Code language: PHP (php) PCA Using Sklearn # let's use sklearn to calculate PCA from sklearn.decomposition import PCA pca = PCA(n_components =1) # we are going from 2D to 1D and choosing 1 PC pca.fit(X_normalized) # calculation Xr = pca.transform(X_normalized) #projection print(Xr)Code language: PHP (php) As you can see the results are the same # here is our covariance matrix same as before pca.get_covariance()Code language: CSS (css) pca<strong>.</strong>get_feature_names_out()Code language: Python (python) # this will transform the reduced data back to the original feature space pca.inverse_transform(Xr) Code language: PHP (php) Example 2: PCA For Noise Filtering or Data Compression We will use the digits dataset for this example and show you how you can use PCA for noise filtering and also how you can choose the number of PCs from sklearn.datasets import load_digits import matplotlib.pyplot as plt from sklearn.decomposition import PCA import seaborn as sns; sns.set() digits = load_digits() digits.data.shapeCode language: JavaScript (javascript) Let’s first the PCA without selecting n_components. This will include all PCs. We can plot this to know how much variation is being explained by how many PCs How to choose the n_components pca = PCA().fit(digits.data) plt.xlabel('Number of components') plt.ylabel('Explained variance - commulative');Code language: JavaScript (javascript) From the graph above we can see that we need at least 20 or above components to describe the 90-95% variance in the data. Reducing the dataset to 3D or 2D may lose a lot of variance. This is how you can choose how many components to include in n_components. Another way of doing PCA in sklearn to include maximum variance is by describing n_components according to percentage you can input values between 0 to 1 in n_components for example n_components = 0.95 to describe 95% of the variance. Next, we are going to include some noise in our data and then use PCA to filter out those noise. This is just to show you how you can use PCA to filter out noise from noisy data. # create a function to plot our datset def plot_digits(data): fig, axes = plt.subplots(4, 10, figsize=(10, 4), subplot_kw={'xticks':[], 'yticks':[]}, gridspec_kw=dict(hspace=0.1, wspace=0.1)) for i, ax in enumerate(axes.flat): ax.imshow(data[i].reshape(8, 8), cmap='binary', interpolation='nearest', clim=(0, 16)) plot_digits(digits.data)Code language: PHP (php) # let's introduce some noise noisy_data = np.random.normal(digits.data, 4) plot_digits(noisy_data)Code language: PHP (php) pca = PCA().fit(noisy_data) plt.xlabel('Number of components') plt.ylabel('Explained variance - commulative');Code language: JavaScript (javascript) # let's see how many components preserve the 60% of variance pca = PCA(0.60).fit(noisy_data) pca.n_components_Code language: PHP (php) # let's apply and transform it back to recover the filtered data components = pca.transform(noisy_data) filtered_data = pca.inverse_transform(components) plot_digits(filtered_data)Code language: PHP (php) As we can see that it is a bit better than noisy data we can further apply PCA to make things a lot clear. As you can see the transformed data is not equivalent to the original data. This is what is called the reconstruction error. pca = PCA(0.45).fit(noisy_data) components = pca.transform(noisy_data) filtered_data = pca.inverse_transform(components) Well, it’s better than noise but the idea here is to learn how to perform PCA to reduce noise in any data. This is how you can do it. One thing you should note here you can also use PCA to compress data as well by using the amount of variance you want to preserve. Alternatively, this means you can use PCA for dimensionality reduction while using higher dimension dataset Example 3 – Dimensionality Reduction or Data Compression from sklearn.datasets import fetch_lfw_people faces = fetch_lfw_people(min_faces_per_person=60)Code language: JavaScript (javascript) faces.data.shapeCode language: CSS (css) # When the dataset is too big we use randomized PCA in sklearn rnd_pca = PCA(n_components = 150, svd_solver="randomized").fit(faces.data) data_reduced = rnd_pca.fit_transform(faces.data)Code language: PHP (php) plt.xlabel('number of components') plt.ylabel('cumulative explained variance');Code language: JavaScript (javascript) # faces after PCA fig, axes = plt.subplots(3, 8, figsize=(9, 4), subplot_kw={'xticks':[], 'yticks':[]}, gridspec_kw=dict(hspace=0.1, wspace=0.1)) for i, ax in enumerate(axes.flat): ax.imshow(rnd_pca.components_[i].reshape(62, 47), cmap='bone')Code language: PHP (php) projected_data = rnd_pca.inverse_transform(data_reduced) projected_data.shapeCode language: CSS (css) # Plot the results fig, ax = plt.subplots(2, 10, figsize=(10, 2.5), subplot_kw={'xticks':[], 'yticks':[]}, gridspec_kw=dict(hspace=0.1, wspace=0.1)) for i in range(10): ax[0, i].imshow(faces.data[i].reshape(62, 47), cmap='binary_r') ax[1, i].imshow(projected_data[i].reshape(62, 47), cmap='binary_r') ax[0, 0].set_ylabel('full-dim\ninput') ax[1, 0].set_ylabel('150-dim\nreconstruction');Code language: PHP (php) As we can see above we have reduced the dimensions of our data down to 150 and in a way we can compress it down. This is how we can reduce the dimensionality and feed the data_reduced to the further algorithm we are trying to use for other cases. What this means is in our pipeline we can first perform PCA which will allow us to preserve as much variance as we want or the essential information in our data and then we can use the reduced dimsn into another algorithm to perform our task efficiently without having to worry about bigger data size. Other forms of PCA in Practice: Incremental PCA Incremental PCA is useful when you have a large dataset that doesn’t fit into memory, and you want to perform PCA in smaller batches import numpy as np from sklearn.decomposition import IncrementalPCA # Generate a small dataset for demonstration X = np.random.rand(100, 5) # Specify the batch size for Incremental PCA batch_size = 10 # Create an Incremental PCA object ipca = IncrementalPCA(n_components=3, batch_size=batch_size) # Incrementally fit the model on batches of the data for i in range(0, len(X), batch_size): batch = X[i:i + batch_size] # Transform the entire dataset using the fitted Incremental PCA model X_transformed = ipca.transform(X) # Inverse transform to obtain an approximation of the original data X_approximated = ipca.inverse_transform(X_transformed) # Print the transformed data print("Original Data Shape:", X.shape) print("Transformed Data Shape:", X_transformed.shape) print("Approximated Data Shape:", X_approximated.shape)Code language: PHP (php) Sparse PCA It introduces sparsity in the loadings (coefficients) of the principal components. The objective is to find a sparse representation of the data, meaning that most coefficients are zero. This can be useful when you suspect that only a small number of features are relevant. import numpy as np import matplotlib.pyplot as plt from sklearn.decomposition import SparsePCA from sklearn.datasets import make_multilabel_classification from sklearn.preprocessing import StandardScaler # Generate a synthetic dataset for demonstration X, _ = make_multilabel_classification(n_samples=100, n_features=20, n_classes=2, n_labels=1, random_state=42) # Standardize the data (important for PCA) scaler = StandardScaler() X_scaled = scaler.fit_transform(X) # Create a Sparse PCA object spca = SparsePCA(n_components=3, alpha=0.1) # Adjust alpha for desired sparsity level # Fit the Sparse PCA model on the scaled data # Transform the data using the fitted Sparse PCA model X_transformed = spca.transform(X_scaled) # Inverse transform to obtain an approximation of the original data X_approximated = spca.inverse_transform(X_transformed) # Print the transformed data print("Original Data Shape:", X_scaled.shape) print("Transformed Data Shape:", X_transformed.shape) print("Approximated Data Shape:", X_approximated.shape)Code language: PHP (php) Truncated PCA Truncated SVD (Singular Value Decomposition) is often used as an approximation of PCA, especially when dealing with sparse data. The term “truncated” indicates that only the top-k singular values and their corresponding singular vectors are retained, leading to a reduced-dimensional representation of the data. This can be useful for tasks like dimensionality reduction and matrix factorization import numpy as np from sklearn.decomposition import TruncatedSVD from sklearn.datasets import make_sparse_coded_signal from sklearn.preprocessing import normalize # Generate a small synthetic sparse dataset for demonstration n_samples, n_features = 100, 20 # Specify the number of components and nonzero coefficients n_components, n_nonzero_coefs = 5, 3 X, _, _ = make_sparse_coded_signal(n_samples=n_samples, n_features=n_features, n_components=n_components, n_nonzero_coefs=n_nonzero_coefs, random_state=42) # Normalize for Truncated SVD X = normalize(X, axis=0) # Specify the number of components (k) for Truncated SVD n_components = 5 # Create a Truncated SVD object svd = TruncatedSVD(n_components=n_components) # Fit the Truncated SVD model on the data X_transformed = svd.fit_transform(X) # Inverse transform to obtain an approximation of the original data X_approximated = svd.inverse_transform(X_transformed) # Print the explained variance ratio for each component print("Explained Variance Ratio:", svd.explained_variance_ratio_) # Print the transformed data print("Original Data Shape:", X.shape) print("Transformed Data Shape:", X_transformed.shape) print("Approximated Data Shape:", X_approximated.shape)Code language: PHP (php) Kernel PCA Kernel PCA (Kernel Principal Component Analysis) is an extension of PCA that uses kernel methods to perform non-linear dimensionality reduction. In standard PCA, the principal components are obtained by linearly transforming the data into a new coordinate system. Kernel PCA, on the other hand, implicitly maps the data into a higher-dimensional space using a kernel function, making it possible to capture non-linear relationships between variables. We will study this in another tutorial. import numpy as np import matplotlib.pyplot as plt from sklearn.decomposition import KernelPCA from sklearn.datasets import make_circles # Generate a synthetic dataset for demonstration (non-linear data) X, _ = make_circles(n_samples=100, factor=0.5, noise=0.05, random_state=42) # Apply Kernel PCA with the radial basis function (RBF) kernel kpca = KernelPCA(kernel='rbf', gamma=15, n_components=2) X_kpca = kpca.fit_transform(X) # Plot the original and transformed data plt.figure(figsize=(10, 5)) plt.subplot(1, 2, 1) plt.scatter(X[:, 0], X[:, 1], c='b', marker='o', edgecolors='k', s=50) plt.title('Original Data') plt.subplot(1, 2, 2) plt.scatter(X_kpca[:, 0], X_kpca[:, 1], c='r', marker='o', edgecolors='k', s=50) plt.title('Kernel PCA Transformed Data') Code language: PHP (php) Example 4: PCA on IRIS Dataset Using Plotly This data set consists of 3 different types of irises’ (Setosa, Versicolour, and Virginica) petal and sepal lengths. The rows are the samples and the columns are: Sepal Length, Sepal Width, Petal Length and Petal Width. # import the data and plot it import plotly.express as px df = px.data.iris() features = ["sepal_width", "sepal_length", "petal_width", "petal_length"] fig = px.scatter_matrix( fig.show()Code language: PHP (php) # let's perform PCA and then plot using all PCs from sklearn.decomposition import PCA pca = PCA() components = pca.fit_transform(df[features]) labels = { str(i): f"PC {i+1} ({var:.1f}%)" for i, var in enumerate(pca.explained_variance_ratio_ * 100) fig = px.scatter_matrix( fig.show()Code language: PHP (php) # plot using 2D pca = PCA(n_components=2) components = pca.fit_transform(df[features]) fig = px.scatter(components, x=0, y=1, color=df['species']) fig.show()Code language: PHP (php) # using 3D pca = PCA(n_components=3) components = pca.fit_transform(df[features]) total_var = pca.explained_variance_ratio_.sum() * 100 fig = px.scatter_3d( components, x=0, y=1, z=2, color=df['species'], title=f'Total Explained Variance: {total_var:.2f}%', labels={'0': 'PC 1', '1': 'PC 2', '2': 'PC 3'} fig.show()Code language: PHP (php) # plot the explained variance of the dataset import numpy as np exp_var_cumul = np.cumsum(pca.explained_variance_ratio_) x=range(1, exp_var_cumul.shape[0] + 1), labels={"x": "# Components", "y": "Explained Variance"} Code language: PHP (php) PCA For Outlier Detection import numpy as np import matplotlib.pyplot as plt from sklearn.decomposition import PCA from sklearn.datasets import make_blobs # Generate a synthetic dataset with outliers for demonstration X, _ = make_blobs(n_samples=300, centers=1, cluster_std=1.0, random_state=42) outliers = np.array([[15, -4], [14, -3], [13, -2]]) # Add outliers X = np.vstack([X, outliers]) # Apply PCA pca = PCA(n_components=2) X_pca = pca.fit_transform(X) # Calculate reconstruction errors (squared Mahalanobis distances) reconstruction_errors = np.sum((X - pca.inverse_transform(X_pca))**2, axis=1) # Set a threshold for detecting outliers (adjust as needed) threshold = np.percentile(reconstruction_errors, 95) # Plot the data and outliers plt.figure(figsize=(10, 6)) # Plot the data points plt.scatter(X[:, 0], X[:, 1], c='b', marker='o', edgecolors='k', s=50, label='Inliers') # Highlight the outliers outliers_mask = reconstruction_errors > threshold plt.scatter(X[outliers_mask, 0], X[outliers_mask, 1], c='r', marker='o', edgecolors='k', s=100, label='Outliers') # Plot the principal components (eigenvectors) origin = pca.mean_ components = pca.components_.T * 3 # Scale for better visualization plt.quiver(*origin, *components[:, 0], color='orange', scale=1, scale_units='xy', angles='xy', label='PC1') plt.quiver(*origin, *components[:, 1], color='green', scale=1, scale_units='xy', angles='xy', label='PC2') plt.title('PCA for Outlier Detection') plt.xlabel('Feature 1') plt.ylabel('Feature 2') Code language: PHP (php) You can then remove the outliers from the data using the outliers_mask variable. from sklearn.datasets import load_iris from sklearn.preprocessing import StandardScaler # Load the Iris dataset iris = load_iris() X = iris.data # Standardize the data before applying PCA scaler = StandardScaler() X_standardized = scaler.fit_transform(X) # Apply PCA pca = PCA(n_components=2) X_pca = pca.fit_transform(X_standardized) # Calculate reconstruction errors (squared Mahalanobis distances) reconstruction_errors = np.sum((X_standardized - pca.inverse_transform(X_pca))**2, axis=1) # Set a threshold for detecting outliers (adjust as needed) threshold = np.percentile(reconstruction_errors, 95) # Plot the data and outliers plt.figure(figsize=(12, 6)) # Plot the data points plt.scatter(X_pca[:, 0], X_pca[:, 1], c='b', marker='o', edgecolors='k', s=50, label='Inliers') # Highlight the outliers outliers_mask = reconstruction_errors > threshold plt.scatter(X_pca[outliers_mask, 0], X_pca[outliers_mask, 1], c='r', marker='o', edgecolors='k', s=100, label='Outliers') # Plot the principal components (eigenvectors) origin = [0, 0] # Fix the unpacking of components into separate arguments plt.quiver(*origin, components[0, 0], components[1, 0], color='orange', scale=1, scale_units='xy', angles='xy', label='PC1') plt.quiver(*origin, components[0, 1], components[1, 1], color='green', scale=1, scale_units='xy', angles='xy', label='PC2') plt.title('PCA for Outlier Detection on Iris Dataset') plt.xlabel('Principal Component 1') plt.ylabel('Principal Component 2') Code language: PHP (php) If you have any questions please ask in the forum/community support. 1. PCA Wikipedia – Source ↩︎ 2. 2010 John Wiley & Sons, Inc ↩︎ 3. https://www.stat.cmu.edu/~cshalizi/uADA/12/ ↩︎ 4. A Tutorial on Principal Component Analysis, Jonathon Shlens, December 10, 2005; Version 2 ↩︎ 5. Jonathan Richard Shewchuk – Unsupervised Learning and Principal Components Analysis ↩︎ 6. Laurenz Wiskott , PCA Standford Notes ↩︎ 7. Same as point 5 ↩︎ Other Sources & Further Readings:
{"url":"https://neuraldemy.com/principal-components-analysis/","timestamp":"2024-11-02T12:13:53Z","content_type":"text/html","content_length":"465135","record_id":"<urn:uuid:00b4020f-0fe1-4a75-9a7d-8f001656cf97>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00611.warc.gz"}
seminars - The Unlikely Intersection Theory and the Cosmetic Surgery Conjecture The main result of this talk is the following theorem: Let M be a 1-cusped hyperbolic 3-manifold whose cusp shape is not quadratic, and M(p/q) be its p/q-Dehn filled manifold. If p/q is not equal to p'/q' for sufficiently large |p|+|q| and |p'|+|q'|, there is no orientation preserving isometry between M(p/q) and M(p'/q'). This resolves the conjecture of C. Gordon, which is so called the Cosmetic Surgery Conjecture, for hyperbolic 3-manifolds belonging to the aforementioned class except for possibly finitely many exceptions for each manifold. We also consider its generalization to more cusped manifolds. The key ingredient of the proof is the unlikely intersection theory developed by E. Bombieri, D. Masser, and U. Zannier.
{"url":"http://www.math.snu.ac.kr/board/index.php?mid=seminars&l=en&page=88&sort_index=speaker&order_type=desc&document_srl=767899","timestamp":"2024-11-03T20:39:25Z","content_type":"text/html","content_length":"45010","record_id":"<urn:uuid:3a0cfd22-e6ab-4951-957f-4205458e7fdc>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00069.warc.gz"}
Column Space Calculator Column Space Calculator Calculate the column space of a matrix step by step The calculator will find the column space of the matrix, with steps shown. Related calculator: Row Space Calculator Presenting the Column Space Calculator, a high-speed online resource created to accurately determine the column space of a matrix. Whether you're calculating the basis for column space or striving to understand what it is, this calculator will try to help you. How to Use the Column Space Calculator? • Input First, input your matrix. Remember to properly format your matrix with rows entered from left to right and columns from top to bottom. • Calculation Once your matrix is entered, click the "Calculate" button. • Result The calculator will quickly compute and display the column space of the provided matrix. What Is a Column Space? In the realm of linear algebra, the column space of a matrix, also known as the range of a matrix, is one of the four fundamental subspaces that give us insights into the solutions of linear systems. It consists of all possible linear combinations of the column vectors of the matrix. Mathematically, let's suppose we have a matrix $$$A$$$ whose columns are represented by the vectors $$$\left\{\mathbf{\vec{a_1}},\mathbf{\vec{a_2}},\dots,\mathbf{\vec{a_n}}\right\}$$$. The column space of $$$A$$$, denoted by $$$\operatorname{col}(A)$$$, consists of vectors that can be formed by multiplying each column vector by some scalar and adding them together. This can be expressed in the following way: \operatorname{col}(A)=\left\{x_1\mathbf{\vec{a_1}}+x_2\mathbf{\vec{a_2}}+\dots+a_n\mathbf{\vec{a_n}}\mid x_1,x_2,\ldots,x_n\in\mathbb{R}\right\} Here, $$$x_1,x_2,\ldots,x_n$$$ are scalars and they belong to the set of all real numbers, denoted by $$$\mathbb{R}$$$. The scalars can be any real number, and different scalars will yield different vectors in the column space. Let's take an example to understand this better. Suppose we have the following 3x3 matrix $$$B$$$: The column vectors are $$$\langle1,4,7\rangle$$$, $$$\langle2,5,8\rangle$$$, and $$$\langle3,6,9\rangle$$$. The column space of $$$B$$$ will be the set of all possible vectors we can get by taking any real numbers $$$x_1$$$, $$$x_2$$$, and $$$x_3$$$, and computing the vector $$$x_1\cdot\langle1,4,7\rangle+x_2\cdot\langle2,5,8\rangle+x_3\cdot\langle3,6,9\rangle$$$. The column space of a matrix plays a key role in determining the solution of the system of linear equations that the matrix represents. By understanding the column space, we gain critical insights into the structure of the solution of the system. Why Choose Our Column Space Calculator? • Efficiency Our calculator provides quick results, saving valuable time that manual calculations would require. It's especially useful when dealing with larger matrices that can be complex to handle • User-Friendliness The calculator is designed to be simple to use, with a clear interface and easy-to-follow instructions. This makes it accessible to both beginners and advanced users. • Accuracy The calculator ensures precise results, eliminating the risk of errors that can occur in manual calculations. • Versatility The calculator is capable of handling matrices of many dimensions. Whether it's a 2x2 matrix or a 4x4 matrix, you can rely on this calculator for accurate column space calculations. What is Column Space? In linear algebra, the column space of a matrix is the set of all possible linear combinations of the matrix's column vectors. It's crucial for understanding the solutions of linear systems. What is the basis for the column space of a matrix? The basis for the column space of a matrix consists of the set of linearly independent vectors that span the column space. In other words, it's the smallest set of vectors that can be used to create any vector in the column space. Why should I use the Column Space Calculator? Our Column Space Calculator offers a quick, precise, and user-friendly way to calculate the column space of a matrix. It's an invaluable tool for anyone dealing with linear algebra. Can I use this calculator for matrices of any size? Our Column Space Calculator can handle matrices of different sizes. Whether you have a 2x2 or 3x3 matrix, you can use this calculator to find its column space.
{"url":"https://www.emathhelp.net/calculators/linear-algebra/column-space-calculator/","timestamp":"2024-11-14T18:36:02Z","content_type":"application/xhtml+xml","content_length":"24353","record_id":"<urn:uuid:a975ede7-2035-4aa4-bb8d-c636477c20fa>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00271.warc.gz"}
Excel Chart Multiple Series From One Column 2024 - Multiplication Chart Printable Excel Chart Multiple Series From One Column Excel Chart Multiple Series From One Column – You could make a multiplication graph or chart in Stand out simply by using a web template. You will find many instances of web templates and figure out how to formatting your multiplication graph making use of them. Below are a few tricks and tips to produce a multiplication chart. After you have a design, all you have to do is duplicate the formula and paste it inside a new mobile phone. You can then use this formula to multiply some numbers by an additional establish. Excel Chart Multiple Series From One Column. Multiplication table web template If you are in the need to create a multiplication table, you may want to learn how to write a simple formula. First, you need to lock row one of several header column, then grow the amount on row A by cellular B. Another way to build a multiplication table is to try using merged recommendations. In cases like this, you would probably key in $A2 into column A and B$1 into row B. The end result is really a multiplication dinner table by using a method that works both for columns and rows. You can use the multiplication table template to create your table if you are using an Excel program. Just open up the spreadsheet with your multiplication table change and template the title to the student’s label. You can also modify the page to match your specific requirements. It comes with an choice to change the colour of the cells to improve the appearance of the multiplication desk, too. Then, it is possible to alter the range of multiples suitable for you. Developing a multiplication chart in Stand out When you’re employing multiplication table software program, it is simple to produce a simple multiplication desk in Excel. Simply create a sheet with rows and columns numbered from a single to 40. Where the rows and columns intersect will be the solution. If a row has a digit of three, and a column has a digit of five, then the answer is three times five, for example. The same goes for the Initially, you can enter in the numbers that you should multiply. For example, if you need to multiply two digits by three, you can type a formula for each number in cell A1. To produce the numbers greater, pick the cells at A1 and A8, and after that click on the proper arrow to choose an array of cellular material. After that you can kind the multiplication method in the cells in the other columns and rows. Gallery of Excel Chart Multiple Series From One Column Plotting Multiple Series In A Line Graph In Excel With Different Time Multiple Bar Charts On One Axis In Excel Super User Working With Multiple Data Series In Excel Pryor Learning Solutions Leave a Comment
{"url":"https://www.multiplicationchartprintable.com/excel-chart-multiple-series-from-one-column/","timestamp":"2024-11-12T07:41:46Z","content_type":"text/html","content_length":"52926","record_id":"<urn:uuid:0898d82e-d49c-427b-a093-bb1ccc0947fa>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00555.warc.gz"}
Transcription Factor Binding Site Prediction Using TF matrices to predict TF binding sites (TFBS) in regions of interest. This is my plan: a. Download TF matrices I have seen TRANSFAC and JASPAR mentioned in relation to TF matrices. I have found some text files in the JASPAR database that seem like what I need and I will probably use these. Would anybody know if these are any different from the TRANSFAC matrices? Any other resources for matrices? b. Predict TFBS in sequence of interest For each TF matrix, predict where TFBS could be found in the sequences of interest. I have looked at the TFBS module for perl and although I don't want to doubt that what it does is right, the way that it searches for TFBS is not clear to me and so I wouldn't want to use it in a serious analysis. My questions: 1. Are there any easy ways to bulk download TF matrices for all known TFs? (vertebrate, fly, nematode - separate for each species) 2. Is there a fast and usable TFBS prediction program? • has to run from the command line • has to be fast (I have quite a few sequences) Since I am completely at a loss and TF prediction is not exactly my area of expertise, I don't know if what I'm asking for is irrelevant, solved 100 times already etc. Feel free to just point me to some relevant reviews or such and/or your favourite programs. It seems that all resources I get are from the early 00s and many are not still functional. Entering edit mode Great package! I had been looking for something like this for some time. Entering edit mode This seems interesting, I'm not so good in python but maybe I could use it. Entering edit mode Hi i have a problem like you, i want to know if you could solved your problem with GIST. i don't know how can i run it. it hasnt any user guide. thanks a lot in advance Entering edit mode Is accurate enough to use TFBS matrices from humans to predict TFBS for other vertebrates ? Is there any relevant paper you can point me out? Is there also any up-to-date dabase with TFBS matrices? thanks a lot Entering edit mode Please check this paper and the related database CISBP: Weirauch, M. T., et al. (2014). "Determination and inference of eukaryotic transcription factor sequence specificity." Cell 158(6): 1431-1443.
{"url":"https://www.biostars.org/p/6415/#95391","timestamp":"2024-11-03T10:25:47Z","content_type":"text/html","content_length":"43436","record_id":"<urn:uuid:701d8304-72a2-4bda-9955-7640c3e97c9c>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00613.warc.gz"}
Bernardo NIPOTI | Professor (Associate) | Università degli Studi di Milano-Bicocca, Milan | UNIMIB | Economics Management and Statistics | Research profile How we measure 'reads' A 'read' is counted each time someone views a publication summary (such as the title, abstract, and list of authors), clicks on a figure, or views or downloads the full-text. Learn more Bernardo Nipoti currently works at the Department of Economics, Management and Statistics of the University of Milano Bicocca, Italy. Bernardo does research in Bayesian Statistics, with focus on nonparametric methods.
{"url":"https://www.researchgate.net/profile/Bernardo-Nipoti","timestamp":"2024-11-10T03:25:02Z","content_type":"text/html","content_length":"636977","record_id":"<urn:uuid:18630a97-5c2f-4b8d-a87b-322928144d3d>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00684.warc.gz"}
Bayesian Model Selection - (Advanced Signal Processing) - Vocab, Definition, Explanations | Fiveable Bayesian Model Selection from class: Advanced Signal Processing Bayesian model selection is a statistical method that uses Bayesian principles to compare different models and choose the one that best explains the observed data. This approach incorporates prior beliefs about model parameters and evaluates how well each model fits the data while accounting for model complexity. By applying Bayes' theorem, it quantifies the trade-off between the goodness of fit and the simplicity of models, allowing for more informed decisions in selecting models. congrats on reading the definition of Bayesian Model Selection. now let's actually learn it. 5 Must Know Facts For Your Next Test 1. In Bayesian model selection, the Bayes factor is often used to compare models, which is the ratio of the posterior probabilities of two models given the observed data. 2. This method allows for incorporating prior knowledge into the model selection process, making it particularly useful when data is limited or noisy. 3. Model complexity is penalized in Bayesian model selection, helping to prevent overfitting by favoring simpler models when they perform similarly to more complex ones. 4. Bayesian model selection provides a probabilistic framework that helps quantify uncertainty in model choice, unlike traditional methods that yield only point estimates. 5. The integration of priors and likelihoods in Bayesian model selection offers a comprehensive approach to evaluating competing hypotheses about data-generating processes. Review Questions • How does Bayesian model selection differ from traditional model selection methods? □ Bayesian model selection incorporates prior distributions and uses Bayes' theorem to evaluate models based on both their fit to the data and their complexity. Traditional methods often rely on criteria like AIC or BIC, which do not take into account prior information and may not adequately penalize complexity. The probabilistic nature of Bayesian model selection also allows it to quantify uncertainty in model choices more effectively than frequentist approaches. • What role does the Bayes factor play in Bayesian model selection and how is it calculated? □ The Bayes factor serves as a quantitative measure for comparing the evidence provided by different models given observed data. It is calculated as the ratio of the posterior probabilities of two competing models. This factor allows researchers to determine which model is more likely given the observed data, helping to facilitate informed decisions regarding which model best describes the underlying processes. • Evaluate the advantages and challenges of using Bayesian model selection in practical applications. □ The advantages of Bayesian model selection include its ability to incorporate prior knowledge, provide a comprehensive assessment of uncertainty, and effectively manage complexity through proper penalties. However, challenges include computational intensity due to integration over high-dimensional parameter spaces and the subjective nature of choosing appropriate priors. These factors can complicate implementation but can also enhance flexibility and robustness when performed correctly. "Bayesian Model Selection" also found in: © 2024 Fiveable Inc. All rights reserved. AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website.
{"url":"https://library.fiveable.me/key-terms/advanced-signal-processing/bayesian-model-selection","timestamp":"2024-11-11T01:05:26Z","content_type":"text/html","content_length":"161583","record_id":"<urn:uuid:dc7a1403-fd34-4093-88f4-9986edb2341d>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00209.warc.gz"}
forum.alglib.net :: View topic - Optimization problem with thousands of variables Hello everyone, I am trying to solve an optimization problem of a linear equation of the form F(x)=k1/x1+k2/x2+....+kn/xn, where "n" is of the order of 10k. The problem has about 10k non-linear equality and non-linear disequality constraints. To solve it I have used minnlc with SQP algorithm in C++, but the solution requires tenth of hours. Am I using a wrong solver that slows down everything or is this timing normal for the Alglib library? I saw the same problem to be solved in less than one minute, but I don't know which library had been used.
{"url":"http://forum.alglib.net/viewtopic.php?f=2&t=4588&view=print","timestamp":"2024-11-03T04:19:47Z","content_type":"text/html","content_length":"5265","record_id":"<urn:uuid:38c1f3c5-c9ab-4d38-a193-9f35f53518bc>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00734.warc.gz"}
Quench Spectroscopy: A Step-By-Step Guide | Dr S J Thomson Quench Spectroscopy: A Step-By-Step Guide A complete worked example of local quench spectroscopy Last week, I had a very nice exchange of e-mails with a PhD student interested in reproducing some results from a paper that I worked on, together with Louis Villa, Julien Despres and Laurent Sanchez-Palencia. The topic of the paper was local quench spectroscopy, a new technique that we proposed in 2020 for experimentally measuring the excitation spectrum of a quantum system by performing a quantum quench and observing the resulting dynamics. (There was also an earlier paper from the group using global quenches which I wasn’t involved with, and two later papers focused on disordered systems that I was involved with - for a full list of papers on the topic, see the end of this post!) As part of this discussion, I put together a short exact diagonalisation code that outlined the main steps of a numerical simulation performing local quench spectroscopy, making use of the excellent QuSpin package, and I thought I’d take this opportunity to turn it into a brief blog post going through the anatomy of a quench spectroscopy calculation. Classical waves moving in a medium exhibit a property called dispersion, where waves of different wavelengths propagate with slightly different velocities, which gives rise to a mathematical quantity called the dispersion relation that links the wavenumber (momentum) of a wave with its frequency, which enables calculation of the group and phase velocities of a wavepacket. That might sound a bit dry, but think of light passing through a prism, or the famous Pink Floyd album cover - white light goes in, but because each of the different frequencies of light travel at different velocities, they all pass through the prism a little differently and exit at different points, resulting in a rainbow of different colours. Quantum particles exhibit wave-like behaviour, and indeed excitations in quantum systems can also be described by a dispersion relation. As described by the Wikipedia page: In the study of solids, the study of the dispersion relation of electrons is of paramount importance. The periodicity of crystals means that many levels of energy are possible for a given momentum and that some energies might not be available at any momentum. The collection of all possible energies and momenta is known as the band structure of a material. Properties of the band structure define whether the material is an insulator, semiconductor or conductor. There are many ways to theoretically calculate dispersion relations, but experimentally it’s extremely challenging. One of the most well-established ways to do it involves measuring a quantity called the dynamical structure factor, which in practice involves measuring two-time correlation functions (i.e. given the state of a quantum system at a time t1, how similar is the state at a later time t2?), but this can be difficult, particularly in ultracold atomic gases where the measurements are typically destructive, i.e. once you measure the system once, you destroy the state that’s being measured, so it’s impossible to measure it again at a later time. So, can we come up with an alternative way of measuring dispersion relations of excitations in ultracold atomic gases? The answer, of course, is ‘yes’! I’ll leave a list of papers at the bottom of this post to check out if you want the full details as to how it works, but here I’ll try to give you an intuitive explanation, followed by the code so you can see it in action for yourself. The basic idea is the following: if we prepare the system in its ground state and then perform a quench to create a non-zero number of the excitations whose properties we want to measure, then by observing the resulting non-equilibrium dynamics, we can watch how the excitations move throughout the system. This gives us a picture of how the excitations move in space and time. The dispersion relation, however, relates the momentum of an excitation to its frequency. But we know that real-space and momentum are related to each other by Fourier transforms, and we know that frequency and time are related to each other by Fourier transforms. So in other words, if we observe the non-equilibrium dynamics in space and time, we can obtain the dispersion relation by making a 2D Fourier transform of the data! There are of course some caveats here - we need to first create a state which contains the excitations we want to measure, then we need to pick an observable capable of measuring their dynamics. I’ll leave these subtleties out of the following, and suggest that if you’re interested, you check out some of the papers listed at the end of this post. Here, I’m going to instead walk you through an example of a numerical simulation of a transverse field Ising model, where the excitations are spin flips, and show you how to compute the non-equilibrium dynamics following a local quench (a single spin flip) and extract the excitation spectrum, then compare it to the exact known solution and see how well it agrees. This model can be solved exactly, so it’s a good test system to use to illustrate the method. (In the paper, we also looked at the more complex Heisenberg and Bose-Hubbard models.) The following code is written in Python, tested using Python 3.9.7 on a Macbook Pro (2021). The numerics for our paper were performed using the TenPy library, which allowed us to use matrix product state methods to reach large system sizes, but here I’ll demonstrate the method on smaller systems using exact diagonalisation, which can be run in a few seconds on a modern laptop. (Consequently, none of the below code was used for our paper, and all was custom written for this blog post and the discussion I had last week with a PhD student. I stress this because I’m not allowed to share the original code used in the paper.) We start by importing the relevant packages: # Initialisation from scipy import fftpack import numpy as np import matplotlib.pyplot as plt from quspin.operators import hamiltonian from quspin.basis import spin_basis_1d from quspin.tools.block_tools import block_ops These packages are all standard (SciPy, NumPy, matplotlib), except possibly QuSpin which is a little more specialised but very useful, and it’s easy to download and install. If you use Anaconda, you can just type conda install -c weinbe58 quspin. Next up, we initialise the parameters that we’re going to use. We’ll set up a transverse field Ising model of size L=11, with a spin-spin coupling J0=1 and an on-site field h=3, such that the ground state of the system is in the fully polarised state. # Define variables L = 11 # System size # Use odd numbers so the flip is on the central spin J0 = 1. # Kinetic term (\sigma^x_i * \sigma^x_{i+1}) h = 3. # On-site term (\sigma_z) Now we’ll use QuSpin to build the Hamiltonian, which can be done like so: # Create site-coupling lists for periodic boundary conditions J = [[-J0,i%L,(i+1)%L] for i in range(L)] hlist = [[-h,i] for i in range(L)] # Create static and dynamic lists for constructing Hamiltonian # (The dynamic list is zero because the Hamiltonian is time-independent) static = [["xx",J],["z",hlist]] dynamic = [] # Create Hamiltonian as block_ops object # This splits the Hamiltonian into different momentum subspaces which can be diagonalised independently blocks=[dict(kblock=kblock) for kblock in range(L)] # Define the momentum blocks basis_args = (L,) # Mandatory basis arguments basis_kwargs = dict(S='1/2',pauli='1') H_block = block_ops(blocks,static,dynamic,spin_basis_1d,basis_args,basis_kwargs=basis_kwargs,dtype=np.complex128,check_symm=False) This initialises the Hamiltonian in terms of Pauli matrices, but spin operators can be used instead by changing the pauli='1' line to pauli='0'. The block_ops construction is used to split the Hamiltonian into different momentum-space blocks, which avoids having to build the entire Hamiltonian as a large matrix and lets use access slightly larger system sizes than we’d be able to reach otherwise. (To get to even larger system sizes, you can also restrict the basis to a particular magnetisation sector - in fact, for this model, we can restrict to the sector containing a single spin flip and simulate very large system sizes indeed, as it basically reduces to a single particle problem, but I’ll keep the following a little more general.) Following our paper, I want to make a measurement of the Pauli y operator on every lattice site, so now I set up the operator using the hamiltonian class. # Create the 1D spin basis basis = spin_basis_1d(L,S='1/2',pauli='1') # Set up list of observables to calculate no_checks = dict(check_herm=False,check_symm=False,check_pcon=False) n_list = [hamiltonian([["y",[[1.0,i]]]],[],basis=basis,dtype=np.complex128,**no_checks) for i in range(L)] According to the scheme set out in our paper, I should at this point initialise the system in its (fully polarised) ground state and apply a local operator to rotate the central spin into the x-y plane. Here I’m going to do something a bit different and bit easier. As we’re quite far into the polarised phase, I can approximate the ground state as a product state with all spins pointing up, i.e. the |1111….> state in the notation of QuSpin. Rather than applying a local operator, I can initialise the system in an equal superposition of the states |1111…> (approximately the ground state) and |11…101…11> (the ground state with a spin flip on the central site). This means that the central spin is pointed somewhere in the x-y plane. (Note that this will work only sufficiently far into the polarised phase!) We can do this using the following code snippet: # Define initial state as superposition of two product states in the Hilbert space # (This is equivalent to the spin rotation, but it's easier to do it like this in the ED code.) st = "".join("1" for i in range(L//2)) + "0" + "".join("1" for i in range(L//2)) iDH = basis.index(st) st_0 = "".join("1" for i in range(L)) i0 = basis.index(st_0) psi1 = np.zeros(basis.Ns) psi1[i0] = np.sqrt(0.5) psi1[iDH] = np.sqrt(0.5) Now that we’ve set up the initial state with a rotated central spin, next we have to time-evolve our state and compute the desired observable (as a reminder, the Pauli y operator on every lattice site). We can do that in the following way, here using a maximum time of t0=20 (implicitly in units of the spin-spin coupling J0) and 150 timesteps. # Time evolution of state times = np.linspace(0.0,20,150,endpoint=True) psi1_t = H_block.expm(psi1,a=-1j,start=times[0],stop=times[-1],num=len(times),block_diag=True) # Time evolution of observables n_t = np.vstack([n.expt_value(psi1_t).real for n in n_list]).T We can plot the density dynamics in the space-time basis using the following code: # Plot density dynamics fig=plt.imshow(n_t[::-1],aspect = 'auto',interpolation=None) plt.colorbar(fig, orientation = 'vertical',extend='max') The n_t[::-1] is used to rotate the figure so that time flows from bottom to top. (This could alternatively be achieved by changing the origin of imshow.) This results in the following figure: (Strictly speaking I should have modified the axis ticks as they’re a little off by default, but nevermind…) For a small system like this, the ’light cone’ spreading of the initial excitation quickly hits the boundaries and reflects back from them, but it turns out that these reflections don’t cause significant problems in this system, so we don’t need to worry about them here. (Beware though, they can cause problems in others!) Now we need to perform the 2D Fourier transform, which we can do using the built-in NumPy fft2 function. Notice here that there are a few tricky points to keep in mind. The first is that quench spectroscopy returns a signal at both positive and negative energies, so we use the fftpack library to compute the range of frequencies probed by this method so that we can label the final plot correctly. The second point is that the built in Fourier transform routines return the result in a way that isn’t very useful to us, so we use the fftshift function to rearrange the result in order from negative to positive frequencies. The Fourier transform code is the following: steps = len(times) dt = times[1] - times[0] # Compute Fourier transform of observables, and lists of momenta/energies ft = np.abs(np.fft.fft2(n_t, norm=None)) ft = np.fft.fftshift(ft) energies = fftpack.fftshift(fftpack.fftfreq(steps) * (2.*np.pi/dt)) # Normalise FT data = np.abs(ft)/np.max(np.abs(ft)) At this point, we’re almost there - all we need to do now is calculate the exact result for the spectrum to compare our results to, and plot the data with the exact result overlaid on top of it. This can be done using the following code: # Compute the exact result for the spectrum for plotting purposes spectrum = np.zeros(L) krange = np.linspace(-np.pi,np.pi,L) for i in range(L): spectrum[i] = 2*np.sqrt(h**2+J0**2-2*h*J0*np.cos(krange[i])) # Plot 2D Fourier transform plt.imshow(data[::-1], aspect = 'auto', interpolation = 'none' , extent = (-np.pi, np.pi, max(energies)/J0, min(energies)/J0)) This results in the following plot, where the density plot shows an object called the quench spectral function (QSF), and on top we plot the exact dispersion relation in red dashed lines. As you can see, they match perfectly! (Sidenote: we actually developed some techniques to get even cleaner data, but they’re too complex to go into here. They’re detailed in some of our later works, if you want more information.) And there you have it - that’s how to do a quench spectroscopy calculation, where we have made use of local quench spectroscopy to extract the dispersion relation of the transverse field Ising model in the ferromagnetic phase, simply by rotating the central spin, observing the resulting dynamics and Fourier-transforming the final data. This code is available as a single Python script from my repository on GitHub. For more information on quench spectroscopy using either local or global quenches, you can check out the following papers:
{"url":"https://steventhomson.co.uk/post/quench_spectroscopy/","timestamp":"2024-11-04T21:32:47Z","content_type":"text/html","content_length":"42937","record_id":"<urn:uuid:1229d60c-8a28-430d-bc42-125223266fc4>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00039.warc.gz"}
Problem set 4 due Monday, Nov 4, 2024 at noon Make sure your code can run before submission! Runtime > Run all Upload your .ipynb notebook to gradescope by 11:59am (noon) on the due date. Please include your name, Problem set number, and any collaborators you worked with at the top of your notebook. Please also number your problems and include comments in your code to indicate what part of a problem you are working on. Problem 1 The dataset below includes data simulated from work done by Carolyn Rovee-Collier. Dr. Rovee-Collier developed a new way to study very young babies’ ability to remember things over time: the “mobile conjugate reinforcement paradigm”. See a video of this paradim here and a nice description from Merz et al (2017) here: “In this task, one end of a ribbon is tied around an infant’s ankle and the other end is connected to a mobile hanging over his/her crib. Through experience with this set-up, the infant learns the contingency between kicking and movement of the mobile. After a delay, the task is repeated, and retention is measured by examining whether the infant kicks more during the retention phase than at baseline (i.e., spontaneous kicking prior to the learning trials; Rovee-Collier, 1997). Developmental research using the mobile conjugate reinforcement paradigm has demonstrated that both the speed of learning and length of retention increase with age” The simulated dataset includes 4 variables: 1. ratio - the measure of retention 2. day - the delay in days (1 through 14) 3. age - the age group: 2 month olds or 3 month olds 4. age_recoded - the age group recoded as 0 (2 month olds) and 1 (3 month olds) Explore these data with (at least) glimpse and a scatterplot. Include ratio on the y-axis, day on the x-axis, and color the dots by age. You may include any other explorations you wish to perform. Problem 2 Suppose you have specified that you will use a linear regression model to predict the simulated Rovee-Collier babies’ retention ratio by day and age. Your model can be represented by the following \(y=w_1x_1 + w_2x_1 + w_3x_2\), where: • \(y\) = ratio • \(x_1\) = 1 (constant) • \(x_2\) = day • \(x_3\) = age Fit the specified model using ordinary least squares approach with each of the three different functions we learned in class: (1) with lm, (2) with infer, and (3) with parsnip. Did all three ways return the same parameter estimates? Explain why or why not. Problem 3 Given the specified model and the parameters estimated in problem 2, compute the sum of squared error for the fitted model. Note: if you are stuck on Problem 2, you may proceed with this problem by using all zeros as your parameter estimates. Problem 4 Expanding on problem 3, write a more general function that would allow you to compute the sum of squared errors for any parameter estimates of the model specified in problem 2. Your function should have two arguments: (1) data and (2) ‘par’, which is a vector of the parameter estimates. Your function should return a single value as output. Test your function with each of the following parameter 1. 0.1, 0.2, 0.3 2. -10, -30, 5 3. 3, 2, 1 Which of these three options fit the data best? How do you know? Problem 5 Use the optimg package to find the optimal parameter estimates for the model specified in problem 2 via gradient descent. Initialize your search with \(b_0=-100\), \(b_1=100\), and \(b_2=0.5\). How many iterations were necessary to estimate the parameters? Are the parameters estimated by your gradient descent the same as those returned by lm() in Problem 2? Explain why or why not. Problem 6 The function given below finds the ordinary least squares estimate via matrix operations given two inputs: \(X\), a matrix containing the input/explanatory variables, and \(Y\), a matrix containing the output/response variable. Use this function to estimate the free parameters of the model specified in problem 2. Are the parameters estimated by the matrix operation the same as those returned by lm()? Explain why or why not. Problem 7 Estimate the accuracy of the specified model on the population using bootstrapping or k-fold cross validation (choose one, not both). Use the collect-metrics() function to return the \(R^2\) value. Challenge (optional) No points awarded or removed for this question! Just for fun!
{"url":"https://kathrynschuler.com/datasci/psets/pset-04.html","timestamp":"2024-11-04T13:45:00Z","content_type":"application/xhtml+xml","content_length":"40631","record_id":"<urn:uuid:14dbad7d-b130-4ff1-9514-9cfd9abb3c39>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00701.warc.gz"}
Mechanics dynamics Solved examples from dynamics. Dynamics is a part of mechanics in which causes and results of motion are examined. Material slides without friction from slope At the peak of the slope is material point. On time t=0 material point starts sliding down. Slope has triangle shape and stands on the ground. There is no friction between ground and slope’s base. Gravity field works on slope and material point. Gravity field is described by gravity acceleration g. Subject of example is to find road S which slope has traveled during material point sliding from Forces between material point and slope during material point’s slide Orb rolls from slope Orb rolls down from slope. Slope is inclined to the ground by angle α. Gravity field works on orb. Orb’s moment of inertia is known. There is sliding friction between orb and slope’s surface. During rolling orb spins around axis which is the middle of it. Spin axis is also parallel to slope’s surface. Rod during rotary motion Rod has length l and mass m. Rod stands in vertical position. In one moment rod’s end which is on top starts moving down around axis which is placed in rod’s second end. Rod is inside gravity field. Note that rod has mass m. It means that rod is rigid body so we can’t treat it like material point. Rod’s center of gravity is marked in its half of length. Subject of example is to find final velocity of rod’s end when it will be on the ground. Dynamics of rod during rotary motion Weights hanged through movable block Two weights are hanged via rope through movable block. Weights and block have mass. Rope is mass-less. Mass of one weight is bigger from second one. In moment weights start moving. One weight moves to up. Second weight moves down. Movement take place in gravity field. There is no friction between rope and block surface. We assume that air resist doesn’t exist. Because there is no friction on block’s surface no heat is released. During block’s motion rotate around axis in its middle. Block is a roller. Subject of example is to find acceleration of weight which moves down. Mechanics dynamics – example 5 Thread is fixed to the ceiling. Thread has a roller shape. Roller radius is r[m]. Mass m[kg] of thread is known. Thread is unrolling to down direction vertically. In example linear acceleration a[m/s ^2] has to be calculated. Thread is a roller with twine which was coiled on it. Mass of twine is so small so it is possible to assume it as equal to zero. Thread is a rigid body. Mechanics dynamics – example 5 Mechanics dynamics – example 6 Human is climbing on the rope in up direction. Earth gravity field works on human. Human has mass m[kg]. Rope is assumed as mass-less. Subject of example is designation of rope’s tension reaction force. In example the second principle of dynamics is applied. Mechanics dynamics – example 6 Mechanics dynamics – example 7 Roller is rolling to the slope’s top with start velocity v[0]. Roller executes flat motion because it is rotating around its axis of symmetry and moves progressive motion. Between roller contact surface with slope surface exists rolling friction. Rolling friction is described by rolling friction factor ƒ[m]. Roller has mass m[kg] and radius r[m]. Slope’s inclination angle α is known. Subject of example is to find road s which roller has traveled until it had stopped. The principle of conservation of total mechanical energy will be applied. Mechanics dynamics – example 7 Mechanics dynamics – example 8 Thin rod is fixed to the ceiling on one of its ends. End of rod is fixed to the ceiling by joint. Rod has length l[m] and mass m[kg]. Rod executes rotary motion around joint. Angular velocity ω is known. Rod is placed in Earth’s gravity field. Subject of example is designation of maximum force which works in rod’s center of mass. Rod is under influence of centrifugal force and gravity force. You must be logged in to post a comment.
{"url":"http://www.mbstudent.com/mechanical-engineering/mechanics-dynamics/","timestamp":"2024-11-12T07:11:50Z","content_type":"application/xhtml+xml","content_length":"46968","record_id":"<urn:uuid:36d08a67-2db5-466c-85da-a9444c605d21>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00281.warc.gz"}
Secure Chat 2.0 (Elliptic Curves, Protobufs, and MACs) Several months ago, I wrote a post about developing a secure chat in Rust using RSA and AES-CBC. Writing that post taught me a lot (like in this post, all of the crypto algorithms were implemented from scratch), but there were 2 major problems with the final result: 1. It was very hard to maintain. All the serialization/deserialization was done by hand over TCP streams, which meant that adding new features, like I wanted to do in this post, was nearly 2. There were multiple security issues in the chat. For example, messages were’nt MAC’d, meaning that attackers could modify messages sent over the network without the receiver knowing (MACs are explained later in this post). This version of the chat probably also has some security issues, but I’ve fixed the major ones from the previous post Another reason I’ve improved upon the previous project is that since then, I’ve learned some Elliptic-Curve Crypto, so I wanted to apply it somewhere. Prerequisites: Although this is a sequel, you don’t have to read the previous post before this one, since everything is written from the ground up. However, if you want to see the improvements to the code and the protocols, I highly reccommend reading it. Basic familiarity with Finite Fields and Groups is assumed. Additionally, if you haven’t read the previous post, you should be familiar with the difference between symmetric and assymmetric crypto, and the concept of certificates and CAs. Note that this post came out quite long :) I debated whether or not to split it into multiple, smaller posts, but in the end settled on doing one mega-post. If you prefer to, you can read it like a series of posts (each markdown # is a new post) The full code for the project is available here. What’s Protobuf? Despite “Elliptic Curves” being the change listed first in the title, the one I started working on first, and which proved to be very important, is using Protobufs. In the original post, as I mentioned in the intro, I implemented all the serializing & deserializing of messages from scratch. This made maintaining the code very difficult, so we need another solution. I didn’t want to use JSON for two reasons: 3. There’s no built-in support for working with binary data (which is very common in crypto), so we’d have to resort to messy solutions like encoding all binary data as hex-strings 4. There’s no predefined schema that parts of the code have to agree on for communication, which means that instead the schema is seperated all over comments in the code Instead, I used a different serialization format called Protocol Buffers (or Protobuf for short). Protobuf is a binary serialization format (unlike JSON), and was released by Google in 2008. It is language-agnostic (e.g. a Rust app that uses protobufs can talk with a Python app that uses protobufs), and has one major advantage over JSON: to serialize/deserialize with protobuf, you have to define a schema that defines how your messages look like (i.e. what fields they contain, and their types). For example, to serialize/deserialize a person object, we define the following message type in our schema: 1 message Person { 2 string name = 1; 3 uint32 age = 2; 4 } The object has two fields: a name, which is a string, and an age, which is a uint32 (i.e. 32-bit unsigned int). The numbers after the = are field numbers, which are used by the protobuf implementation to identify fields in the binary serialization of the data. Protobufs + Rust Now that we know what protobufs are, let’s see how to use them in Rust. We’ll do this using two crates: • protobuf, which is an implementation for the protobuf spec in Rust, and lets us serialize & deserialize data • protobuf_codegen, which compiles our protobuf schema to Rust code. Since this is all done at compile time, little to no overhead is involved when using protobufs For the part of getting this all set up, I found a very useful project on GitHub that shows how to use everything together. The gist of it is that we define our schema somewhere, for example in a file example.proto under directory protos, and then compile it as follows in the build.rs using protobuf_codegen: 1 fn main() { 2 protobuf_codegen::Codegen::new() 3 .cargo_out_dir("protos") 4 .include("src") 5 .input("src/protos/example.proto") 6 .run_from_script(); 7 } The above code creates a new Codegen struct, defines its output directory to be under protos, specifies that all inputs should reside in the src directory, adds the example.proto schema as input, and then compiles it using protoc, which is the protobuf compiler. Afterwards, to use our protobuf schema from Rust code, we use the include! macro to include the generated Rust code as a module, and then import it: 1 include!(concat!(env!("OUT_DIR"), "/protos/mod.rs")); 2 use example::{Person}; Then, to serialize a Person object, we use: 1 // Create a new Person struct 2 let mut msg = Person::new(); 3 msg.name = "John Doe".to_string(); 4 msg.age = 1337; 5 // Serialize to bytes 6 let msg_bytes = msg.write_to_bytes().unwrap(); 7 println!("{:?}", msg_bytes); Which prints: 1 [10, 8, 74, 111, 104, 110, 32, 68, 111, 101, 16, 185, 10] Nice! This is our Person object, serialized. The astute among you (or those who have read too many ASCII tables :)) will notice the string “John Doe” inside the bytes (74 is ‘J’, 111 is ‘o’, etc.). To decode the bytes back to a Rust struct, we use parse_from_bytes: 1 let msg_deser = Person::parse_from_bytes(&msg_bytes).unwrap(); 3 println!("Name: {}", msg_deser.name); 4 println!("Age: {}", msg_deser.age); This code prints: 1 Name: John Doe 2 Age: 1337 Which is what we would expect. Protobufs Over The Network In the chat, we’re going to transmit protobufs over the network, so it makes sense to write a short API that handles all the boilerplate of: 1. Serializing a message, and sending it over the network 2. Receiving a message of a certain type over the network, and deserializing it to get a Rust object To do this, we’re going to first create a new generic trait called MessageStream, which is an abstraction to send and receive protobuf messages over a TcpStream: 1 /// This trait allows us to send and receive untyped messages over a stream 2 /// We implement it for TcpStream 3 pub trait MessageStream<T: ProtobufMessage> { 4 // Receive a message of type T from the stream 5 fn receive_msg(&mut self) -> Result<T, io::Error>; 6 // Send a message of type T over the stream 7 fn send_msg(&mut self, msg: T) -> Result<usize, io::Error>; 8 } The term “untyped messages” will be explained later. The generic parameter T is the type of message to be sent/received. The ProtobufMessage trait (in the protobuf crate it’s named Message, but I renamed it) is implemented by all protobuf messages; for example the Person message from earlier. In order to send messages, we serialize the message to bytes, and send the number of bytes before so that the receiving end knows where the message ends: 1 impl<T: ProtobufMessage> MessageStream<T> for TcpStream { 2 fn send_msg(&mut self, msg: T) -> Result<usize, io::Error> { 3 // The first 8 bytes of the message are its size (in big-endian) 4 // and the rest of the bytes are the proto itself 5 let mut wire_bytes = msg.compute_size().to_be_bytes().to_vec(); 6 let mut msg_bytes = msg.write_to_bytes()?; 7 // These are the bytes we send over the wire 8 wire_bytes.append(&mut msg_bytes); 10 self.write(&wire_bytes) 11 } 12 } Then, in the receiving side, we start by reading the length of the message, which is the first 8 bytes (because of the way send_msg is implemented) from the stream, read that many bytes into a buffer, and then deserialize said buffer to a T: 1 impl<T: ProtobufMessage> MessageStream<T> for TcpStream { 2 ... 3 fn receive_msg(&mut self) -> Result<T, io::Error> { 4 // Parse the size 5 let mut size_bytes = [0u8; 8]; 6 self.read_exact(&mut size_bytes)?; 7 // Read `size` bytes from the stream 8 let mut payload_bytes = vec![0u8; u64::from_be_bytes(size_bytes).try_into().unwrap()]; 9 self.read_exact(&mut payload_bytes)?; 10 // Parse the payload and return it 11 let msg = T::parse_from_bytes(&payload_bytes)?; 13 Ok(msg) 14 } 15 } Since T is a ProtobufMessage, it implements parse_from_bytes, and therefore we can parse payload_bytes with it. That’s all well and good, and this API will serve us well enough for most of the project, but what if we don’t know what type of message to expect? To solve this problem, we’ll implement another API that handles typed messages: messages in which the type of the message is specified as a byte, and is sent over the wire. For this API, we start by defining a struct called TypedMessage: 1 // A message of a certain type 2 pub struct TypedMessage { 3 msg_type: u8, 4 payload: Vec<u8>, 5 } The payload field contains the serialized protobuf. Like in the previous API, we’ll implement two functions: send_typed_msg and receive_typed_msg: 1 /// Simialr to `MessageStream`. The main difference is that this trait 2 /// sends **typed** messages, i.e. the type of the message is transmitted over the wire 3 /// and the receiver can perform specific actions according to the type of the message received 4 pub trait TypedMessageReader { 5 // Receive a message 6 fn receive_typed_msg(&mut self) -> Result<TypedMessage, io::Error>; 7 } 9 pub trait TypedMessageSender<T: ProtobufMessage> { 10 // Send a message of type T over the stream. We also require 11 // a u8 that indicates the type of the message 12 fn send_typed_msg(&mut self, msg: T, msg_type: u8) -> Result<usize, io::Error>; 13 } To implement these, all we have to do is send, in addition to the length of the message, its type, before the payload: 1 impl<T: ProtobufMessage> TypedMessageSender<T> for TcpStream { 2 fn send_typed_msg(&mut self, msg: T, msg_type: u8) -> Result<usize, io::Error> { 3 // The first 8 bytes of the message are its size (in big-endian) 4 // , after that we have one byte indicating the type, and the rest of the bytes are the proto itself 5 let mut wire_bytes = msg.compute_size().to_be_bytes().to_vec(); 6 let mut msg_bytes = msg.write_to_bytes()?; 7 // These are the bytes we send over the wire 8 wire_bytes.push(msg_type); 9 wire_bytes.append(&mut msg_bytes); 11 self.write(&wire_bytes) 12 } 13 } Receiving a typed message is very similar: 1 impl TypedMessageReader for TcpStream { 2 fn receive_typed_msg(&mut self) -> Result<TypedMessage, io::Error> { 3 // Parse the size 4 let mut size_bytes = [0u8; 8]; 5 self.read_exact(&mut size_bytes)?; 6 // Parse the msg type 7 let mut type_bytes = [0u8; 1]; 8 self.read_exact(&mut type_bytes)?; 9 // Read `size` bytes from the stream 10 let mut payload_bytes = vec![0u8; u64::from_be_bytes(size_bytes).try_into().unwrap()]; 11 self.read_exact(&mut payload_bytes)?; 13 Ok(TypedMessage { 14 msg_type: type_bytes[0], 15 payload: payload_bytes, 16 }) 17 } 18 } That’s it! We now have two APIs that will make our lives much easier. Implementing these APIs saved me a lot of work, since they abstracted away the mechanism for sending & receiving messages, so I didn’t have this to think about the underlying implementation every time I wrote/changed a protocol. Overall, my experience with protobufs has been very positive for this project, and I’ll definitely be using them for other projects as well. Having all the message types be in a schema, instead of specified in comments all over the code, makes the code much tidier, and also makes adding new message types a breeze :) Elliptic Curves Elliptic Curves 101 In the last post, we’ve used RSA for two purposes: encryption/decryption and signing/verification. In practice, however, RSA is rarely used for these goals (perhaps only on legacy systems). An alternative cryptosystem based on Elliptic Curves (EC for short) provides the same level of security, but with a smaller key size, thereby reducing storage needs, and more importantly for our purposes the amount of data that needs to be transmitted over the network. Recall that geometric objects, such as circles or ellipses, can be described using algebraic equations. For example, the following equation describes a circle (centered at the origin) with radius 5: a 2D point (x, y) is on said circle, if and only if it satisfies the below condition. The relationship between geometric shapes and the algebraic equations that describe them is studied in a branch of math called Algebraic Geometry. Another one of the objects studied in Algebraic Geometry is called an Elliptic Curve, which are curves described by the following equation, where ‘a’ and ‘b’ are parameters, and -16(4a^3 + 27b^2) is not equal to zero: For example, if we set ‘a’ to 1.337 and ‘b’ to 2, we get the below curve: Elliptic Curves are used in many branches of math; for example, they were a central part of the proof of Fermat’s Last Theorem. EC Point Addition So now we have this strange curve, but what can we do with it? In crypto, ECs are used to define a group whose elements are the points on the curve. The group operation is called point addition: given two points A and B on the curve, applying the group operation yields a new point A + B that is also on the curve. The Simple Case Generally, we define addition as follows: take two points on the curve, A and B. Draw the unique line that connects A and B. Typically, this line will intersect the curve at some point C. We then define A + B to be -C, which is the reflection of C along the x-axis. An example for adding the two points A: (1.68124, 3) and B: (-0.59243, 1) on the previous curve to yield a new point A + B = D is shown below: Doubling a Point That’s all well and good, but there are still some edge cases we need to figure out. For once, what if A = B, and so there’s no unique line that connects A and B? In this case, we instead use the tangent line to the curve at the point A = B, and other than that use the same procedure of finding the interesection with the curve and reflecting along the x-axis. An example of this is shown below, where we add the same point B from earlier to itself to yield a point D = 2B = B + B. This operation is also called point doubling, and is central to EC Crypto: The equation for the tangent line can be found using Implicit Differentiation. I won’t go through the exact derivation here, since it’s quite technical, and not very interesting. For the implementation, all we need to know is that algebreically, the tangent is to the curve at the point (x0, y0) is defined as below: The Point at Infinity Yet another problem: how can we add a point A to its negative -A (i.e. A’s reflection along the x-axis)? In this case, the line connecting the two points is vertical, and so it doesn’t intersect the curve at any other point, rendering our previous definition useless. In this case, we say that A + (-A) is defined as the point at infinity O, which is an imaginary point that any line interesects with eventually. The point at infinity lies on every elliptic curve, and is the identity element of the group: for every point A, A + O = O + A = A (this is implied by how A + (-A) = O for all points A on the curve). So far, we’ve talked about curves over the real numbers: the coordinates of points that lie on these curves are real numbers. In crypto, however, we only deal with curves over finite fields, namely the additive group of integers modulo p Z_p, where p is a prime. All the previous definitions still apply, but we move them to the realm of finite fields: for instance, when computing the negative of a point A(x, y), which as you’ll recall is defined by (x, -y), the second coordinate -y is now the additive inverse of y modulo p. Implementing ECs in Rust Now, let’s get to implementing ECs in Rust. We start with defining a struct Curve that contains the EC’s parameters (a and b), and the modulus p of the finite field over which we are working: 1 /// An elliptic curve of the form y^2 = x^3 + ax + b 2 /// We only consider curves over `Z_p`, where p is a prime (i.e. the additive group of integers modulo p) 3 #[derive(PartialEq, Clone, Debug)] 4 pub struct Curve { 5 a: BigUint, 6 b: BigUint, 7 p: BigUint, 8 } Next, to use points, we define an enum Point. Recall that a point either has coordinates, or is the point at infinity: 1 /// A general point; Can either have coordinates, or be the 'point at infinity' (O) 2 #[derive(Debug, PartialEq, Clone)] 3 pub enum Point { 4 Coords(BigUint, BigUint), 5 O, 6 } We also define a point associated with a curve, which will make our life a bit simpler down the line: 1 /// A point (x, y) that lies on an EC 2 #[derive(Debug, PartialEq, Clone)] 3 pub struct CurvePoint { 4 point: Point, 5 curve: Curve, 6 } Given a curve, we want to have the option to generate new points on the curve. We do this with a function gen_point (associated with the Curve struct), which takes in optional coordinates, and returns a CurvePoint: 1 /// Return a new point w/coordinates (x, y) on the curve 2 /// x and y are reduced modulo p 3 /// If one of the coordinates is None, the point at infinity is returned 4 pub fn gen_point(&self, coords: Option<(&BigUint, &BigUint)>) -> CurvePoint { 5 if let Some((x, y)) = coords { 6 // Reduce the coordinates modulo p 7 let x_red = x % &self.p; 8 let y_red = y % &self.p; 10 CurvePoint { 11 point: Point::Coords(x_red, y_red), 12 curve: self.clone(), 13 } 14 } else { 15 CurvePoint { 16 point: Point::O, 17 curve: self.clone(), 18 } 19 } 20 } Note how we reduce the coordinates of the point mod p, since the curve is defined over Z_p. In case the coordinates aren’t specified, we simply return the point at infinity. Point Addition in Rust Now for the hard part: point addition. For this I used the algorithm from “An Introduction to Mathematical Cryptography” by Hoffstein et al: Let’s walk through this: • (a) and (b) are the cases where one of the points is the point at infinity, which is the identity, and so the result is the other point (which may also be the identity). • Case (d) is the case in which the one of the points is the additive inverse of the other, and so the result is the identity • In case (e), we define either the line connecting P_1 and P_2, or the line tangent to the curve at P_1 = P_2, depending on whether P_1 != P_2 or P_1 = P_2, respectively. We finally find the point of intersection with the curve, and reflect it along the x-axis to yield the result (x_3, y_3) (these two steps are combined in one step). Let’s get to implementing this! We’ll use the Add trait, which is an alias to the + operation in Rust. Implementing this trait for CurvePoint (we implement this for CurvePoint and not for Point since we also need to know what curve we’re adding over; this is why we combined them both into a single struct) requires implementing a single function add: 1 impl Add for CurvePoint { 2 type Output = Result<CurvePoint, ECError>; 4 fn add(self, rhs: Self) -> Self::Output { 5 ... 6 } 7 } We return a Result since adding two points from two different curves is an error; we define this in a new enum ECError: 1 #[derive(Debug)] 2 pub enum ECError { 3 /// Performing operations on points from two different curves is an error 4 DifferentCurves, 5 } Let’s check for this inside the add function: 1 if self.curve != rhs.curve { 2 Err(ECError::DifferentCurves) 3 } In the other case, where the curves are equal, we’ll match over the two points. We’ll start with handling the simple case where one of the points is the identity (cases (a) and (b) from the algorithm). We’re also extracting the curve into a helper variable: 1 else { 2 let curve = self.curve; 4 match (self.point, rhs.point) { 5 // If either of the points is the point at infinity, 6 // return the other, since the point at infinity is the identity element for point addition 7 (Point::O, q) => Ok(CurvePoint { point: q, curve }), 8 (p, Point::O) => Ok(CurvePoint { point: p, curve }), 9 ... 10 } Next, if both points have coordinates, we denote the coordinates with (x1, y1) and (x2, y2), respectively, as in the algorithm. In case the points are the additive inverse of each other (i.e. each other’s reflection along the x-axis), we return the point at infinity 1 (Point::Coords(x1, y1), Point::Coords(x2, y2)) => { 2 // If P and Q are the inverse of each other (i.e. the reflection along the X axis) 3 if x1 == x2 && y1 == (&curve.p - &y2) { 4 Ok(CurvePoint { 5 point: Point::O, 6 curve, 7 }) 8 } else { 9 ... 10 } 11 ... 12 } Note how we check whether y1 is equal to y2 in modulo p. It follows from the implementation of gen_point that all the coordinates are mod p, so there’s no need to reduce them again (i.e. checking for (y1 % &curve.p) == (&curve.p - (&y2 % &curve.p))). Finally, in case the points aren’t inverses of each other, we compute the slope for the line connecting P and Q (or the tangent line to P = Q in case they’re equal): 1 else { 2 // Compute the slope of the line defined by P and Q 3 let lambda = if (&x1, &y1) != (&x2, &y2) { 4 let denom = (&x2 + (&curve.p - &x1)).modinv(&curve.p).unwrap(); 5 let nom = (y2 + (&curve.p - &y1)) % &curve.p; 7 (nom * denom) % &curve.p 8 } else { 9 // If they're the same point, lambda is different 10 let denom = (BigUint::from(2u64) * &y1).modinv(&curve.p).unwrap(); 11 let nom = (BigUint::from(3u64) * &x1 * &x1 + &curve.a) % &curve.p; 13 (nom * denom) % &curve.p 14 } 15 }; It looks a bit complicated because of all the moduli, but in the case the points aren’t equal, for example, the denominator is defined as 1 / (x2 - x1). The &curve.p - &x1 computes the additive inverse of x1 modulo p, and the modinv computes the multiplicative inverse modulo p of x2 - x1. Finally, now that we have the slope, we compute the intersection with the curve, reflect along the x-axis, and return the result: 1 // The coordinates of the result 2 let x3 = (&lambda * &lambda + (&curve.p - &x1) + (&curve.p - &x2)) % &curve.p; 3 let y3 = ((lambda * (&x1 + (&curve.p - &x3))) % &curve.p 4 + (&curve.p - &y1)) 5 % &curve.p; 7 Ok(CurvePoint { 8 point: Point::Coords(x3, y3), 9 curve, 10 }) Multiplication by a Scalar Great! Now we can add points, but we still aren’t done. With curves over integers, there’s an operation called multiplication by a scalar that forms the basis for many EC crypto algorithms: the multiplication kP of a point P by an integer scalar k is defined as adding P to itself k times. Doing this naively is very inefficient, so instead we’re going to use an algorithm called the double-and-add algorithm (very similar to the square-and-multiply algorithm used for an efficient implementation of modexp). Here’s the gist of the algorithm: • Suppose we want to compute kA, where k is in Z_p and A is a point in a curve over Z_p • Since each b_i is either a 0 or a 1, we can compute the binary representation of k, track the result, and add b_i (2 ^ i A) each time b_i is 1 Here’s the code for this (I implemented the algorithm using the pseudocode from wikipedia): 1 impl CurvePoint { 2 // Mulitply the point by a scalar k. Multiplication by a scalar 3 // is defined by adding the point to itself k times. We do this using the double-and-add algorithm 4 // This is done acc. to the pseudocode on Wikipedia; see https://en.wikipedia.org/wiki/Elliptic_curve_point_multiplication 5 pub fn dot(&self, k: &BigUint) -> CurvePoint { 6 // The bits of k, from LSB to MSB 7 let bits: Vec<bool> = (0..k.bits()).map(|pos| k.bit(pos)).collect(); 8 let mut res = CurvePoint { 9 point: Point::O, 10 curve: self.curve.clone(), 11 }; 12 let mut temp = self.clone(); 14 for bit in bits { 15 if bit { 16 res = (res + temp.clone()).unwrap(); 17 } 19 temp = (temp.clone() + temp).unwrap(); 20 } 22 res 23 } 24 } The res starts as being the point at infinity, and is added to each time we have a bit in k is set. At iteration i, temp is defined as 2^i A, and so we’re adding b_i 2^i A to temp at each iteration of the loop (when b_i = 0 we don’t add anything), as in the sum from earlier. How much work does this algorithm save us? Instead of needing to perform k EC additions, we only need to perform log k (the number of bits in the binary representation of k). This is a massive improvement! Standard Curves Some Elliptic Curves are stronger than others, so in ECC we typically work over standard curves (unlike RSA, where you mostly generate your own modulus). The parameters (a, b, and p) of these Standard Curves are chosen specifically to make them more secure than others. To allow usage of some common standard curves, we’ll implement a new module std_curves.rs that contains two common curves: NIST P-256 and Secp256k1 (the curve used by Bitcoin). To do this, we’ll use use the once_cell crate to initialize these standard curves (since we have to call Curve::new). For example, here’s the NIST P-256 curve: 1 // NIST P-256 Curve parameters 2 const NIST_P_256_P: &str = "ffffffff00000001000000000000000000000000ffffffffffffffffffffffff"; 3 const NIST_P_256_A: &str = "ffffffff00000001000000000000000000000000fffffffffffffffffffffffc"; 4 const NIST_P_256_B: &str = "5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b"; 6 /// NIST P-256 Curve 7 pub static NIST_P_256: Lazy<Curve> = Lazy::new(|| { 8 let p = BigUint::from_str_radix(NIST_P_256_P, 16).unwrap(); 9 let a = BigUint::from_str_radix(NIST_P_256_A, 16).unwrap(); 10 let b = BigUint::from_str_radix(NIST_P_256_B, 16).unwrap(); 12 Curve::new(a, b, p) 13 }); For a comprehensive list of standard curves and their parameters, I strongly reccommend the Standard Curve Database by the Czech Centre for Research on Cryptography and Security. Elliptic Curve Cryptography (ECC) Now that we’re familiar with fundemental EC operations, let’s see what we can do with them crypto-wise. We’re going to learn about two very useful algorithms, both of which are used in the chat. Elliptic Curve Diffie-Hellman (ECDH) The first algorithm provides a way to establish a shared secret over an insecure channel. Like its name suggests, it is based on the classic Diffie-Hellman Key Exchange algorithm, introduced in 1976 by mathematicians Whitfield Diffie and Martin Hellman. Classic Diffie-Hellman (DH) Classic Diffie-Hellman is pretty simple: Alice and Bob start off by agreeing upon a finite field Z_p, where p is a prime, and a generator g != 1 in Z_p, with a common choice being g = 2. Next, have Alice and Bob pick private keys a and b, respectively. Both a and b are random elements in Z_p. The protocol then proceeds as follows: That’s it! To see why Alice and Bob compute the same shared secret S, observe the following algebraic trick: I find it fascinating how such a simple trick can be used to create such a useful algorithm :) Diffie-Hellman is based upon a classic problem called the Discrete-Logarithm Problem (DLP). The DLP is stated as follows: “Given a finite field Z_p, where p is a prime, a generator g, and a number in the field of the form b = g^a mod p, find a”. So far, a polytime solution for the DLP has not been found, which makes it a trapdoor function - the modexp is easy to perform, but the inverse operation (DLP) is hard. If an attacker could solve DLP, they could recover Alice and Bob’s private keys from their pubkeys, and compute the shared secret. If we assume the attacker to be passive, currently there are no known attacks to recover the shared secret (at least to my knowledge). But in our chat, the attacker is assumed to be active: they also have the capability to modify messages. In this case, there exists a very simple attack against the protocol shown above (also called unauthenticated DH): Since the attacker can read messages, they also know Alice and Bob’s pubkey. Thus, they can get the shared secret that Alice and Bob compute as follows: 1. To get Alice’s secret, compute A ^ c mod p = g^{ac} mod p 2. To get Bob’s secret, compute B ^ c mod p = g^{bc} mod p An attacker can then decrypt all of the messages. The fundemental problem here is that Alice and Bob have no idea who they’re talking to on the other side. They have no idea whether your messages are being modified or not. To solve this problem, we will have to establish some notion of identity, which we will do using certificates (see the previous post). The certificates are signed using ECDSA (Elliptic Curve Digital Signature Algorithm), which is the algorithm we’ll talk about after ECDH. Elliptic-Curve Diffie Hellman Elliptic-Curve Diffie Hellman is very similar to classic DH, but it uses EC instead of numbers. Alice and Bob start off by agreeing upon a curve C, and a generator point inside that curve G. Then, Alice and Bob generates private keys a and b, which are integers in the range of 1 to n, where n is the order of G (the smallest integer such that nG = O, where O is the point at infinity). Afterwards, the protocol proceeds as follows: Note the similarity to classic DH; The trick here is even simpler: aB = abG = baG = bA. The same attack on unauthenticated DH also exists here. It is very similar (instead of a number, the attacker sends a point), so I won’t repeat it here. ECDH is based on a similar problem to the DLP called ECDLP, stated as follows: Given a curve C, a generator G, and a point A = aG, find a. In the chat, we’re going to use ECDLP to establish a shared secret, which we then use as an AES key. Before we do that, however, we need to construct the infrastructure for certificates, which is based on the next algorithm. Elliptic-Curve Digital Signature Algorithm Another thing that can be done using ECC is digital signatures. For example, one of the most notable applications of ECC, Bitcoin, only supports digital signatures using ECDSA. In order to sign a message m, we perform the following steps: 1. Compute the hash h of m, and reduce it modulo n, where n is the order of the generator G in the curve. We now have 1 <= h < n 2. Pick a random number k between 1 and n - 1 (inclusive) 3. Compute R = kG 4. Set r to be the x-coordinate of R 5. Compute s = (h + rd) / k in mod n (division is inverse multiplication), where d is Alice’s private key 6. Return (r, s) as the signature Then, to verify the signature (r, s) for message m, the verifier does the following: 1. Compute w = 1 / s. Because of the way s is defined, we have w = k / (h + rd) 2. Compute u = wh mod n = hk / (h + rd) mod n 3. Compute v = wr mod n = rk / (h + rd) mod n 4. Accept the signature iff the x-coordinate of Q is equal to r Note that k has to be random. Otherwise, attackers can forge signatures without the private key d (see this Wikipedia article). A well-known product that had this vulnerability is the PlayStation 3: this attack allowed people to forge signatures for homebrew programs and run them on the console. Implemention in Rust First off, we create a new struct Keypair: 1 /// A keypair, containing a private key and a public key, along with some parameters 2 /// such as what curve is used, and the generator point 3 pub struct Keypair { 4 // The private key d; This is the scalar by which we multiply the base point 5 d: BigUint, 6 // The public key Q = d * G, where G is the generator point 7 pub_key: CurvePoint, 8 // The curve used 9 curve: Curve, 10 // The generator point 11 gen: (BigUint, BigUint), 12 // The order of the curve. This is optional, since it's only required in case the user 13 // wants to use ECDSA 14 order: Option<BigUint>, 15 } As well as the keypair itself (d and pub_key), it also contains metadata, such as the curve used ang the generator point. To generate a new keypair, we first generate a private key d, and then compute pub_key = gen.dot(d): 1 impl Keypair { 2 /// Create a new Keypair, that lies on curve `curve` w/Generator `g` 3 /// We also require the order `n` of the curve as a parameter, since this value is needed 4 /// for ECDSA 5 /// We require the user to specify the order of the curve explicitly, 6 /// even though we could just derive it from the curve itself, 7 /// since the order is pre-computed for some standardized curves (see the `std_curves.rs` module) 8 pub fn new(curve: &Curve, g: &(BigUint, BigUint), order: Option<&BigUint>) -> Keypair { 9 // Pick a private key d, which is a random number between 0 and p - 1, where p 10 // is the modulus of the curve (i.e. the order of the field over which the curve is defined) 11 let mut rng = thread_rng(); 12 let d = rng.gen_biguint_below(&curve.p()); 13 // Generate the public key Q, which is defined as d * G 14 // First, convert g into a CurvePoint (point that lies on the curve we get as the argument) 15 let g_point = curve.gen_point(Some((&g.0, &g.1))); 16 let pub_key = g_point.clone().dot(&d); 17 // That's it; Return a new Keypair now 19 Keypair { 20 d, 21 pub_key, 22 curve: curve.clone(), 23 gen: g.clone(), 24 order: order.cloned(), 25 } 26 } 27 ... 28 } The implementation of ECDH is super simple (two lines) - all we need is the pubkey of the other peer: 1 impl Keypair { 2 /// Derive a shared secret using ECDH (EC Diffie-Hellman); As input, this method takes in the other peer's 3 /// public key, which is a CurvePoint 4 /// We return the shared point (`d_A * d_B * G`), from which other methods can derive a secret 5 /// (e.g. by hashing the two coordinates) 6 pub fn ecdh_shared_secret(&self, peer_point: CurvePoint) -> (BigUint, BigUint) { 7 let shared_point = peer_point.dot(&self.d); 9 shared_point.point().coords().unwrap() 10 } 11 ... 12 } The coords method returns the coordinates of a point. The implementation is not all that intreseting, so I didn’t include it in the post. ECDSA requires a bit more work, but it’s also not terribly long. To sign: 1 impl Keypair { 2 /// Sign a message m using ECDSA (EC Digital Signature Algorithm) 3 /// This function receives, as input, the the bytes of the message to be signed 4 /// and outputs the signature, which is of the form (r, s) 5 /// Can also fail in case the user hadn't specified the order of the curve 6 pub fn sign(&self, m: &[u8]) -> Result<(BigUint, BigUint), KeypairError> { 7 if let Some(n) = &self.order { 8 let mut rng = thread_rng(); 9 // We interpret the hash of the message as a number between 1 and n - 1 10 // where n is the order of the curve 11 let m_hash = (1u64 + BigUint::from_str_radix(&digest(m), 16).unwrap()) % n; 12 // Pick a random number k between 1 and n - 1 13 let k = rng.gen_biguint_range(&1u64.into(), n); 14 // Compute R = kG 15 let (gen_x, gen_y) = &self.gen; 16 let base_point = self.curve.gen_point(Some((gen_x, gen_y))); 17 let secret_point = base_point.dot(&k); 18 // Set r = x_R mod n, and compute s = (h + rd) / k in modulo n 19 // x_R is the x-coordinate of point R 20 let r = secret_point.point().coords().unwrap().0 % n; 21 let s = ((m_hash + &r * &self.d) * k.modinv(n).unwrap()) % n; 23 return Ok((r, s)); 24 } 26 Err(KeypairError::SignatureWithoutOrder) 27 } 28 ... 29 } The comments are the exact steps from the previous section. The digest function is provided by the sha256 crate. It returns a string, so we have to convert it to an integer using from_str_radix. Here’s the code for verification: 1 impl Keypair { 2 /// Verifies a signature (r, s) for a message m, given the signer's public key, which is 3 /// (presumably) used to sign m 4 pub fn verify( 5 &self, 6 m: &[u8], 7 sig: (BigUint, BigUint), 8 peer_point: &CurvePoint, 9 ) -> Result<bool, KeypairError> { 10 if let Some(n) = &self.order { 11 let (r, s) = sig; 12 // This is equal to `k / (m_hash + rd)` in modulo n 13 let w = s.modinv(n).unwrap(); 14 let m_hash = (1u64 + BigUint::from_str_radix(&digest(m), 16).unwrap()) % n; 15 // Compute u and v, which are equal to `w * m_hash`, and `w * r`, respectively 16 let u = (&w * m_hash) % n; 17 let v = (&w * &r) % n; 18 // Compute `Q = u * G + v * P` 19 let (gen_x, gen_y) = &self.gen; 20 let base_point = self.curve.gen_point(Some((gen_x, gen_y))); 21 let capital_q = (base_point.dot(&u) + peer_point.dot(&v)).unwrap(); 22 // Accept iff the X-coordinate of Q is equal to r 23 let q_x = capital_q.point().coords().unwrap().0; 25 return Ok(q_x == r); 26 } 28 Err(KeypairError::SignatureWithoutOrder) 29 } 31 ... 32 } Note that we have the underlying crypto algorithms, we can start building the chat itself! The Chat’s Protocol As in the last post, our goal with the protocol is to allow both parties to establish a shared symmetric key, and then have them encrypt & decrypt messages to each other using AES. In the previous post, we did this by first having them verify each other’s identity using certificates, and then having the server encrypt an AES symmetric key using the client’s RSA pubkey. In the new version, the cetificate part is going to be similar, except we sign using ECDSA instead of RSA. To establish the shared secret, we’re going to use ECDH. Below is the simplified flow of the protocol used in the 1. Alice shows her cert to Bob. The cert contains Alice’s pubkey, her name, and her organization 2. Bob verifies the signature on the cert against the CA’s pubkey. If the signature is invalid, the handshake is aborted 3. Bob shows his cert to Alice 4. Alice verifies the signature on the cert against the CA’s pubkey. If the signature is invalid, the handshake is aborted 5. Alice computes the shared secret using ECDH. Note that no more messages need to be exchanged, since the cert already contains the pubkey 6. Bob does the same 7. Now both sides have a shared secret. They hash the x-coordinate of the resulting point to get an AES key Once thing not explained here is how Alice and Bob get the CA’s pubkey. In a real system, e.g. TLS, Alice and Bob will have stored the CA’s pubkey on their device (like a browser does with pubkeys of common CAs). In the chat, however, I did not want to hardcode the key/have the user need to set it as an environment variable, so I just had the CA send it over the network upon request. Note that this is vulnerable to a MITM attack, since an attacker can replace the CA’s pubkey with their own pubkey, and therefore forge whatever certificates they want to. Before carrying out the handshake, both sides request a cert from the CA. The Implementation The Certificate Authority We’ll begin with the communications between users and the CA. The CA needs to support two operations: a request for its pubkey (to which it replies with the pubkey), and a request to sign a certificate (to which it replies with the signature if the operator of the TTP wants to sign it, and an error otherwise). As mentioned before, we’re going to handle all network communications using the protobuf API we wrote at the beginning. Since we don’t know what type of message we’re going to receive, we’ll use the Typed Message API, for which we define the following message codes: 1 pub const TTP_PUBKEY_MSG: u8 = 0u8; 2 pub const TTP_SIG_REQ_MSG: u8 = 1u8; 3 pub const TTP_BYE_MSG: u8 = 2u8; The TTP bye message is sent by a client to the TTP to indicate that it wants to disconnect. Now, let’s write the protobufs we’ll need for the TTP. The first one is a request for the TTP’s pubkey. This is the simplest one, since it’s just empty: 1 // Ask the TTP for its pubkey, which can be used to verify certs 2 // In the real world (most notably TLS), devices just store the pubkeys 3 // of CAs locally, but I didn't want to hardcode the TTP's keypair 4 // in the TTP module 5 message GetTtpPubkey { 7 } The TTP responds with its pubkey, in a message named CurveParameters: 1 // The parameters of an EC of the form y^2 = x^3 + ax + b over Z_p, where p is a prime. 2 // This message also contains the order n of the curve (i.e. # points on the curve) 3 // along w/the coordinates (x, y) of the generator G 4 // We also piggyback the client's public key coordinates on this message 5 message CurveParameters { 6 bytes a = 1; 7 bytes b = 2; 8 bytes p = 3; 9 bytes x = 4; 10 bytes y = 5; 11 bytes order = 6; 12 bytes pub_x = 7; 13 bytes pub_y = 8; 14 } This message contains all the parameters for a curve, along with the pubkey’s coordinates. Now for the signature request. As mentioned before, the user sends their certificate (composed of their pubkey, name, and organization): 1 // Ask the TTP to sign your public key (the info in CurveParameters) 2 // along with some identifying information, such as Name and Organization 3 message TtpGetSignature { 4 CurveParameters pub_key = 1; 5 string name = 2; 6 string org = 3; 7 } We specify the pubkey inside a CurveParameters message (protobuf messages can be nested). The server responds to this message with a TtpSignResponse message, which contains a boolean value indicating whether the CA has agreed to sign the cert, and if so, the signature (r, s): 1 // The response of the TTP WRT a request to get a cert 2 // The TTP can either accept it (in which case this message also contains the signature) 3 // or not 4 message TtpSignResponse { 5 bool signed = 1; 6 // ECDSA Signature 7 optional bytes r = 2; 8 optional bytes s = 3; 9 } Now, let’s get to the Rust code. The main function for the TTP is shown below. 1 fn main() { 2 println!("Listening on port 8888..."); 3 let listener = TcpListener::bind("127.0.0.1:8888").unwrap(); 4 let keypair = Keypair::new( 5 &std_curves::SECP_256_K1, 6 &std_curves::SECP_256_K1_G, 7 Some(&std_curves::SECP_256_K1_N), 8 ); 10 for stream in listener.incoming() { 11 let mut stream = stream.unwrap(); 13 handle_stream(&mut stream, &keypair).expect("Error occurred while handling client"); 14 } 15 } The function generates a keypair on the Secp256k1 curve, and then calls handle_stream for each incoming client. The SECP_256_K1_G and SECP_256_K1_N constants are also defined in the std_curves module, and contain the generator point (standard curves also have standard generator points) and the order of the generator point for Secp256k1, respectively. The handle_stream function is primarily a match on the type of messages received: 1 fn handle_stream(stream: &mut TcpStream, keypair: &Keypair) -> Result<(), io::Error> { 2 loop { 3 // Read the protobuf from the client 4 let typed_msg = stream 5 .receive_typed_msg() 6 .expect("Failed to receive message from client"); 8 match typed_msg.msg_type() { 9 // The client requested our public key 10 TTP_PUBKEY_MSG => { 11 let _ = GetTtpPubkey::parse_from_bytes(&typed_msg.payload()).unwrap(); 13 handle_get_pubkey_req(stream, keypair).expect("Failed to send pubkey to client"); 14 } 15 // The client requested us to sign a cert 16 TTP_SIG_REQ_MSG => { 17 let get_sig_req = TtpGetSignature::parse_from_bytes(&typed_msg.payload()).unwrap(); 19 handle_get_sig_req(stream, keypair, get_sig_req).unwrap(); 20 } 21 TTP_BYE_MSG => break, 22 // Unknown message type 23 _ => { 24 println!("Unknown message type {}", typed_msg.msg_type()); 25 } 26 } 27 } 29 Ok(()) 30 } Here’s the handler for the GetTtpPubkey message: 1 fn handle_get_pubkey_req(stream: &mut TcpStream, keypair: &Keypair) -> Result<usize, io::Error> { 2 // Construct the CurveParameters message, which contains our pubkey 3 let curve_params = CurveParameters::from(keypair); 4 // Send the pubkey to the client over the stream 5 stream.send_msg(curve_params) 6 } We convert our pubkey to a CurveParamters using from (see the implementation below), and then send it using the API. 1 impl From<&Keypair> for CurveParameters { 2 fn from(value: &Keypair) -> Self { 3 let mut curve_params = CurveParameters::new(); 4 let curve = value.curve(); 5 curve_params.p = curve.p().to_bytes_be(); 6 curve_params.a = curve.a().to_bytes_be(); 7 curve_params.b = curve.b().to_bytes_be(); 8 curve_params.order = value.order().unwrap().to_bytes_be(); 9 let (gen_x, gen_y) = value.gen(); 10 curve_params.x = gen_x.to_bytes_be(); 11 curve_params.y = gen_y.to_bytes_be(); 12 let (pub_x, pub_y) = value.pub_key().point().coords().unwrap(); 13 curve_params.pub_x = pub_x.to_bytes_be(); 14 curve_params.pub_y = pub_y.to_bytes_be(); 16 curve_params 17 } 18 } Then, to sign a certificate, we do the following: 1 // Handle a request to sign a certificate 2 fn handle_get_sig_req( 3 stream: &mut TcpStream, 4 keypair: &Keypair, 5 req: TtpGetSignature, 6 ) -> Result<usize, io::Error> { 7 // The name and organization of the signee 8 let (name, org) = (&req.name, &req.org); 10 print!( 11 r#"Got a request to sign a certificate for the following person: 12 ---------------------- 13 Name: {} 14 Organization: {} 15 ---------------------- 16 Do you want to sign the certificate? (y/n): 17 "#, 18 name, org 19 ); 21 let should_sign; 23 let mut input = String::new(); 25 loop { 26 print!("Enter 'y' or 'n': "); 27 stdout().flush().unwrap(); 29 stdin().read_line(&mut input).expect("Failed to read line"); 30 input = input.trim().to_lowercase(); 32 if input == "y" { 33 should_sign = true; 34 break; 35 } else if input == "n" { 36 should_sign = false; 37 break; 38 } else { 39 println!("Invalid input. Please enter 'y' or 'n'."); 40 input.clear(); 41 } 42 } 44 if should_sign { 45 // Construct the certificate, which is the data we sign 46 // We can just do this by converting the request to bytes 47 // since it contains, by design, all the data we need 48 let cert = req.write_to_bytes().unwrap(); 49 // Sign the cert 50 let (r, s) = keypair.sign(&cert).unwrap(); 51 // Construct a response, and send it to the client 52 let mut sign_response = TtpSignResponse::new(); 53 sign_response.signed = true; 54 sign_response.r = Some(r.to_bytes_be()); 55 sign_response.s = Some(s.to_bytes_be()); 57 return stream.send_msg(sign_response); 58 } 60 let mut sign_response = TtpSignResponse::new(); 61 sign_response.signed = false; 62 sign_response.r = None; 63 sign_response.s = None; 65 stream.send_msg(sign_response) 66 } We start by extracting the name and organization written on the cert, and then asking the TTP operator whether they want to sign it. If so, we sign it using the keypair, construct a TtpSignResponse and fill it with r and s, and then send it to the client using the API. Otherwise, we respond to the client with a message saying that the cert has not been signed. Note that we don’t even need to deserialize the cert; we just sign the bytes of the TtpGetSignature protobuf, which is what the client and server present to each other during the handshake as well. The Handshake Now that we have the TTP ready, we can start writing the client and the server. The server function is called server_ec, and receives two parameters: a reference to a ChatArguments struct and a mutable reference to a Peer struct: 1 pub fn server_ec( 2 args: &ChatArguments, 3 peer: &mut Peer, 4 ) -> Result<(BigUint, BigUint), HandshakeError> It returns either the shared secret (which is a point on the client and server’s curve), or a HandshakeError, which is an enum we’ll add new types of errors to as we’ll go. The ChatArguments struct contains the parameters needed by the server and client (e.g. what port the CA runs on), and is filled out by the interactive CLI frontend for the chat. Its definition is shown below: 1 /// The arguments the chat frontend needs to provide to use 2 /// server and client functions 3 pub struct ChatArguments { 4 // The port of the server 5 pub port: u16, 6 // The address of the server 7 pub address: String, 8 // The TTP's port 9 pub ttp_port: u16, 10 // The TTP's address 11 pub ttp_address: String, 12 // User's name 13 pub name: String, 14 // User's organization 15 pub org: String, 16 } The Peer struct contains information about the connection, such as the AES cipher the peers talk over, and the TcpStream the peers talk over: 1 /// The connection with the peer 2 pub struct Peer { 3 stream: Option<TcpStream>, 4 pub cipher: Option<AesCtr>, 5 pub hmac: Option<HMAC>, 6 } The AesCtr struct implements AES in CTR mode, and the HMAC struct allows us to authenticate messages. We’ll go over their implementation later. Similarily, the client is written in a function called client_ec that has the same signature: 1 pub fn client_ec( 2 state: &ChatArguments, 3 peer: &mut Peer, 4 ) -> Result<(BigUint, BigUint), HandshakeError> The server starts off by generating a keypair, and then fills it into a struct called IdentityInfo: 1 let curve = &std_curves::NIST_P_256; 2 // Generate a keypair 3 let keypair = Keypair::new( 4 curve, 5 &std_curves::NIST_P_256_G, 6 Some(&std_curves::NIST_P_256_N), 7 ); 8 // Generate our identity so we can sign a cert 9 let identity = IdentityInfo::new(&keypair, &args.name, &args.org); The IdentityInfo struct packs all the information needed to get a certificate into one struct, and is defined as follows: 1 /// This information identifies each user, and is required for the TTP 2 /// to grant one a cert 3 pub struct IdentityInfo<'a> { 4 /// Keypair of the grantee 5 keypair: &'a Keypair, 6 /// Name of the grantee (e.g. John Doe) 7 name: String, 8 /// Oranization of the grantee (e.g. Example Organization Inc.) 9 org: String, 10 } The server then proceeds by connecting to the TTP, asking it for its pubkey, and disconnecting from it: 1 let mut ttp_stream = TcpStream::connect(format!("{}:{}", args.ttp_address, args.ttp_port)).unwrap(); 2 // Ask the TTP for its pubkey 3 let (ttp_curve_keypair, ttp_pubkey) = get_ttp_pubinfo(&mut ttp_stream); 4 // Bye bye TTP 5 let bye_msg = TtpBye::new(); 6 ttp_stream.send_typed_msg(bye_msg, TTP_BYE_MSG).unwrap(); The get_ttp_pubinfo function sends a GetTtpPubkey message to the TTP, generates a keypair on the TTP’s curve, extracts the TTP’s pubkey, and returns them both: 1 /// Given a stream to the TTP, (1) generate a keypair on the TTP's curve 2 /// and (2) return the TTP's pubkey as a CurvePoint 3 pub fn get_ttp_pubinfo(ttp_stream: &mut TcpStream) -> (Keypair, CurvePoint) { 4 // Ask the TTP for its pubkey 5 let get_pubkey_req = GetTtpPubkey::new(); 7 ttp_stream 8 .send_typed_msg(get_pubkey_req, TTP_PUBKEY_MSG) 9 .unwrap(); 11 let ttp_pubkey_msg = MessageStream::<CurveParameters>::receive_msg(ttp_stream).unwrap(); 13 let (a, b, p) = ( 14 BigUint::from_bytes_be(&ttp_pubkey_msg.a), 15 BigUint::from_bytes_be(&ttp_pubkey_msg.b), 16 BigUint::from_bytes_be(&ttp_pubkey_msg.p), 17 ); 18 let ttp_curve = Curve::new(a, b, p); 19 let (x, y) = ( 20 BigUint::from_bytes_be(&ttp_pubkey_msg.x), 21 BigUint::from_bytes_be(&ttp_pubkey_msg.y), 22 ); 23 let order = BigUint::from_bytes_be(&ttp_pubkey_msg.order); 24 // Generate our keypair on the **TTP**'s curve, which may be different than the curve used 25 // to talk to the client 26 let ttp_curve_keypair = Keypair::new(&ttp_curve, &(x, y), Some(&order)); 27 // The TTP's pubkey 28 let ttp_pubkey_coords = ( 29 BigUint::from_bytes_be(&ttp_pubkey_msg.pub_x), 30 BigUint::from_bytes_be(&ttp_pubkey_msg.pub_y), 31 ); 32 let ttp_pubkey = ttp_curve.gen_point(Some((&ttp_pubkey_coords.0, &ttp_pubkey_coords.1))); 34 (ttp_curve_keypair, ttp_pubkey) 35 } The only reason we generate a keypair on the CA’s curve as well is to allow us to call the signature & verification functions we defined earlier on the Keypair struct. After getting the CA’s pubkey, we the CA to sign our certificate: 1 // Ask the TTP for a cert 2 let (ttp_sign_req, ttp_sig) = identity.ask_ttp_cert(&mut ttp_stream); The ask_ttp_cert method is defined on the IdentityInfo struct: 1 impl<'a> IdentityInfo<'a> { 2 ... 4 /// Ask the TTP for a certificate, given our pubkey (the one on the **server's curve** and not the TTP's curve) 5 /// and a stream to the TTP 6 /// The TTP's curve is only used to validate the TTP's signatures 7 /// This function also returns the certificiate which is the actual data being signed 8 pub fn ask_ttp_cert(self, ttp_stream: &mut TcpStream) -> (TtpGetSignature, TtpSignResponse) { 9 // Request the TTP to sign our cert 10 let ttp_get_signature = TtpGetSignature::new(); 11 let mut ttp_sign_req = ttp_get_signature; 12 let curve_params = CurveParameters::from(self.keypair); 13 ttp_sign_req.pub_key = MessageField::some(curve_params); 14 ttp_sign_req.name = self.name; 15 ttp_sign_req.org = self.org; 17 ttp_stream 18 .send_typed_msg(ttp_sign_req.clone(), TTP_SIG_REQ_MSG) 19 .unwrap(); 20 // Read the response we got 21 ( 22 ttp_sign_req, 23 MessageStream::<TtpSignResponse>::receive_msg(ttp_stream).unwrap(), 24 ) 25 } 26 } As you’ll recall, the TTP signs the bytes of the TtpGetSignature request, so we return the request itself as well as the CA’s response, allowing the server to show its certificate to the client later. After asking the CA to sign us a cert, we need to check whether our cert has indeed been signed (in contrast to the previous post, where the CA has always signed certs regardless of their 1 // This is in server_ec 2 if !ttp_sig.signed { 3 eprintln!("The CA hasn't signed our cert."); 4 return Err(HandshakeError::CertNotSigned); 5 } The CertNotSigned is the first type of error we’ll add to HandshakeError. It indicates that the CA hasn’t signed our certificate. On the client side of things, we do the same thing: 1 // In client_ec 2 let curve = &std_curves::NIST_P_256; 3 // Generate a keypair 4 let keypair = Keypair::new( 5 curve, 6 &std_curves::NIST_P_256_G, 7 Some(&std_curves::NIST_P_256_N), 8 ); 9 // Generate our identity 10 let identity = IdentityInfo::new(&keypair, &state.name, &state.org); 11 let mut ttp_stream = TcpStream::connect(format!("{}:{}", state.ttp_address, state.ttp_port)).unwrap(); 12 let (ttp_curve_keypair, ttp_pubkey) = get_ttp_pubinfo(&mut ttp_stream); 13 // Request the TTP to sign our cert 14 let (ttp_sign_req, ttp_sig) = identity.ask_ttp_cert(&mut ttp_stream); 16 // Bye bye TTP 17 let bye_msg = TtpBye::new(); 18 ttp_stream.send_typed_msg(bye_msg, TTP_BYE_MSG).unwrap(); 20 // If the TTP hasn't signed our cert, we can't continue the handshake 21 if !ttp_sig.signed { 22 eprintln!("CA hasn't agreed to sign our cert."); 23 return Err(HandshakeError::CertNotSigned);} 24 } At this point, both the client and the server have valid certs. The server listens for a connection from the client: 1 // In server_ec 2 // Start the server 3 println!("Listening on port {}...", args.port); 4 let listener = TcpListener::bind(format!("{}:{}", args.address, args.port)).unwrap(); 6 if let Some(stream) = listener.incoming().next() { 7 let mut stream = stream.unwrap(); 8 ... 9 } else { 10 Err(HandshakeError::ServerConnection) 11 } The ServerConnection error simply indicates that an error occurred while we were trying to get the TcpStream for the next client. The client connects to the server: 1 // In client_ec 2 // Connect to server 3 let mut stream = TcpStream::connect(format!("{}:{}", state.address, state.port)).unwrap(); Recall that we can’t do an ECDH yet; neither peer knows who they’re talking to on the other side of the connection. To solve this, the client starts by showing its signed certificate to the server using the function show_peer_cert: 1 // In client_ec 2 // Show it our cert 3 show_peer_cert(&mut stream, ttp_sign_req, ttp_sig); The show_peer_cert function receives as arguments the stream with the other peer, the ttp_sign_req (our certificate, returned by ask_ttp_cert), and the signature (ttp_sig, returned by ask_ttp_cert). It is defined as follows: 1 /// Send the other side our cert, given a reference to the cert data, and the TTP's signature 2 pub fn show_peer_cert(stream: &mut TcpStream, cert: TtpGetSignature, ttp_sig: TtpSignResponse) { 3 let mut cert_show_msg = ShowCertificate::new(); 4 cert_show_msg.cert = MessageField::some(cert); 5 cert_show_msg.r = ttp_sig.r.unwrap(); 6 cert_show_msg.s = ttp_sig.s.unwrap(); 8 stream.send_msg(cert_show_msg).unwrap(); 9 } This function constructs a ShowCertificate proto, which is defined as below. It contains our certificate, and the CA’s signature. The function then sends the ShowCertificate to the other peer. 1 // Show the other peer our certificate 2 message ShowCertificate { 3 // The cert 4 TtpGetSignature cert = 1; 5 // The TTP's signature 6 bytes r = 2; 7 bytes s = 3; 8 } On the server side of things, we receive the client’s ShowCertificate: 1 // in server_ec 2 // Wait for client's cert 3 let client_cert = MessageStream::<ShowCertificate>::receive_msg(&mut stream).unwrap(); The server then validates the signature on the client’s certificate using the TTP’s pubkey: 1 // in server_ec 2 // Verify the signature on the certificate using the TTP's pubkey 3 let is_client_valid = client_cert.validate_peer_cert(&ttp_curve_keypair, &ttp_pubkey); The validate_peer_cert function extracts the certificate and signature from the ShowCertificate proto, and validates them against the CA’s pubkey using the ECDSA verify function we’ve implemented earlier for Keypair: 1 impl ShowCertificate { 2 /// Validate the client's cert. Also requires our keypair on the TTP's curve 3 /// and the TTP's pubkey (since the cert is validated against the TTP's pubkey) 4 pub fn validate_peer_cert(&self, ttp_curve_keypair: &Keypair, ttp_pubkey: &CurvePoint) -> bool { 5 let client_cert_bytes = self.cert.write_to_bytes().unwrap(); 6 let sig = ( 7 BigUint::from_bytes_be(&self.r), 8 BigUint::from_bytes_be(&self.s), 9 ); 11 ttp_curve_keypair 12 .verify(&client_cert_bytes, sig, ttp_pubkey) 13 .unwrap() 14 } 15 } Finally, the server tells the client whether the client’s cert is valid, so that in case it is not both sides can abort the handshake: 1 send_val_status(&mut stream, is_client_valid).unwrap(); This function (send_val_status) sends a ValidationResponse proto: 1 // The client sends this to the server (and vice versa) 2 // to indicate whether the certificate of the other side is valid or not 3 // if the cert is not valid, both sides call off the handshake, 4 // since they can't be sure who they're talking to on the other side 5 message ValidationResponse { 6 bool is_valid = 1; 7 } This proto has a single field: is_valid which is set to true if the cert is valid, and false if not. The send_val_status function is defined below: 1 /// Tell the other side whether their cert is valid 2 pub fn send_val_status(stream: &mut TcpStream, val_status: bool) -> Result<usize, std::io::Error> { 3 let mut msg = ValidationResponse::new(); 4 msg.is_valid = val_status; 6 stream.send_msg(msg) 7 } As you can see, it simply constructs the protobuf, and sends it using our API. The client receives this message, and returns in case the server says the client’s cert is not valid: 1 // In client_ec 2 // Check whether the server has validated our identity 3 let is_identity_valid = MessageStream::<ValidationResponse>::receive_msg(&mut stream).unwrap(); 5 if !is_identity_valid.is_valid { 6 eprintln!("Server says that our identity is invalid."); 7 return Err(HandshakeError::PeerRejects); 8 } The PeerRejects error indicates that the other peer has rejected our certificate. Now that the server is sure the client’s certificate is valid, we also ask the server’s user whether the user who presented the certificate is indeed the user they wanted to talk with: 1 // In server_ec 2 // Ask the user whether they want to continue the handshake, 3 // based on the (now validated by the CA) identity of the client 4 let should_continue = ask_user_peer(&client_cert); The ask_user_peer function prints a certificate, and prompts for a ‘y’ or a ‘n’: 1 /// Print the other side's identity from their cert 2 pub fn print_cert_identity(show_cert: &ShowCertificate) { 3 println!( 4 "---\nName: {}\nOrganization: {}\n---", 5 show_cert.cert.name, show_cert.cert.org 6 ); 7 } 9 /// Ask the user whether they want to talk with the other peer 10 pub fn ask_user_peer(show_cert: &ShowCertificate) -> bool { 11 println!("The other peer presents itself as follows: "); 12 print_cert_identity(show_cert); 13 println!("Is this who you want to talk to? (y/n): "); 14 let should_continue; 15 let mut input = String::new(); 17 loop { 18 print!("Enter 'y' or 'n': "); 19 stdout().flush().unwrap(); 21 stdin().read_line(&mut input).expect("Failed to read line"); 22 input = input.trim().to_lowercase(); 24 if input == "y" { 25 should_continue = true; 26 break; 27 } else if input == "n" { 28 should_continue = false; 29 break; 30 } else { 31 println!("Invalid input. Please enter 'y' or 'n'."); 32 input.clear(); 33 } 34 } 36 should_continue 37 } The server now sends an AbortHandshake proto to the client (very similar to a ValidationResponse proto): 1 // Before continuing the handshake, the user is asked 2 // whether they want to continue the handshake 3 // in case not, both sides abort the handshake 4 message AbortHandshake { 5 bool is_abort = 1; 6 } This is done using the send_abort_msg function: 1 // In server_ec 2 send_abort_msg(&mut stream, !should_continue).unwrap(); Defined as follows: 1 // Tell the other peer whether the user wants to continue the handshake 2 // or abort it 3 pub fn send_abort_msg(stream: &mut TcpStream, is_abort: bool) -> Result<usize, io::Error> { 4 let mut msg = AbortHandshake::new(); 5 msg.is_abort = is_abort; 7 stream.send_msg(msg) 8 } The client receives this message, and if the server wants to abort the connection, both sides shut the stream down: 1 // In server_ec 2 if !should_continue { 3 stream.shutdown(std::net::Shutdown::Both).unwrap(); 4 return Err(HandshakeError::AbortConnection); 5 } 7 // In client_ec 8 // Check whether the server wants to abort the handshake 9 let abort_handshake = MessageStream::<AbortHandshake>::receive_msg(&mut stream).unwrap(); 11 if abort_handshake.is_abort { 12 eprintln!("The other peer wants to abort the handshake."); 13 stream.shutdown(std::net::Shutdown::Both).unwrap(); 14 return Err(HandshakeError::PeerAborts); 15 } The AbortConnection error means that the user wants to abort the connection, and the PeerAborts error means that the other peer wants to abort the connection. Likewise, everything that happened until now also happens the other way around (server sends its cert to the client, client validates it, etc.): 1 // In client_ec 2 // Validate the server's cert 3 let server_cert = MessageStream::<ShowCertificate>::receive_msg(&mut stream).unwrap(); 5 // Verify the signature on the certificate using the TTP's pubkey 6 let is_server_valid = server_cert.validate_peer_cert(&ttp_curve_keypair, &ttp_pubkey); 7 send_val_status(&mut stream, is_server_valid).unwrap(); 9 if !is_server_valid { 10 eprintln!("The server's certificate is not valid. Aborting handshake..."); 11 stream.shutdown(std::net::Shutdown::Both).unwrap(); 12 return Err(HandshakeError::BadPeerCert); 13 } 15 // Ask the user whether they want to continue the handshake, based on the (validated by the CA) identity 16 // of the server 17 let should_continue = ask_user_peer(&server_cert); 18 // Tell the server whether the user wants to abort the message 19 send_abort_msg(&mut stream, !should_continue).unwrap(); 21 if !should_continue { 22 stream.shutdown(std::net::Shutdown::Both).unwrap(); 23 return Err(HandshakeError::AbortConnection); 24 } And on the server side: 1 // Send our certificate for the client to validate 2 show_peer_cert(&mut stream, ttp_sign_req.clone(), ttp_sig.clone()); 4 let is_identity_valid = 5 MessageStream::<ValidationResponse>::receive_msg(&mut stream).unwrap(); 7 if !is_identity_valid.is_valid { 8 eprintln!("Client says that our identity is invalid."); 9 stream.shutdown(std::net::Shutdown::Both).unwrap(); 11 return Err(HandshakeError::PeerRejects); 12 } 14 let client_aborts = MessageStream::<AbortHandshake>::receive_msg(&mut stream).unwrap(); 16 if client_aborts.is_abort { 17 eprintln!("The client wants to abort the handshake."); 18 stream.shutdown(std::net::Shutdown::Both).unwrap(); 20 return Err(HandshakeError::PeerAborts); 21 } Now, the client and the server are both sure of each other’s identity, so we can perform an ECDH. Remember that, by definition, the certificate of a user contains their public key, so we don’t need to transmit any more messages over the network! On the server side: 1 peer.stream = Some(stream); 2 // At this point, since both sides have each other's certs (and hence each other's pubkeys) 3 // we can perform an ECDH and establish a shared secret 4 Ok(client_cert.est_shared_secret(&keypair)) The est_shared_secret function is defined on the ShowCertificate struct (which contains a cert and the CA’s signature on the cert), and is essentially a wrapper around the ecdh_shared_secret function we’ve implemented earlier on Keypair: 1 impl ShowCertificate { 2 /// Establish the shared secret using the other side's cert (self) and our keypair 3 pub fn est_shared_secret(self, keypair: &Keypair) -> (BigUint, BigUint) { 4 let client_pubkey_info = self.cert.unwrap().pub_key.unwrap(); 5 let (client_pub_x, client_pub_y) = ( 6 BigUint::from_bytes_be(&client_pubkey_info.pub_x), 7 BigUint::from_bytes_be(&client_pubkey_info.pub_y), 8 ); 9 let curve = keypair.curve(); 10 let server_pubkey = curve.gen_point(Some((&client_pub_x, &client_pub_y))); 12 keypair.ecdh_shared_secret(server_pubkey) 13 } 15 ... 16 } The client does the same thing: 1 // At this point, since both sides have each other's certs (and hence each other's pubkeys) 2 // we can perform an ECDH and establish a shared secret 3 let shared_secret = server_cert.est_shared_secret(&keypair); 5 peer.stream = Some(stream); 7 Ok(shared_secret) And this is it! Now both sides have a shared secret, which is a point on the curve over which their keypairs are defined. The APIs we defined along the project made this much easier. Symmetric Crypto Now that the two peers have a shared symmetric key, we need to write the code that encrypts & decrypts messages. In the last post, we’ve used AES-CBC as a symmetric cipher, but today we’re going to use another AES mode: CounTeR mode. Also, unlike last time, we’re also going to authenticate messages using a MAC (Message Authentication Code), more specifically a SHA-256-HMAC. Doing so prevents attackers from conducting attacks that rely on submitting arbitrary ciphertexts. The core idea of CTR mode is to turn a block cipher into a stream cipher. It does this as follows (diagram taken from wikipedia): For now, ignore the nonce. To encrypt the first block, we encrypt a block full of 0’s, and then XOR the ciphertext with the first block of plaintext to yield the first ciphertext block. To encrypt the second block, we encrypt a block that is all 0’s except for the last byte which is a 1, and XOR the ciphertext with the second block of plaintext, and so on. In CTR mode, decryption as the same as encryption - this is due to XOR being its own inverse (i.e. x ^ y ^ y = x). More formally, given a block of ciphertext C_i, by definition we have C_i = P_i ^ AES(i, key), and therefore P_i = C_i ^ AES(i, key). Counter mode, as presented above, has a very deadly flaw: encryption is deterministic. Encrypting the same plaintext two different times will result in the same ciphertext, allowing an attacker to detect patterns in the traffic. In order to prevent this problem, we add a nonce into the mix. When encrypting a ciphertext, we also use a random nonce. The counter then starts counting from the nonce, instead of from 0. The nonce is transmitted over the network along with the message’s ciphertext so that the other peer will know how to decrypt it. We represent AES-CTR with a 1 #[derive(Clone)] 2 pub struct AesCtr { 3 cipher: Aes256, 4 } The Aes256 struct is provided by the aes crate; implementing AES is beyond the scope of this post. To initialize a new AesCtr given a key, we do the following: 1 impl AesCtr { 2 // Create a new AES-CTR cipher 3 pub fn new(key: &[u8]) -> AesCtr { 4 let key_arr = GenericArray::from_slice(key); 5 let cipher = Aes256::new(key_arr); 7 AesCtr { cipher } 8 } 10 ... 11 } Then, encryption receives a message and a nonce as an argument, and does the following: 1 impl AesCtr { 2 pub fn encrypt(&self, msg: &[u8], nonce: usize) -> Vec<u8> { 3 // AES-CTR encrypts using a running counter, where we XOR each byte of the msg 4 // with a byte from a running keystream 6 let num_blocks = msg.len().div_ceil(AES_BLOCK_SIZE); 7 let mut msg_bytes = msg.chunks(AES_BLOCK_SIZE); 8 let mut ciphertext = vec![]; 10 for i in nonce..nonce+num_blocks { 11 // Pad it to the block size 12 let mut i_slice = vec![0u8; AES_BLOCK_SIZE - 8]; 13 i_slice.extend(&i.to_be_bytes()); 14 let i_slice: [u8; AES_BLOCK_SIZE] = i_slice.try_into().unwrap(); 15 let mut key_block = GenericArray::from(i_slice); 16 self.cipher.encrypt_block(&mut key_block); 17 // # Of bytes to encrypt in this block 18 let msg_block = msg_bytes.next().unwrap(); 19 let to_encrypt = msg_block.len().min(AES_BLOCK_SIZE); 21 for j in 0..to_encrypt { 22 ciphertext.push(key_block.get(j).unwrap() ^ msg_block.get(j).unwrap()); 23 } 24 } 26 ciphertext 27 } 29 ... 30 } We first compute the number of blocks, and split the message into blocks. We then count from nonce to nonce + num_blocks, and at each iteration encrypt a block containg i’s (padded) byte representation. This ciphertext is then XORed with the corresponding plaintext block. As mentioned before, decryption is the same as encryption: 1 impl AesCtr { 2 ... 4 pub fn decrypt(&self, msg: &[u8], nonce: usize) -> Vec<u8> { 5 // Encryption is the same as decryption in CTR mode 6 self.encrypt(msg, nonce) 7 } 8 } HMAC (Hash-based Message Authentication Code) Right now, our two peers have validated the identity of each other (using ECDSA), established a shared secret (using ECDH), and can encrypt & decypt messages using AES-CTR. So why aren’t we done yet? Remember that we’re dealing with active attackers, that can modify messages as well as passively listen in on them. Right now, there’s nothing stopping an attacker from modifying one of the encrypted messages sent over the network. For example, suppose that the server encrypts the message “A” (single uppercase ‘A’). Recall that the ciphertext is constructed as follows (for simplicity, assume that the nonce is 0x1234): 1. Encrypt a block that contains the nonce, padded to the block size. If the block size is 32 bit, we encrypt the block 0x00001234: C0 = AES(0x00001234, key) 2. Take the first byte in C0, and call it x 3. The ciphertext is x ^ 'A' = x ^ 0x41 What happens an attacker sees this message, and XORs it with 0x1? This will result in the modified ciphertext x ^ 0x41 ^ 0x1. When the client decrypts this ciphertext (recall that encryption is identical to decryption in CTR mode), they will get x ^ 0x41 ^ 0x1 ^ x = (x ^ x) ^ (0x41 ^ 0x1) = 0 ^ (0x41 ^ 0x41) = 0x41 ^ 0x1 = 0x40 = '@' instead of the original ‘A’. This property is called mallebility, and can have very severe consequences. To solve this, we will use a crypto concept called MAC (Message Authentication Code). MACs are used to ensure the integrity of a message (i.e. they guarantee that the message was not modified along the way). MACs exist in the form of a tag sent along with the message. When Alice wants to send a message m to Bob, she computes the MAC of m using a shared key she and Bob hold. When the message eventually arrives to Bob, he verifies the MAC using the shared key, and, if the MAC is valid, knows that the message has not been changed. You can only compute a MAC if you have the key; computing a MAC without having the key is called MAC forgery. There are 3 common ways to do MACs: 4. Encrypt-then-MAC: first encrypt the message, and compute a MAC on the ciphertext 5. MAC-then-encrypt: compute a MAC on the plaintext message, and encrypt the plaintext, along with the MAC 6. MAC-and-encrypt: compute a MAC on the plaintext, but only encrypt the plaintext and not the MAC (unlike MAC-then-encrypt) The pros and cons of each approach is beyond the scope of this post; if you’re interested, I reccommend reading this StackExchange question. There are also multiple ways to compute the MAC itself, but in this post we’re only going to use of the most common ones, called HMAC, which is performed as follows: 7. Compute outer padding: outer_pad = key ^ block of [0x5c] 8. Compute inner padding: inner_pad = key ^ block of [0x36] 9. The MAC is hash(outer_pad || hash(inner_pad || m)) As you can see, HMAC uses a hash function. In the chat, we’re going to use SHA256. HMAC with SHA256 is called HMAC-SHA256. We will implement this using a struct HMAC: 1 #[derive(Clone)] 2 pub struct HMAC { 3 key: Vec<u8>, 4 } To compute a MAC, we follow the steps in the above algorithm: 1 impl HMAC { 2 /// Derive the MAC for message msg; Returns the bytes of the MAC 3 pub fn mac(&self, msg: &[u8]) -> Vec<u8> { 4 // The inner & outer hashes are the key XORed with 0x5c and 0x36, respectively 5 let mut outer_pad: Vec<u8> = self.key.iter().map(|x| x ^ HMAC_OUTER_PAD).collect(); 6 let mut inner_pad: Vec<u8> = self.key.iter().map(|x| x ^ HMAC_INNER_PAD).collect(); 7 // Compute sha256(inner_pad || msg) 8 inner_pad.append(&mut msg.to_vec()); 9 let mut inner_hash = hex::decode(sha256::digest(inner_pad)).unwrap(); 10 // Compute sha256(outer_pad || inner_hash) 11 outer_pad.append(&mut inner_hash); 13 hex::decode(sha256::digest(outer_pad)).unwrap() 14 } 15 ... 16 } This is a bit longer than the pseudocode, since we can’t concatenate the vectors with one operation (like in Python), so we have to use append. The hex::decode function converts the output of the SHA256 from a hex string format (e.g. a1a2a3a4) to a vector of bytes. To verify a MAC, we compute the MAC for the message, and compare it with the provided MAC: 1 /// Verify the MAC for a message 2 pub fn verify(&self, msg: &[u8], tag: &[u8]) -> bool { 3 self.mac(msg) == tag.to_vec() 4 } The Chat CLI This part is not super interesting, so I’ll keep it short and focus on the important parts. To parse commands, I defined an enum of command types (shown below), tokenized commands, and then parsed it as a specific command based on the first token. 1 pub enum CommandType { 2 Set, 3 Connect, 4 Listen, 5 Send, 6 Help, 7 Exit, 8 Unk, 9 } 11 pub struct Command { 12 op: CommandType, 13 args: Vec<String>, 14 } For example, set port 5555 is a command with type Set and arguments vec!["port", "5555"]. Here’s a short description of each command: • Set changes the State of the chat, essentially a struct that contains the parameters relevant to the client and server code (e.g. the server’s address) • Connect connects to a server based on the state of the chat • Listen starts a server based on the state of the chat • Send sends a message once we have an encrypted connection • Help shows the help message • Exit exits out of the chat • Unk is a no-op, and is used when parsing invalid commands The State is defined as follows: 1 // The current state of the chat - e.g. what is the TTP's IP and Address, what algorithm should be used etc. 2 #[derive(Clone)] 3 pub struct State { 4 server_addr: Option<Ipv4Addr>, 5 server_port: Option<u16>, 6 ttp_addr: Option<Ipv4Addr>, 7 ttp_port: Option<u16>, 8 algo: Algorithm, 9 name: Option<String>, 10 org: Option<String>, 11 } The Algorithm member is reserved, in case I’ll add any more algorithms such as RSA in the future. Below is the code that handles the connect command: 1 fn connect(state: &State, peer: &mut Peer) -> Result<(), HandshakeError> { 2 if !check_state_full(state) { 3 return Err(HandshakeError::UnfilledParams); 4 } 6 match state.algo { 7 Algorithm::EllipticCurve => { 8 let args = ChatArguments::from(state); 9 let shared_secret = client_ec::client_ec(&args, peer)?.0; 10 let key = 11 BigUint::from_str_radix(&sha256::digest(shared_secret.to_bytes_be()), 16).unwrap(); 12 let cipher = AesCtr::new(&key.to_bytes_be()); 13 let hmac = HMAC::new(&key.to_bytes_be()); 14 peer.cipher = Some(cipher); 15 peer.hmac = Some(hmac); 16 } 17 Algorithm::RSA => { 18 //println!("Unimplemented server for RSA."); 19 } 20 } 22 Ok(()) 23 } We first check whether the state of the chat is full, and the user has filled in all necessary parameters (e.g. we can’t connect to a server if we don’t know its port), and then match according to the algorithm. We convert the State to ChatArguments, and then call the client_ec function, which as you’ll recall returns the shared secret established with the server, which is a (BigUint, BigUint) (a point on an EC). We take the first coordinate, and hash it using SHA256 to create a key, used for initializing an AES-CTR cipher and an HMAC struct. The code for handling listen is quite similar, except it calls server_ec instead of client_ec: 1 fn listen(state: &State, peer: &mut Peer) -> Result<(), HandshakeError> { 2 // Make sure that the state has all the values we need 3 if !check_state_full(state) { 4 return Err(HandshakeError::UnfilledParams); 5 } 7 match state.algo { 8 Algorithm::EllipticCurve => { 9 let args = ChatArguments::from(state); 10 // Use the hash of the x-coordinate of the shared secret returned by ECDH 11 // as an AES-CTR key 12 let shared_secret = server_ec::server_ec(&args, peer)?.0; 13 let key = 14 BigUint::from_str_radix(&sha256::digest(shared_secret.to_bytes_be()), 16).unwrap(); 15 let cipher = AesCtr::new(&key.to_bytes_be()); 16 let hmac = HMAC::new(&key.to_bytes_be()); 17 peer.cipher = Some(cipher); 18 peer.hmac = Some(hmac); 19 } 20 Algorithm::RSA => { 21 //println!("Unimplemented server for RSA."); 22 } 23 } 25 Ok(()) 26 } The final command we’ll look at the implementation for is send: 1 fn send(args: Vec<String>, peer: &mut Peer) { 2 let msg = args 3 .iter() 4 .map(|x| x.to_owned() + " ") 5 .collect::<Vec<String>>() 6 .concat(); 7 peer.send_encrypted(msg.as_bytes()).unwrap(); 8 } Since the spaces aren’t included in the tokenization, we add them in between tokens. We then called send_encrypted, which is a method impemented on a Peer: 1 impl Peer { 2 ... 4 pub fn send_encrypted(&mut self, msg: &[u8]) -> Result<usize, io::Error> { 5 // Send the CTR mode nonce (the initial value of the counter) 6 // using a constant nonce is bad, since it causes the same plaintext 7 // to result in the same ciphertext 8 let mut rng = thread_rng(); 9 let nonce = rng.gen::<usize>(); 10 // Encrypt the message 11 let ciphertext = self.cipher.as_mut().unwrap().encrypt(msg, nonce); 12 let mut msg = ChatMessage::new(); 13 msg.nonce = nonce.to_be_bytes().to_vec(); 14 // Compute a MAC on the ciphertext (i.e. encrypt-then-mac) 15 msg.mac = self.hmac.as_ref().unwrap().mac(&ciphertext); 16 msg.ciphertext = ciphertext; 18 self.stream.as_mut().unwrap().send_msg(msg) 19 } 20 } This method creates a random nonce, encrypts the message using the cipher and under the nonce, and then creates a ChatMessage, which is a proto defined as follows: 1 // An encrypted chat message 2 message ChatMessage { 3 // AES-CTR random nonce that this message 4 // is encrypted under 5 bytes nonce = 1; 6 // The MAC (message authentication code) for this message 7 // The underlying algorithm we use is HMAC-SHA-256 8 bytes mac = 2; 9 // The ciphertext 10 bytes ciphertext = 3; 11 } The ChatMessage includes the nonce under which the message is encrypted, the MAC (which is computed for the ciphertext; i.e. encrypt-then-MAC), and the ciphertext itself. The send_encrypted method fills all these parameters, and then sends the message using the API. Finally, to receive messages, we start a new thread with the following code: 1 fn recv_thread(stream: &mut TcpStream, cipher: &AesCtr, hmac: &HMAC) { 2 loop { 3 let msg = MessageStream::<ChatMessage>::receive_msg(&mut *stream).unwrap(); 4 let ciphertext = msg.ciphertext; 5 let mac = msg.mac; 6 let nonce = usize::from_be_bytes(msg.nonce.try_into().unwrap()); 7 // Before decrypting, verify the MAC to protect against attacks 8 if hmac.verify(&ciphertext, &mac) { 9 let plaintext = cipher.decrypt(&ciphertext, nonce); 11 println!("recv< {}", String::from_utf8(plaintext).unwrap()); 12 } else { 13 eprintln!("Invalid MAC detected. Your connection is (probably) under an MITM attack."); 14 } 15 } 16 } We receive the message using the API, verify the MAC using the HMAC on the Peer, and if the MAC is valid, decrypt the message using the cipher, and print the result. Now we’re really done :) Now for the fun part! The demo can be found below, or in higher resolution on YouTube :) Note how because we’re using a random nonce, even when the same message “attack at dawn” is sent twice, it looks completely different over the network, so an attacker can’t tell that it’s the same This is the longest post I’ve wrote (so far :)). I’ve learned a lot from writing this (both the code and the post), and in general I believe that improving upon previous work is a very good way to learn new things. Here’s a short summary of what we did: • Learn about Protobufs • Implement a protobuf API over the network in Rust • Learn about Elliptic Curves, and implement them in Rust • Implement 2 very useful ECC algorithms in Rust: ECDH and ECDSA • Implement an authenticated handshake to establish a shared secret using the aforementioned algorithms • Impelement AES-CTR • Implement HMAC • Wrap it all up in a CLI Thanks for reading! As mentioned in the Intro, The full code for the project is available here.
{"url":"https://vaktibabat.github.io/posts/ecurvechat/","timestamp":"2024-11-10T04:25:48Z","content_type":"text/html","content_length":"300857","record_id":"<urn:uuid:a9e358bb-890b-4653-ad38-337c3ce2aaeb>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00062.warc.gz"}
Percentile Quantile Decile Quartile Someone to clarify to what each of this terms corresponds exactly ? and what if one refers to the 2nd decile or the 3rd quantile ? Percentile: you divide your population or sample by 100%. So the 12 percentile is 12% and the 4th percentile is 4% ? Quintile ? Decile ? Quantile ? Quartile ? never mind, got it. just have to divide by 100 depending on what you have. Just know that quintile is 5, decile 10, quartile 4 percentile 100 and the mediane for the most common one Quintile - divide population by 5 Decile - divide population by 10 Quantile - just refers to a point that you have used to divide up a population Quartile - divide population by 4 The 2-quantile is called the median The 3-quantiles are called tertiles or terciles → T The 4-quantiles are called quartiles → Q The 5-quantiles are called quintiles → QU The 6-quantiles are called sextiles → S The 10-quantiles are called deciles → D The 12-quantiles are called duo-deciles → Dd The 20-quantiles are called vigintiles → V The 100-quantiles are called percentiles → P The 1000-quantiles are called permilles → Pr P Miss Yiota, median is the middle no. mode is the most common, i hope ur sentence above was a typo.
{"url":"https://www.analystforum.com/t/percentile-quantile-decile-quartile/66511","timestamp":"2024-11-03T19:00:11Z","content_type":"text/html","content_length":"24917","record_id":"<urn:uuid:1c33bd14-bbe7-4d71-a2d8-7afe34dd5e04>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00095.warc.gz"}
Latent Class Analysis Latent Class Analysis (LCA) is a statistical technique that is used in factor, cluster, and regression techniques; it is a subset of structural equation modeling (SEM). LCA is a technique where constructs are identified and created from unobserved, or latent, subgroups, which are usually based on individual responses from multivariate categorical data. These constructs are then used for r further analysis. LCA models can also be referred to as finite mixture models. Questions Answered: What subtypes of disease exist within a given test? What domains are found to exist among the different categorical symptoms? Assumptions in latent class analysis: 1. Non-parametric: Latent class does not assume any assumptions related to linearity, normal distribution or homogeneity. 2. Data level: The data level should be categorical or ordinal data. 3. Identified model: Models should be justly identified or over identified and also the number of equations must be greater than the number of the estimated parameter. 4. Conditional independence: Observations should be independent in each class. Discover How We Assist to Edit Your Dissertation Chapters Aligning theoretical framework, gathering articles, synthesizing gaps, articulating a clear methodology and data plan, and writing about the theoretical and practical implications of your research are part of our comprehensive dissertation editing services. • Bring dissertation editing expertise to chapters 1-5 in timely manner. • Track all changes, then work with you to bring about scholarly writing. • Ongoing support to address committee feedback, reducing revisions. Key concepts and terms in LCA: • Latent classes: Latent classes are those observed variables that are derived from the unobserved variables. Latent classes divide the cases into their respective dimensions in relation to the variable. For example, cluster analysis groups similar cases and puts them into one group. The numbers of clusters in the cluster analysis are called the latent classes. In SEM, the number of constructs is called the latent classed. • Models in latent class analysis: To calculate the probability that a case will fall in a particular latent class, the maximum likelihood method is used. The maximum likelihood estimates are those that have a higher chance of accounting for the observed results. • Latent class cluster analysis: Latent class cluster analysis is a different form of the traditional cluster analysis algorithms. The old cluster analysis algorithms were based on the nearest distance, but latent class cluster analysis is based on the probability of classifying the cases. • Latent class factor analysis: Latent class factor analysis is different from the traditional factor analysis. Traditional factor analysis was based on the rotated factor matrix. In latent class factor analysis, the factor is based on the class, one class shows one factor. • Latent class regression analysis: One set of items is used to establish class memberships, and then additional covariates are used to model the variation in class memberships. Biemer, P. P., & Wiesen, C. (2002). Measurement error evaluation of self-reported drug use: A latent class analysis of the U.S. National Household Survey on Drug Abuse. Journal of the Royal Statistical Society, 165(1), 97-119. Chung, H., Flaherty, B. P., & Schafer, J. L. (2006). Latent class logistic regression: Application to marijuana use and attitudes among high school seniors. Journal of the Royal Statistical Society, 169(4), 723-743. Clogg, C. C. (1995). Latent class models. In G. Arminger, C. C. Clogg, & M. E. Sobel (Eds.), Handbook of statistical modeling for the social and behavioral sciences (pp. 311-359). New York: Plenum Clogg, C. C., & Goodman, L. A. (1984). Latent structure analysis of a set of multidimensional contingency tables. Journal of the American Statistical Association, 79(388), 762-771. Croon, M. A. (1991). Investigating Mokken scalability of dichotomous items by means of ordinal latent class analysis. British Journal of Mathematical and Statistical Psychology, 44(2), 315-331. Dayton, C. M. (1998). Latent class scaling analysis. Thousand Oaks, CA: Sage Publications. Flaherty, B. P. (2002). Assessing the reliability of categorical substance use measures with latent class analysis. Drug and Alcohol Dependence, 69(1), 7-20. Goodman, L. A. (1974). Exploratory latent structure analysis using both identifiable and unidentifiable models. Biometrika, 61(2), 215-231. Hagenaars, J. A. (1993). Loglinear models with latent variables. Newbury Park, CA: Sage Publications. Kolb, R. R., & Dayton, C. M. (1996). Correcting for nonresponse in latent class analysis. Multivariate Behavioral Research, 31(1), 7-32. Lanza, S. T., Collins, L. M., Lemmon, D. R., & Schafer, J. L. (2007). PROC LCA: A SAS procedure for latent class analysis. Structural Equation Modeling, 14(4), 671-694. Lazarsfeld, P. F., & Henry, N. W. (1968). Latent Structure Analysis. Boston: Houghton Mifflin. Loken, E. (2004). Using latent class analysis to model temperament types. Multivariate Behavioral Research, 39(4), 625-652. McCutcheon, A. L. (1987). Latent class analysis. Newbury Park, CA: Sage Publications. Mooijaart, A., & van der Heijden, P. G. (1992). The EM algorithm for latent class analysis with equality constraints. Psychometrika, 56(4), 699-716. Vermunt, J. K., & Magidson, J. (2002). Latent class cluster analysis. In J. A. Hagenaars & A. L. McCutcheon (Eds.), Applied latent class models (pp. 89-106). Cambridge, UK: Cambridge University Latent Variable and Latent Structure Models (Quantitative Methodology Series) Related Pages:
{"url":"https://www.statisticssolutions.com/free-resources/directory-of-statistical-analyses/latent-class-analysis/","timestamp":"2024-11-13T04:48:55Z","content_type":"text/html","content_length":"113102","record_id":"<urn:uuid:ae1dde2f-cc51-402d-b703-eeb094b816a0>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00399.warc.gz"}
Research Guides: Open Educational Resources (OER): Electronics This brief introduction serves as an orientation to the playlist as well as recommendations on how to make the most of these free resources to further your educational goals. Contains video lectures and associated study guides used to support the flipped classroom approach to teaching basic electricity and electronics. The aim of this textbook is to explain the design and function of electronic circuits and components. The text covers electronic circuit components, DC analysis, and AC analysis.
{"url":"https://libguides.olympic.edu/c.php?g=1052778&p=7676790","timestamp":"2024-11-06T16:50:16Z","content_type":"text/html","content_length":"46093","record_id":"<urn:uuid:4982c3d4-9192-406c-9e65-e4d0eed95c90>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00005.warc.gz"}
Y Intercept - Meaning, Examples | Y Intercept Formula - [[company name]] [[target location]], [[stateabr]] Y-Intercept - Explanation, Examples As a student, you are always working to keep up in school to avert getting swamped by topics. As guardians, you are always searching for ways how to motivate your children to be successful in academics and furthermore. It’s specifically essential to keep up in math because the theories continually founded on themselves. If you don’t comprehend a particular topic, it may hurt you in future lessons. Understanding y-intercepts is an ideal example of theories that you will work on in mathematics repeatedly Let’s look at the basics about y-intercept and take a look at some in and out for solving it. If you're a mathematical wizard or novice, this small summary will provide you with all the knowledge and tools you need to tackle linear equations. Let's jump directly to it! What Is the Y-intercept? To completely comprehend the y-intercept, let's think of a coordinate plane. In a coordinate plane, two perpendicular lines intersect at a point known as the origin. This point is where the x-axis and y-axis meet. This means that the y value is 0, and the x value is 0. The coordinates are noted like this: (0,0). The x-axis is the horizontal line going across, and the y-axis is the vertical line going up and down. Each axis is counted so that we can locate points on the plane. The counting on the x-axis rise as we move to the right of the origin, and the values on the y-axis increase as we drive up along the origin. Now that we have gone over the coordinate plane, we can specify the y-intercept. Meaning of the Y-Intercept The y-intercept can be considered as the initial point in a linear equation. It is the y-coordinate at which the graph of that equation overlaps the y-axis. In other words, it portrays the number that y takes while x equals zero. Further ahead, we will show you a real-life example. Example of the Y-Intercept Let's imagine you are driving on a long stretch of road with a single lane runnin in both direction. If you start at point 0, location you are sitting in your vehicle this instance, therefore your y-intercept will be equal to 0 – considering you haven't shifted yet! As you initiate driving down the road and started gaining speed, your y-intercept will increase before it archives some higher value once you arrive at a destination or stop to make a turn. Thus, once the y-intercept may not appear particularly applicable at first glance, it can provide knowledge into how objects change over time and space as we travel through our world. So,— if you're always stuck attempting to comprehend this theory, remember that nearly everything starts somewhere—even your journey down that straight road! How to Locate the y-intercept of a Line Let's comprehend regarding how we can discover this number. To guide with the process, we will outline a handful of steps to do so. Then, we will provide some examples to show you the process. Steps to Find the y-intercept The steps to discover a line that crosses the y-axis are as follows: 1. Search for the equation of the line in slope-intercept form (We will dive into details on this later in this tutorial), that should appear similar this: y = mx + b 2. Replace 0 in place of x 3. Calculate the value of y Now once we have gone through the steps, let's take a look how this procedure would work with an example equation. Example 1 Find the y-intercept of the line portrayed by the equation: y = 2x + 3 In this instance, we can substitute in 0 for x and figure out y to locate that the y-intercept is equal to 3. Therefore, we can state that the line goes through the y-axis at the point (0,3). Example 2 As another example, let's assume the equation y = -5x + 2. In such a case, if we substitute in 0 for x once again and work out y, we find that the y-intercept is equal to 2. Thus, the line crosses the y-axis at the coordinate (0,2). What Is the Slope-Intercept Form? The slope-intercept form is a technique of depicting linear equations. It is the most popular form utilized to convey a straight line in scientific and mathematical applications. The slope-intercept formula of a line is y = mx + b. In this operation, m is the slope of the line, and b is the y-intercept. As we checked in the last section, the y-intercept is the point where the line intersects the y-axis. The slope is a scale of angle the line is. It is the rate of deviation in y regarding x, or how much y shifts for each unit that x changes. Now that we have went through the slope-intercept form, let's see how we can employ it to locate the y-intercept of a line or a graph. Find the y-intercept of the line signified by the equation: y = -2x + 5 In this instance, we can see that m = -2 and b = 5. Thus, the y-intercept is equal to 5. Consequently, we can conclude that the line goes through the y-axis at the coordinate (0,5). We can take it a step higher to explain the angle of the line. Founded on the equation, we know the slope is -2. Replace 1 for x and work out: y = (-2*1) + 5 y = 3 The solution tells us that the next point on the line is (1,3). When x replaced by 1 unit, y replaced by -2 units. Grade Potential Can Support You with the y-intercept You will revisit the XY axis over and over again during your science and math studies. Theories will get more complicated as you move from solving a linear equation to a quadratic function. The time to master your comprehending of y-intercepts is now prior you straggle. Grade Potential gives expert teacher that will help you practice solving the y-intercept. Their personalized interpretations and solve problems will make a positive difference in the outcomes of your examination scores. Whenever you believe you’re lost or stuck, Grade Potential is here to guide!
{"url":"https://www.losangelesinhometutors.com/blog/y-intercept-meaning-examples-y-intercept-formula","timestamp":"2024-11-11T04:47:06Z","content_type":"text/html","content_length":"76627","record_id":"<urn:uuid:0b51e019-3a46-4af4-a120-4ba6b723c8dd>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00462.warc.gz"}
You are purchasing a 30-year, zero-coupon bond. The yield to maturity is 8.68 percent and the... You are purchasing a 30-year, zero-coupon bond. The yield to maturity is 8.68 percent and the... You are purchasing a 30-year, zero-coupon bond. The yield to maturity is 8.68 percent and the face value is $1,000. What is the current market price? Current price = $82.32 Zero coupon bond pays no coupon. PV of bond = Face vale/(1 + required rate)^years to maturity Let's put all the values in the formula = 1000/ (1 + 0.0868)^30 = 1000/ (1.0868)^30 = 1000/ 12.1476 = $82.32 Feel free to comment if you need further assistance J Pls rate this answer if you found it useful.
{"url":"https://justaaa.com/finance/85909-you-are-purchasing-a-30-year-zero-coupon-bond-the","timestamp":"2024-11-09T23:31:19Z","content_type":"text/html","content_length":"40242","record_id":"<urn:uuid:d0909147-1b3c-4afb-b82e-9eca05544397>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00793.warc.gz"}
Class 7 Maths Triangle And Its Properties | MCQs MCQ FOR CLASS 7 MATHEMATICS | The Triangle and its Properties – Chapter 6 1. How many altitudes can a triangle have? A. 1 B. 2 C. 3 2. A ——— connects a vertex of a triangle to the mid-point of the opposite side. A. Altitude B. Median C. Opposite side 3. An ———- angle of a triangle is equal to the sum of its interior opposite angles. A. Exterior angle B. Interior angle C. Adjacent angle 4. The total measure of the three angles of a triangle is ———— A. 90 B. 180 C. 360 5. If the two angles of a triangle are 50 degree and 70 degree, then the measure of third angle is —————— A. 50 B. 60 C. 70 6. A triangle in which all the three sides are of equal lengths is called an —————– A. Equilateral triangle B. Scalene triangle C. Isosceles triangle 7. A triangle in which two sides are of equal lengths is called an ————— A. Scalene triangle B. Equilateral triangle C. Isosceles triangle 8. The sum of the lengths of any two sides of a triangle is ——— than the third side. A. greater than B. less than C. equal to 9. In a right angled triangle, the side opposite to the right angle is called the ————- A. leg B. hypotenuse C. altitude 10. The difference between the lengths of any two sides is ——– than the length of the third side. A. greater than B. less than C. altitude 11. In a right angled triangle, if one angle is 45 degree then the measure of the third angle is ————- A. 90 B. 45 C. 180 12. In a triangle if all angles are equal, then the measure of each angle is ———- A. 45 B. 60 C. 90 13. If the Pythagoras property holds, the triangle must be ———— A. right angled B. acute angled C. obtuse angled 14. Which is the longest side of a right angle? A. altitude B. hypotenuse C. legs 15. The perpendicular line segment from a vertex of a triangle to its opposite side is called an —— of the triangle. A. altitude B. median C. base 1. 3 2. Median 3. Exterior angle 4. 180 5. 60 6. Equilateral triangle 7. Isosceles triangle 8. greater than 9. hypotenuse 10. less than 11. 45 12. 60 13. right angled 14. Hypotenuse 15. altitude 1 Response 1. hi this is really helpful
{"url":"https://www.learnmathsonline.org/cbse-class-7-math/class-7-maths-triangle-and-its-properties-mcqs/","timestamp":"2024-11-01T22:56:05Z","content_type":"text/html","content_length":"74644","record_id":"<urn:uuid:684bd62c-135a-4ee1-bf86-aefbe8aec5e7>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00580.warc.gz"}
Implicit finite difference in time-space domain with the helix transform Next: Previous implementations of the Up: Barak: Implicit helical finite Previous: Barak: Implicit helical finite Implicit finite difference is a widely used method in geophysical data processing, commonly utilized for the approximation of the differential wave equation when extrapolating wavefields. In comparison with explicit methods, implicit finite difference has unconditional numerical stability, thus enabling larger finite differencing steps during the computation. This is an attractive prospect, as the implication is a shorter processing time for wave extrapolation. However, implementing implicit finite difference in a multidimensional problem is not trivial. The method requires the solution of a sparse set of linear equations per each propagation step. The cost of solving these linear equations, and the computational complexity required, becomes unreasonable at anything greater than 2 dimensions. It is possible to split the solver so that only one dimension of the problem is computed at each propagation step, thus reducing the complexity and possibly the amount of resources required for the computation. However, this method may introduce azimuthal anisotropy to the solution if the actual differential equation being solved is non-separable. Two concepts combine to greatly aid us in this matter. The first is the helix approach, envisaged in Claerbout (1997). The helix effectively enables us to treat multidimensional problems as one dimensional problems. Specifically, it enables execution of multidimensional convolutions as 1-D convolutions, and likewise for deconvolutions. Convolution equates to polynomial multiplication, while deconvolution equates to polynomial division. The application of convolution or deconvolution to a data set is likened by Claerbout to winding a coil (the filter coefficients) around the data, where the data is treated as a long set of traces combined end-to-end along their fast axis, as shown in Figure 1. Figure 1. Sketch of the helix concept - convolution takes place by winding a "coil" of filter coefficients over a "coil" of data values (Claerbout (1997)) The second is the concept of spectral factorization. The purpose of spectral factorization is to input a series of coefficients, and create an alternate set of causal filter coefficients which have a causal inverse. The result will usually be a minimum-phase filter. The autocorrelation of this new set of filter coefficients recreates the original values of the input series. The upshot of this is that application of the original series' coefficients to a dataset is akin to convolving the data with the spectrally factorized filter coefficients in one direction, and then convolving again in the other direction (``coiling'' and then ``uncoiling'' the filter coefficients over the data). This effectively applies the filter and it's time reverse (adjoint) to the data, which amounts to multiplying the data by the original input series' coefficients. In the case of finite differencing, the ``input'' series might be the Laplacian, which when made to traverse over the data has the effect of a 2nd derivative approximation. The spectral factorization concept enables us to represent a finite-difference operator as a forward and reverse convolution of filter coefficients. The helix concept disconnects us from the dimensionality of the problem, and enables simple application of 1D convolution and deconvolution to multidimensional problems. Together they enable an alternate method of propagating wavefields - by treating the finite-difference solution as a set of convolutions and deconvolutions. My aim is to use the helix transform to propagate wavefields in the time-space domain, using an implicit finite-difference approximation of the 2-way acoustic wave equation. For this purpose, I formulate the proper implicit finite-difference weights with regard to the order of the difference approximation and the dimensionality of the problem, and use spectral factorization to create a causal filter with a causal inverse, whose convolution will equal those coefficients. These filter coefficients are then applied to the wavefield by deconvolution, using the helical coordinate Implicit finite difference in time-space domain with the helix transform Next: Previous implementations of the Up: Barak: Implicit helical finite Previous: Barak: Implicit helical finite
{"url":"https://sepwww.stanford.edu/data/media/public/docs/sep140/ohad1/paper_html/node1.html","timestamp":"2024-11-10T17:20:55Z","content_type":"text/html","content_length":"8615","record_id":"<urn:uuid:1911f047-3d71-4989-95a4-c049808a990e>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00630.warc.gz"}
Spring 2006 Consumable Credentials in Logic-Based Access Control Friday, May 12^th, 2006 from 12-1 pm in NSH 1507. Logic-based access control has many advantages. Policies are easy specify, easy to understand, and easy to prove correct. To gain access to a resource, users create a logical proof which is then checked by the reference monitor before access is granted. As long as the proof is valid and meets the access-control policy, which is also specified in logic, the user is allowed access. Unfortunately, once a proof has been constructed, it can be copied and reused at will. This prevents the system from implementing any sort of bounded-use policy. We develop a means to overcome this deficiency in logic-based access control. Because resources can now be consumed, we also develop techniques to prevent resources from being wasted, even in a setting where proofs are generated in a distributed manner. In Partial Fulfillment of the Speaking Requirement From Adequate Proofs to Efficient Proofs Friday, May 26^th, 2006 from 12-1 pm in NSH 1507. In the last few decades the idea of mechanizing mathematics has made a lot of progress. It is becoming routine for mathematicians (and computer scientists) to be able to put definitions, theorems, proofs, and other mathematical gadgets into a form that computers can manipulate. There are already many tools available that can confirm mathematical truths to a much higher degree of confidence than hand-checked human-written proofs. However, these formal proofs typically blow up in size compared to "paper" proofs, because every last formal detail of the proof must (in principle) be included. This talk describes an attempt to address this problem: it turns out that a more compact representation of proofs comes naturally out of thinking about what it means for a formal system to be an adequate representation of the informal mathematics it’s meant to encode. (In Partial Fulfillment of the Speaking Requirement) Web contact: sss+www@cs
{"url":"http://www.cs.cmu.edu/~sss/Spring2006.html","timestamp":"2024-11-10T20:47:39Z","content_type":"text/html","content_length":"19992","record_id":"<urn:uuid:65394b5f-be94-41a1-987c-8e6f068626b3>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00540.warc.gz"}
The Garden with Insight garden simulator v1.0: Auto Operations Garden with Insight v1.0 Help: Auto Operations The plant environment control component provides mechanisms for applying irrigation water, fertilizer, lime, and pesticide or for simulating grazing or drainage systems. Drainage via underground drainage systems is treated as a modification of the natural lateral subsurface flow of the area. Drainage is simulated by indicating which soil layer contains the drainage system and the time required for the drainage system to reduce plant stress. The drainage time in days replaces the travel time in equation 44 for the layer containing the system. The EPIC user has the option to simulate dryland or irrigated agricultural areas. Sprinkler or furrow irrigation may be simulated and the applications may be scheduled by the user or automatically. As implied, the user scheduled option allows application dates and rates to be inputted. With the automatic option, the model decides when and how much water to apply. Required inputs for the automatic version include a signal to trigger applications (the three trigger choices include: plant water stress level (0-1), plow layer soil water tension in kPa, or root zone soil water deficit in mm), the maximum volume applied to each crop in mm, the runoff fraction, minimum and maximum single application volumes in mm, and the minimum time interval between applications in days. Two modes of application, rigid and flexible, are available. Rigid mode: 1. User schedule - the exact input volumes are applied on specified dates. 2. Automatic option - maximum single application volumes are applied when triggered. Flexible mode: 1. User schedule - the application volume is the minimum of the specified volume, the maximum single application volume, and the volume required to fill the root zone to field capacity. 2. Automatic option - the application volume is the minimum of the maximum single application volume and the volume required to fill the root zone to field capacity. Also, irrigation does not occur when the application volume derived from the appropriate mode and options (except for rigid, user-scheduled) is less than the input minimum single application volume. The application mode (rigid or flexible) is fixed for the entire crop rotation. However, the trigger value and criterion (plant water stress level, soil water tension, or root zone water deficit) and the runoff fraction may be changed at any time during the operation schedule. Also, a combination of user and automatic scheduling is permitted. Fertilizer application is similar to irrigation - scheduling may be input or automatic and rigid and flexible modes are available. Required inputs for the automatic version include a trigger (plant N stress level (0-1)), maximum annual N applied to a crop in kg/ha, and minimum time between applications in days. Automatic fertilizing at planting is also optional. Rigid mode: 1. User schedule - The exact input rates of N and P are applied at specified depths on scheduled dates. 2. Automatic option - a fraction of the annual maximum N rate for the crop is applied when triggered. The application fraction and the maximum rate are inputs. Also P is applied at a rate necessary to bring the plow layer (0.2 m) P concentration to a level specified at the start of a simulation. All automatic applications are placed in the second soil layer. Flexible mode: 1. User schedule - the model samples N and P concentration in the root zone and compares with user preset N and P concentrations. Applications occur on schedule at specified depths and rates if the root zone N and P concentrations do not exceed user standards, otherwise, applications are delayed until N and P concentrations are depleted below user standards. 2. Automatic option - the N application rate is the difference between the preset rate (application fraction times the maximum annual rate) and the root zone N content. The P application strategy is the same as in the rigid mode. Other features and limitations include only mineral N (in NO3 form) and P may be applied automatically. Organic N and P and ammonia are applied by user scheduling. The maximum annual N application for a crop can be changed at planting. A combination of user and automatic scheduling is permitted. Automatic applications occur only when N is the active crop growth constraint even though the trigger value is reached. Thus, the annual N and P application rates vary according to the crop's needs, the soil's ability to supply those needs, and the magnitude of the N stress relative to water and temperature stresses. EPIC simulates the use of lime to neutralize toxic levels of Al and/or to raise soil pH to near- optimum levels. Different algorithms are used to estimate lime requirements of &quothighly weathered& quot soils (Oxisols, Ultisols, Quartzipsamments, Ultic subgroups of Alfisols, and Dystric suborders of Inceptisols) (Sharpley et al., 1985) and other soils. The highly weathered soils have large amounts of variable-charge clays. Moderate amounts of lime are required to increase their pH to about 5.5 and convert extractable Al to more inactive forms. However, the pH of these soils is highly buffered above pH 5.5, and very large amounts of lime are required to raise the pH to near 7.0. As a aResult, soils with variable charge clays are usually limed only to reduce Al saturation to acceptable levels. The Al saturation of each soil layer is estimated with the equations (Jones, 1984) [Equation 330] and [Equation 331] where ALS is the Al saturation of soil layer l in percent calculated as KCL-extractable Al divided by effective cation exchange capacity (ECEC), BSA is the base saturation calculated from cation exchange capacity (CEC) determined by the NH4OAc (pH = 7.0) method in percent, C is the organic carbon content in percent, and PH is the soil pH. Equation 330, 331 ALS = 154.2 - 1.017 * BSA - 3.173 * C - 14.23 * PH, if PH &lt= 5.6 ALS = 0.0, if PH &gt 5.6 same but added bounds of 0-95 if ph &lt= 5.6 ALS = AluminumSaturation_pct BSA = baseSaturation_pct C = organicC_pct PH = soilpH For highly weathered soils, the lime required to neutralize toxic Al in the plow layer is estimated with the equation [Equation 332] where RLA is the lime required to neutralize Al in t/ha, ECEC is the effective cation exchange capcity in cmol(p+)/kg, BD is the soil bulk density in t/m3, and PD is the plow depth in m. (These are not by layer.) Equation 332 RLA = 0.1 * ALS * ECEC * PD * BD deltaBSA = 0.1 * ALS * ECEC SWT = PD * BD RLA = deltaBSA * SWT so it is the same RLA = LimeToNeutralizeAlForHighlyWeatheredSoil_kgPha ALS = aluminumSaturation_pct ECEC = effectiveCEC_cmolpplusPkg deltaBSA = changeInBaseSaturationToOffsetAlSat_pct SWT = totalSoilWeightInMaxTillageDepth_tPha PD = plowDepth_m BD = bulkDensity_tPm3 ECEC is calculated as SMB/ALS (Soil Survey Staff, 1982), where SMB in cmol/kg is the sum of the bases extracted by NH4OAc (pH = 7.0). The constant 0.1 (in equation 332) converts cmol(p+)/kg extractable aluminum to equivalent CaCO3 in t/ha, assuming 2 cmol(p+) CaCO3 are required to completely neutralize 1 cmol(p+) extractable Al (Kamprath, 1970). At the end of each year, enough lime is applied to meet the lime requirement (RLA) if RLA &gt= 1 t/ha. If RLA &lt 1 t/ha no lime is applied. When lime is applied, the plow layer pH is raised to 5.4 and ALS is reduced to 0. For EPIC, soil acidification and decreasing base saturation are caused by addition of fertilizer N and symbiotic N fixation by legumes. All fertilizer N is assumd to derived from anhydrous ammonia, urea, ammonium nitrate, or mixtures of these with equivalent acidifying effects. The CaCO3 equivalent of fertilizer or fixed N is assumed to be 1.8 kg CaCO3/kg N (Pesek et al., 1971). This is within the range of variation reported by Pierre et al. (1971) for fertilized corn and by Nyatsanga and Pierre (1973) and Jarvis and Robson (1983) for legumes. At the end of each year of simulation, the plow layer pH is reduced to reflect the change in base saturation caused by N fertilizer and N fixation. The change in base saturation is computed with the equation [Equation 333] where FN is the amount of N fertilizer added during the year in kg/ha and WFX is the amount of N fixation by legumes in kg/ha. Equation 333 deltaBSA = 0.036 * (FN + WFX) / (PD * BD * CEC) deltaBSA = 0.036 * (FN + WFX) / SWT SWT = PD * BD, but CEC term is in next equation (PH) deltaBSA = ChangeInBaseSaturationByNAdded_frn FN = nFertilizerAdded_kgPha WFX = nFixation_kgPha SWT = totalSoilWeightInMaxTillageDepth_kgPha PD = plowDepth_m BD = bulkDensity_tPm3 CEC = cationExchangeCapacity_cmolPkg The PH value is reduced by using the equation [Equation 334] where the constant 0.5 approximates the slope of the relationship between pH and deltaBSA for several soils when the values of BSA are between 60 and 90 (Peech, 1965). Equation 334 PH = PH(o) - 0.05 * deltaBSA PH = PH(o) - 0.05 * 100 * deltaBSA / CEC (here is the CEC term from the deltaBSA equation) PH = soilpH = SoilpHAfterChangeFromNAdded deltaBSA = changeInBaseSaturationByNAdded_pct CEC = cationExchangeCapacity_cmolPkg For other soils, the lime requirement is the amount of time needed to raise soil pH to 6.5 according to the equation [Equation 335] where deltaBSA is the change in base saturation needed to raise soil pH to 6.5. The constant 0.05 converts deltaBSA in percent to equivalent CaCO3 in t/ha, assuming that applied CaCO3 reacts with equivalent unsaturated CEC. Equation 335 RLA = 0.05 * PD * BD * CEC * deltaBSA RLA = 0.05 * SW * CEC * deltaBSA PD * BD = SW, so this is the same RLA = LimeFor6p5PHForNonHighlyWeatheredSoil_kgPha PD = plowDepth_m BD = bulkDensity_tPm3 SW = totalSoilWeightInMaxTillageDepth_kgPha CEC = cationExchangeCapacity_cmolPkg deltaBSA = changeInBaseSaturationToRaisePHTo6p5_pct The deltaBSA is estimated with the relation [Equation 336]. Equation 336 deltaBSA = min((6.5 - PH) / 0.023, 90 - BSA) deltaBSA = ChangeInBaseSaturationToRaisePHTo6p5_pct PH = soilpH BSA = baseSaturation_pct For soils that are not highly weathered, lime application is simulated if at the end of the year, RLA &gt 2.0 t/ha. When lime is applied, pH is changed to 6.5, base saturation is increased by deltaBSA, and ALS is set to 0. This new equation, derived from equations 336 and 335, estimates the new pH value when a given amount of lime is added. The derivation is as follows. deltaBSA = min((6.5 - pH) / 0.023, 90 - BSA) to reach a pH of 6.5 RLA = 0.01 * SWT * ECEC * deltaBSA deltaBSA = RLA / (0.05 * SWT * ECEC) substituting equation 336 for deltaBSA and ignoring the minimum, (newpH - pH) / 0.023 = RLA / (0.05 * SWT * ECEC) now solving for newpH, newpH = 0.023 * RLA / (0.05 * SWT * ECEC) + pH Note: We did not include the pest part of EPIC in this version of Garden with Insight. The three pests considered by EPIC are insects, weeds, and plant diseases. The effects of all three pests are expressed in the EPIC pest factor. Crop yields are estimated at harvest as the product of simulated yield and pest factor. The pest factor ranges from 0.0 to 1.0 -- 1.0 means no pest damage and 0.0 means total crop destruction by pests. The pest factor is simulated daily as a function of temperature, moisture, and ground cover [Equation 337] where PSTI is the accumulated pest index for day k, T(mn) is the minimum temperature for day i in degrees C, RFS is the accumulated rainfall for 30 days preceding day i in mm, RFS(T) is the threshold 30-day rainfall amount in mm, CV is the ground cover (live biomass and crop residue) on day i in t/ha, and CV(T) is the threshold cover value in t/ha. When T(mn) is less than 0.0, the pest index is reduced using the equation [Equation 338]. Thus, the pest index grows rapidly during warm moist periods with adequate ground cover and is reduced by cold temperatures. This general pest index is an attempt to account for major differences in pest problems related to climate variability. Equation 338 if RFS &gt RFS(T) and CV &gt CV(T) and T(mn,i) &gt 0.0 PSTI = (sum with i from 1 to k of) T(mn,i) * (1.0 + RFS - RFS(T)) if T(mn,i) &lt 0.0 PSTI(k) = PSTI(k-1) + T(mn) if RFS &gt RFS(T) and CV &gt CV(T) and T(mn,i) &gt 0.0 PSTI(k) = PSTI(k-1) + T(mn,i) * (1.0 + (RFS - RFS(T)) / 100) otherwise the same PSTI = PestPopulationIndex T(mn,i) = minTempForDay_degC RFS = previousThirtyDaysRainfall_mm (computed every day) RFS(T) = thresholdThirtyDayRainfallForPests_mm CV = aboveGroundBiomassAndResidue_tPha CV(T) = thresholdBiomassAndResidueForPests_tPha The pest index is reduced using the equation [Equation 339] where PSTE is the pesticide kill fraction ranging from near 0.0 to near 1.0. Thus, if the kill fraction approaches 1.0, the pest index is reduced nearly 1000 units. Equation 339 PSTI(k) = PSTI(k-1) - 1000 * PSTE deltaPSTI = PestFactorReductionFromPesticide PSTE = killFractionForPesticide_frn At harvest, the pest factor is computed from the pest index using the equation [Equation 340] where PSTF is the pest factor used to adjust crop yield, PSTM is the minimum pest factor value for a crop, and k is time since last harvest in days. Equation 340 PSTI* = PSTI / k PSTF = 1.0 - (1.0 - PSTM) * (PSTI* / (PSTI* + exp(2.7 - 0.499 * PSTI*))) PSTF = PestFactor_frn PSTM = minPestWeedDiseaseFactor_frn PSTI = pestPopulationIndex PSTI* = pestPopulationIndex / daysSinceLastHarvest k = daysSinceLastHarvest Furrow Diking Furrow diking is the practice of building small temporary dikes across furrows to conserve water for crop production. Since they reduce runoff, they may also aid in erosion control. The EPIC furrow diking model allows construction of dikes for any ridge spacing and at any interval down the furrows. Dikes may be constructed or destroyed mechanically on any day of the year. If estimated runoff for a particular event (rainfall) exceeds the dike storage volume (average for all furrows in the field), overtopping occurs and all of the estimated runoff is lost. If not, all of the rainfall infiltrates and is available for plant use. When runoff destroys the dikes, the model rebuilds them automatically. Rainstorms that do not overtop the dikes cause settling and, thus, reduce storage volume. Settling is estimated with the equation [Equation 341] where H(o) is the dike height before settling, H is the dike height after settling, and Y is the USLE estimate of soil loss (sediment yield) in t/ha. Ridge height is also reduced with the settling function contained in equation 341. The dikes are automatically rebuilt when H/H(o) &lt 0.7. Equation 341 H = H(o) * exp(-0.1 * Y) much different. H = DikeHeightAfterSettlingDueToRain_mm Y = totalErosion_tPha Dike volume The dike storage volume is estimated by assuming that the furrow and the dike are triangular and that the dike side slopes are 2:1. Given the dike and ridge heights, the dike interval, and the slope down the furrow, the volume can be calculated directly. There are two possible dike configurations that require slightly different solutions. Normal case - lower slope Normally, the dike interval is relatively short (1-3 m) and the slope along the furrow is relatively flat (&lt1.0%). When the dike is full, water extends from the top of the downslope dike up the furrow to a point above the toe (bottom) of the upslope dike. The volume is calculated by using cross-sectional areas at the toes of the two dikes. This approach computes the volume in three parts (1. between the top and the toe of the downslope dike, 2. between the toes of the two dikes, and 3. between the toe and the waterline on the upslope dike). Beginning at the centerline of the downslope dike, the volume equations are DV(I) = 1/2 * H * D(2) * W(2) (Equation 342) DV(II) = 1/4 * (DI - 4 * H) * (D(2) * W(2) + D(3) * W(3)) (Equation 343) DV(III) = 1/4 * (XD - DI + 2 * H) * D(3) * W(3) (Equation 344) where DV is the dike volume between cross sections in m3, H is the dike height in m, D is the water depth in m, W is the water surface width in m, DI is the dike interval in m, XD is the distance from the center of the downslope dike to the waterline on the upslope dike in m, and subscripts 2 and 3 refer to cross sections 2 and 3. Cross section 2 is at the toe of the downslope dike and cross section 3 is at the toe of the upslope dike. Water depth is calculated with the equations D(2) = H - 2 * S * H (Equation 345) D(3) = H - S * (DI - 2 * H) (Equation 346) where S is the slope in m/m along the furrow (which I think is the same as the land surface slope). Water surface width is a function of depth and ridge spacing, RS in mm W = RS * D / H (Equation 347) The distance XD is computed with the equations XD = DI - 2 * (H - DZ) (Equation 348) DZ = H - S * XD (Equation 349) where DZ is the water line eleveation on the upslope dike. The constant 2 in equation 348 comes from the assumed 2:1 dike upslopes. Simultaneous solution of equations 348 and 349 yields XD = DI / (1 + 2 S) (Equation 350) Substituting D, W, and XD into equations 342, 343, and 344 and summing gives DV = 1/4 * RS / H * (H2 * (1-2S)2 * (DI - 2H) + (H - S(DI - 2H))2 * (DI / (1+2S) - 2H)) (Equation 351) Equation 351 is divided by the total surface area of a furrow dike to convert volume from m3 to mm [Equation 352]. DV = DikeVolumeForLowSlope_mm DI = dikeInterval_m H = dikeHeight_m S = slopeSteepness_mPm More unusual case - higher slope In the simpler and more unusual dike configuration, the upslope waterline does not extend to the toe of the upslope dike. Only one cross section is involved and the volume is computed into two parts. Equation 342 is used to calculated the most downslope volume, and the upslope volume is calculated with the equation DV(2) = 1/4 * D(2) * W(2) * H * (1/S - 2) (Equation 353) Adding equations 342 and 353, substituting D and W, and converting from m3 to mm gives [Equation 354]. Equation 354 DV = 250 / (DI * H) * (sqr(H) * sqr(1 - 2 * S) + sqr(H - S * (DI - 2 * H)) * (DI / (1 + 2 * S) - 2 * H)) DV(2) = 250 * sqr(H) * sqr(1 - 2 * S) / (S * DI) DV = 250 * 1000 * FDSF / (DI * RH) * (sqr(H) * sqr(1 - 2 * S) * (DI - 2 * H) + sqr(H - S * (DI - 2 * H)) * (0.5 * DI / (S + 0.5) - 2 * H) only difference here is extra term and extra factor (mentioned in publication) DV(2) = 250 * 1000 * FDSF / (RH * DI) * H / S * sqr(H) * sqr(1 - 2 * S) DV = DikeVolumeForHighSlope_mm DI = dikeInterval_m H = dikeHeight_mm RH = ridgeHeight_mm S = slopeSteepness_mPm FDSF = fractDikeVolAvailForWaterStorage_frn Thus, the average dike volume of a field is estimated with equation 352 or 354 as dictated by slope and dike height and interval. However, no field is exactly uniform in slope; dike and ridge heights vary, and furrow and dike side slopes may not be triangular. Therefore, the model provides a user- controlled dike efficiency factor to allow for varying conditions across a field. The dike efficiency factor also provides for conservative or optimistic dike system design. Note: We did not include the grazing part of EPIC in this version of Garden with Insight. Livestock grazing is simulated as a daily harvest operation. Users specify daily grazing rate in kg/ha, minimum grazing height in mm, harvest efficiency, and date grazing begins and ends. Harvest efficiency is used to estimate the fraction of grazed plant material used by animals - not returned as manure, etc. Any number of grazing periods may occur during a year and the grazing schedule may vary from year to year within a rotation. Grazing ceases when forage height is reduced to the user-specified cutoff value and resumes automatically when new growth exceeds the cutoff height if the grazing period has not expired. Note: We did not include the economics part of EPIC in this version of Garden with Insight. The economic component of EPIC is more accurately represented as a crop budget and accounting subsystem. The algorithms keep track of the costs of producing and marketing the crops. Costs (and income) are divided into two groups: those costs which do not vary with yield and those that do. These groups will be addressed in turn. All cost registers are cleared at harvest. All operations after harvest are charged to the next crop in the cropping sequence. Tillage and (preharvest) machine operation costs are assumed to be independent of yield. These operation costs must be calculated outside of EPIC and are inputted as one variable into the tillage file. This cost cell contains all costs associated with the single operation or activity (e.g., a chiseling activity includes fuel, labor, depreciation, repair, interest, etc., for both the tractor and the chisel). A budget generator program like the Micro Budge Management System (MBMS) (McGrann et al., 1986) is convenient for making these calculations. This is an updated interaction program developed from the Enterprise Budget Calculator (Kletke, 1979). The MBMS is more compatible with EPIC in that it has output capabilities to itemize cost by machine operation. This information (when converted to metric units) can be input directly into the equiment file in EPIC. Farm overhead, land rent, and other fixed costs can be charged to the crop by first creating null operations in the equipment file with machine number and cost information only and then triggering the cost in EPIC with a null activity. Government payments can be credited by using negative cost entries in the same Costs which are yield and management dependent are entered into EPIC in two regions of the input data. Seed costs, seeding rates, and crop prices are entered in the crop parameter file for each crop code. Seed costs are calculated as the product of seeding rate and cost per kilogram. Amendment costs are calculated similarly. The amendments include elemental N and P, irrigation water, and lime. Total cost per hectare is based on the product of crop yield and net crop price. Net crop price is the market price minus the harvest, hauling, and other processing costs which are yield dependent. The net price must be determined outside EPIC. When valid cost figures are entered into these EPIC input cells, the model will return annual cost and returns by crop. EPIC budget information is valuable not only for profit analyses but also risk analyses, since the annual distributions of profits and costs can be captured. Risk analysis capability greatly enhances the analytical value of EPIC for economic studies. The greatest value of EPIC to economic analysis is not its internal economic accounting, but the stream of physical outputs on daily, monthly, annual, or multi- year periods that can be input into economic models, budget generators, and risk analysis systems. EPIC estimates crop yields, movement of nutrients and pesticides, and water and sediment yields. Changes in inputs necessary to respond to changes in managment, soil quantity and quality, climate (i.e., global warming), droughts etc., are also estimated. These outputs become inputs into economic and natural resource models facilitating comprehensive analyses of alternative policies and programs. You can import EPIC data files into Garden with Insight with much caution -- see Importing EPIC data files.
{"url":"https://kurtz-fernhout.com/help100/00000293.htm","timestamp":"2024-11-06T18:49:45Z","content_type":"text/html","content_length":"30355","record_id":"<urn:uuid:ea077dab-441b-4a49-b7ac-8533c33a870e>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00878.warc.gz"}
Kids.Net.Au - Encyclopedia > Hamiltonian path problem Hamiltonian cycle Hamiltonian circuit problem in graph theory is to find a through a given graph which starts and ends at the same and includes each vertex exactly once. It is a special case of the traveling salesman problem, obtained by setting the distance between two cities to unity if they are adjacent and infinity otherwise. Like the traveling salesman problem, the Hamiltonian cycle problem is NP-complete. The requirement that the path start and end at the same vertex distinguishes it from the Hamiltonian path problem. The problem is named after Sir William Rowan Hamilton. External links All Wikipedia text is available under the terms of the GNU Free Documentation License
{"url":"http://encyclopedia.kids.net.au/page/ha/Hamiltonian_path_problem","timestamp":"2024-11-05T06:48:57Z","content_type":"application/xhtml+xml","content_length":"12406","record_id":"<urn:uuid:09b7c783-bf29-478a-8254-a1205ddb0de9>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00110.warc.gz"}
Screw Torque Calculator - Savvy Calculator Screw Torque Calculator About Screw Torque Calculator (Formula) A Screw Torque Calculator is a tool used to determine the amount of torque required to tighten a screw or bolt to a specified level of tightness. Torque is the rotational force applied to the fastener to create tension and secure the joint properly. Proper torque application is crucial for ensuring that the fastener doesn’t become loose over time or get damaged due to excessive force. The formula for calculating screw torque involves several factors: 1. Coefficient of Friction (μ): The coefficient of friction between the screw and the materials being fastened together. It represents the resistance to motion between the threads of the screw and the mating surface. 2. Thread Pitch (P): The distance between successive threads on the screw. It is usually measured in millimeters or inches. 3. Thread Radius (r): The effective radius of the screw, which is half the nominal diameter. It is also measured in millimeters or inches. 4. Applied Force (F): The force applied to the screw, usually in Newtons (N) or pounds-force (lbf). The formula for screw torque calculation is: Torque (T) = (F * μ * r) / (2 * π * P) where: T = Torque in Newton-meters (Nm) or pound-feet (lbf-ft) F = Applied Force in Newtons (N) or pounds-force (lbf) μ = Coefficient of Friction (dimensionless) r = Thread Radius in meters (m) or inches (in) π = Pi (approximately 3.14159) P = Thread Pitch in meters (m) or inches (in) When using the calculator, make sure to use consistent units for all the variables. Different types of screws and applications may have varying coefficients of friction, so it is essential to refer to the manufacturer’s guidelines or consult engineering handbooks for specific values. It’s important to note that over-torquing or under-torquing a screw can lead to problems. Under-tightening may result in a weak joint, while over-tightening can cause the fastener to break or strip the threads, leading to potential failures. Using a screw torque calculator helps ensure that the right amount of force is applied to achieve the desired clamping force and joint integrity. Always follow the recommended torque values specified by the screw manufacturer or engineering standards for your particular application to achieve optimal results. Leave a Comment
{"url":"https://savvycalculator.com/screw-torque-calculator","timestamp":"2024-11-08T07:52:13Z","content_type":"text/html","content_length":"143083","record_id":"<urn:uuid:52213438-5681-44da-89de-e8986b44be69>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00677.warc.gz"}
4^th Grade (MAT) Targeted Standard (AR) Algebraic Reasoning (OA) Operations and Algebraic Thinking Learners will analyze patterns and relationships to generate and interpret numerical expressions. MAT-04.AR.OA.02 Identify and apply the properties of operations for addition, subtraction, multiplication, and division and justify thinking. Properties of Operations • MAT-02.AR.OA.02 Apply the properties of operations to solve addition and subtraction equations and justify thinking. • MAT-03.AR.OA.02 Apply the properties of operations to solve multiplication and division equations and justify thinking. • MAT-04.AR.OA.02 Identify and apply the properties of operations for addition, subtraction, multiplication, and division and justify thinking. • MAT-05.AR.OA.02 Analyze problems using the order of operations to solve and evaluate expressions while justifying thinking. • MAT-06.AR.EE.03 Identify when two expressions are equivalent. Apply the properties of operations to generate equivalent expressions. • MAT-07.AR.EE.01 Apply properties of operations as strategies to add, subtract, factor, and expand linear expressions involving variables, integers, and/or non-negative fractions and decimals with an emphasis on writing equivalent expressions. • MAT-08.AR.EE.05 Solve linear equations with rational number coefficients and variables on both sides, including equations that require using the distributive property and/or combining and collecting like terms. Interpret the number of solutions. Give examples of linear equations in one variable with one solution, infinitely showing solutions or no solutions. • MAT-09.AR.05 Justify each step in solving a linear equation that may or may not have a solution. » MAT-04 Standards
{"url":"https://learnbps.bismarckschools.org/mod/glossary/showentry.php?eid=49185&displayformat=dictionary","timestamp":"2024-11-11T18:07:56Z","content_type":"text/html","content_length":"51043","record_id":"<urn:uuid:598b2cc4-2438-4bb6-95a7-4d646d07d69b>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00291.warc.gz"}
Calculating Electric Potential on a Nonconducting Rod (2024) • Forums • Homework Help • Introductory Physics Homework Help • Thread starterDODGEVIPER13 • Start date ElectricElectric potentialPotentialRod In summary, you were trying to determine the potential at a point on a nonconducting rod by summing the potentials of all the charges at various distances from the origin. You plugged in the value for k and found that it was 2klamdaL. You then plugged in the answer for L and found that the potential at point P was 2klamdaL(log(sqrt(2L^2)+L)). Homework Statement A nonconducting rod of length L = 6.00 cm and uniform linear charge density λ = +2.28 pC/m. Take V = 0 at infinity. What is V at point P at distance d = 8.00 cm along the rod's perpendicular heres the diagram best I could show it: Homework Equations lamda = charge/L The Attempt at a Solution What I did was this charge = L(lamda) the I said ∫kq/d ds which equals ∫klamda(L)/d * cos(theta) ds (where cos(theta)=d/sqrt(d^2+(L/2)^2)) so 2klamdaL∫1/sqrt(d^2+(L/2)^2) from 0 to L/2. which gives me 2klamdaL(log(sqrt(2L^2)+L) when I plug in the answer is incorrect I think my integral set up is completley incorrect? Moderator note: Moved to the introductory physics forum. Last edited by a moderator: My diagram is screwed up the .P with the line I made should be right over the little line seperating the L/2 lines sorry. Are you saying your origin is in the middle of the rod or at the edge of the rod? You're kind of maybe on the right track. Remember that the basic concept is to make infinitesimal sums as you move along a charge distribution. Take a charge at a certain point from the origin and how that contributes to the potential, then move an infinitesimal amount and see how that new charge contributes. Staff Emeritus Science Advisor Homework Helper Gold Member DODGEVIPER13 said: Homework Statement A nonconducting rod of length L = 6.00 cm and uniform linear charge density λ = +2.28 pC/m. Take V = 0 at infinity. What is V at point P at distance d = 8.00 cm along the rod's perpendicular heres the diagram best I could show it: Homework Equations lamda = charge/L The Attempt at a Solution What I did was this charge = L(lamda) the I said ∫kq/d ds which equals ∫klamda(L)/d * cos(theta) ds (where cos(theta)=d/sqrt(d^2+(L/2)^2)) so 2klamdaL∫1/sqrt(d^2+(L/2)^2) from 0 to L/2. which gives me 2klamdaL(log(sqrt(2L^2)+L) when I plug in the answer is incorrect I think my integral set up is completley incorrect? Moderator note: Moved to the introductory physics forum. DODGEVIPER13 said: My diagram is screwed up the .P with the line I made should be right over the little line separating the L/2 lines sorry. Yes, no matter how many spaces you enter, the output only shows one space. Use the "Codes" ikon to show something closer to what you want. The "Codes box" actually displays in a mono-spaced font, so you still have to tweak it a bit. Use some editor, like "Notepad" to format it like you want it to appear, or simoly count characters. .P | | | | <+++++++++++++++++++++++++++> <------L/2---|------L/2-----> Ok so a test charge is positive and potential increase when moving close to a postive charge and I assume what you were saying is that from this I should be able to determine that is increasing at that infinitesmal movement toward that positive charge. but what does this tell me about how to set up the inetgral also the origin is in the middle. Okay, now I know the geometry a bit better I can probably help more specifically. You've got your origin in the middle of the line and you are looking at a point above it, that's good because you've got some symmetry. Yeah, the idea is basically that a charge a certain distance from the middle (either right or left) will contribute a potential, or could also be an electric field (the idea is similar), to the point above the middle. If you had a single point a distance x' away you'd have see if you can generalize that to an integral form Thanks for the assist I got it sorry for the length of time to reply. FAQ: Calculating Electric Potential on a Nonconducting Rod 1. What is electric potential? Electric potential is a measure of the electrical potential energy per unit charge at a given point in space. It is also known as voltage. 2. How is electric potential calculated? Electric potential is calculated by dividing the electric potential energy by the amount of charge present at a given point. It is measured in volts (V). 3. What is the electric potential over a rod? The electric potential over a rod is a measure of the electric potential at different points along the length of the rod. It takes into account the distribution of charge along the rod and can be calculated using the equation V= kQ/x, where k is the Coulomb constant, Q is the charge on the rod, and x is the distance from the rod. 4. What factors affect the electric potential over a rod? The electric potential over a rod is affected by the amount and distribution of charge on the rod, as well as the distance from the rod. It is also influenced by the presence of other charged objects or conductors in the surrounding space. 5. How does the electric potential change along the length of a rod? The electric potential over a rod typically decreases as the distance from the rod increases. This is because the electric potential energy decreases as the distance between charges increases. However, the exact change in electric potential along the length of a rod depends on the distribution of charge along the rod and any external factors affecting the electric field. Similar threads MIT OCW, 8.02 Electromagnetism: Potential for an Electric Dipole Calculating eletric potential using line integral of electric field Potential due to a rod with a nonuniform charge density Rod with Current on Rails in Magnetic Field Disk with rod attached rotating about the center of the disk Potential and Electric Field near a Charged CD Disk Find electric field a distance P from a charged rod Electric Potential inside an insulating sphere How do you calculate Electric Potential Energy in a Square? Potential difference between two points in an electric field • Forums • Homework Help • Introductory Physics Homework Help
{"url":"https://windypointhouse.com/article/calculating-electric-potential-on-a-nonconducting-rod","timestamp":"2024-11-12T23:48:55Z","content_type":"text/html","content_length":"81579","record_id":"<urn:uuid:6b21d60d-27e1-44d5-a572-fb1c59b799af>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00380.warc.gz"}
CBSE Class 8 Maths Revision Notes – Free Download - CoolGyan CBSE Class 8 Maths Revision Notes CBSE Class 8 Maths Notes is available with CoolGyan in PDF download format and are updated as per the syllabus. These are made compatible with complete exam preparation. Focusing on this aspect, we offer Class 8 Maths Notes that are equipped with various shortcut techniques and step by step methods, to solve problems. These are prepared by our experts at CoolGyan, who has years of experience. Consequently, with our 8th Maths Notes, students can acquire substantial knowledge about the rudiments of all such chapters. CoolGyan is a platform that provides free NCERT Solution and other study materials for students. Students can register and get access to the best and reliable source of study materials specially made by master teachers at CoolGyan. Subjects like Science, Maths, English will become easy to study if you have access to NCERT Solution for Class 8 Science, Maths solutions and solutions of other Chapter wise Revision Notes for Class 8 Maths CBSE Class 8 Maths Chapter wise Notes To establish a strong foothold in this competitive era, the CBSE curriculum has carved out a well-thought syllabus for students of all classes. For Standard 8 students, they have presented subject content that will – Accelerate their knack of practising sums – Boost their interest to pursue maths in their higher studies The chapters included in the syllabus are enumerated below: 1. Rational Numbers Notes 2. Linear Equations in One Variable Notes 3. Understanding Quadrilaterals Notes 4. Practical Geometry Notes 5. Data Handling Notes 6. Squares and Square Roots Notes 7. Cubes and Cube Roots Notes 8. Comparing Quantities Notes 9. Algebraic Expressions and Identities Notes 10. Visualising Solid Shapes Notes 11. Mensuration Notes 12. Exponents and Powers Notes 13. Direct and Inverse Proportions Notes 14. Factorisation Notes 15. Introduction to Graphs Notes 16. Playing With Numbers Notes Students to gain all-encompassing guidance on the above-mentioned chapters can resort to our Class 8 Maths Notes, furnished with a qualitative explanation of the theories and properties instrumental in solving sums.
{"url":"https://coolgyan.org/revision-notes/cbse-class-8-maths-notes/","timestamp":"2024-11-03T15:38:10Z","content_type":"text/html","content_length":"87261","record_id":"<urn:uuid:85717ee6-2b92-4f3b-94df-993b792a24e8>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00255.warc.gz"}
t Distribution The t distribution is a bell-shaped distribution centered at the value 0. For large degrees of freedom, it looks very similar to the standard normal distribution, but for small degrees of freedom, it has wider tails. Explore the shape of the t distribution by changing the degrees of freedom. Check the box to see how it compares to the standard normal distribution. Download Graph
{"url":"https://dcmpdatatools.utdanacenter.org/tdist/","timestamp":"2024-11-02T15:43:26Z","content_type":"text/html","content_length":"13889","record_id":"<urn:uuid:98b746ea-8372-417d-b9a5-d9d03e0b707c>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00895.warc.gz"}
How many fixed points do involutions have? It's a fairly well-known fact that if we pick a permutation of the set [n] at random, then the expected number of cycles of length k in that permutation is 1/k, for 1 ≤ k ≤ n. (Somewhat more memorably, the average number of elements in a permutation of [n] which are of a k-cycle is 1.) So what would you expect to happen if you just considered permutations have cycles of length 1 and 2 -- that is, involutions -- and sampled uniformly at random from them? If you're like me, your first thought is that the average involution will have twice as many 1-cycles as 2-cycles, and the same number of elements in 1-cycles as 2-cycles -- that is, the average involution on [n] will have n/2 1-cycles (i. e. fixed points) and n/4 2-cycles, for a total of n/2 fixed points and n/2 elements in 2-cycles. But then you look at "n/2 fixed points" and think that that seems awfully large... it turns out the average number of fixed points of an involution chosen uniformly at random from all involutions is about n . This follows from standard generating function arguments. The exponential generating function of the number of involutions marked for their number of fixed points is exp(uz+z /2); that is, the coefficient of z in that function is n! times the number of involutions on [n] with k fixed points. Standard methods from, say, Flajolet and Sedgewick (which I will probably buy when it comes out in print later this year, because I seem to cite it constantly) gives that the expected number of fixed points is and this can actually be rewritten as na , where a is the number of involutions on [n], that is, is the probability that any given element of an involution is actually a fixed point -- although it's hard to say exactly why this should be true.) Then, if you're still like me, you think "alas, I have forgotten how to figure out the asymptotics of coefficients of entire functions". But the asymptotic number of involutions is the last example of Herbert Wilf's . After some work, the asymptotic formula he gives for a gives that the expected number of fixed points in an involution is n - 1/2 + o(1) Once you know that fixed points are rare, then it's not hard to guess that their distribution should be approximately Poisson, and thus variance should be of the same order of magnitude as the mean -- and the variance result turns out to be true. (I don't know about the Poisson result.) The variance is, I believe, n - 1 + o(1), although this is only from numerical evidence. (The generating-function way to calculate the variance relies on the definition of the variance as the mean of the square minus the square of the mean; this means I need better asymptotics in order to verify this. The better asymptotics are certainly achievable, but they're not at my fingertips.) The result is a bit surprising, though -- why does cutting out cycles of length 3 and greater so drastically change the relative numbers of 1-cycles and 2-cycles? But involutions make up a vanishingly small proportion of all permutations, and weird things can happen in these asymptotically negligible sets without the bulk of the population caring at all. 19 comments: "a_(n-1) / a_n is the probability that any given element of an involution is actually a fixed point -- although it's hard to say exactly why this should be true" Given one of the a_n involutions on [n], choose an element and ask "is this a fixed point?" Surely by symmetry it doesn't matter which point we choose--they all have the same probability of being a fixed point, so choose the last one, n. It is a fixed point iff the rest of the permutation, which is a permutation on [n-1], is an involution in its own right. Well, this happens a_(n-1) times out of the a_n, hence the probability a_ (n-1) / a_n. thanks. That's basically what I wanted to say, although for some reason words failed me yesterday. Random thought: is there a relationship between the n^{1/2} that shows up here and that which shows up in the "birthday paradox"? not as far as I know. But I don't have any reasonable explanation for why it should be true yet, other than that that's what the generating functions say. I'm looking for one. It just seems that there's some similarity, in that you're looking for pairs which satisfy some condition together. Here is a simple argument that we should expect a random involution of n points to have about n^{1/2} fixed points: Recall (or prove) that the number of fixed point free involutions of 2n elements is (2n-1)(2n-3)(2n-5)...5*3*1. So the number of involutions of an n element set which have k fixed points is a_{k,n}=(n choose k) *(n-k-1)(n-k-3)(n-k-5)...5*3*1 where k must have the same parity as n. Then a_{k,n}/a_{k-1,n}=(n-k)/k(k-1) So a_{k,n} increases with k as long as (n-k)>k(k-1) and decreases with k thereafter. Rearranging n-k > k(k-1), we get n>k^2. We should be able to use Stirling's formula to work out the limiting distribution of the number of fixed points. I think that I don't believe your claim about the variance. I haven't been completely rigorous, but I think the variance should be of order n^{1/4}, not n^{1/2}. Rough argument for why it should be less than n^{1/2}: First, correct the ratio computation above. In fact, r_{k,n}:=a_{k,n}/a_{k-1,n}=(n-k+2)/k(k-1). (This isn't important, but I'd be embarassed to keep carrying this error around.) Fix any constant c>0. I claim that, if k>(1+c)*n^{1/2} then a_{k,n}/a^{n^{1/2},n} decays rapidly in n. More precisely, a_{k,n}/a^{n^{1/2},n} is a product of about c n^{1/2}/2 terms of the form r_{j,n}, at least c n^{1/2}/4 of which are less than O(n/(1+c/2)^2 n). So a_{k,n}/a^{n^{1/2},n}=O((1+c/2)^{- c/2 n^{1/2}}). In particular, this decays faster than the reciprocal of any polynomial, so terms with this large k will, as n goes to infinity, contribute nothing to the variance (or, more generally, to any moment.) The same argument shows that there is no contribution from k<(1-c)n^{1/2}. In other words, in the limit probability distribution, everything is clumped around n^{1/2}, with a variance that is o(n^{1/2}). I have a more precise argument, explaining why I believe the variance should by n^{1/4}, but I am worried that I have made some dumb error. Does what I have done make sense so far? How good is your data? David Speyer I don't see an obvious flaw there. But I trust my data for now. (And it's pretty easy to generate -- just the second moment can similarly be obtained by differentiation. The exact asymptotic expansion is hard to get, of course, but coefficients of z^n in exp(z+z^2/2) arbitrarily far out are easy to find. Then again, your argument for o(n^{1/2}) variance could be correct with variance something like n^{1/2}/(log log n) which would be hard to detect when I've only calculated the first hundred or so coefficients of the generating function. At some point I'm going to actually do the computations I alluded to and write this and various other results of the same type up more formally; keep an eye out here. Thanks for your comment! Here's one intuitive reason why conditioning on being an involution gives so many more 2-cycles (which may also be able to made rigorous into an n^(1/2) proof): Let us generate our random involution as follows. 1. Generate a permutation sigma uniformly at random. 2. Expose the cycle containing n. If that cycle has length 3 or more, give up and start again from the step 1. 3. Otherwise, expose the rest of the cycles. If you have an involution, keep sigma. Otherwise, go back to step 1. After step 2, we're equally likely to have placed n in a 2 cycle as in a 1 cycle. However, we still need to pass step 3. If n is in a cycle of length 1, we keep sigma in this final step as often as a random permutation on n-1 elements is an involution. If n is in a cycle of length 2, we keep sigma as often as a random permutation on n-2 elements is an involution. Involutions become significantly more rare as n increases, so a permutation on n-2 elements is much more likely to be an involution than one on n-1 points. Correspondingly, we keep many more of the permutations with n in a 2 cycle than with n in a 1 cycle. OK, I've got some data now and it is -- inconclusive. I can do values of n around 1000 in basically no time; the largest that I have done is 4096. It is definitely true that the distribution is peaked almost exactly at n^{1/2}. It also looks like a bell curve. I haven't got the code somputing standard deviations yet, but visually it looks like n^{1/2}. So this sounds kind of like a bell curve with center and variation n^{1/2}. But that is impossible. We know that the probability is precisely zero of having a negative number of fixed points. I'll probably put some pictures and data up online once I clean it up. David Speyer` the standard deviation can't be n^{1/2}, of course -- that would mean that none of the mass of the distribution is more than one standard deviation below the mean. The only distributions like that, if I remember correctly, are distributions with all the mass at two points; certainly a normal distribution (which I suspected this is even without seeing your data, because that's just how a lot of these things work) won't be of that sort. But the variance (the square of the standard deviation) could be n^{1/2}, as I conjectured in the original post, making the standard deviation n^{1/4}. Then we would have from the non-negativity of the number of fixed points that none of the mass of the distribution is more than n^{1/4} standard deviations below the mean, which is a reasonable thing to say since the result is true in some asymptotic sense as n goes to infinity. OK, some numerical data. When n=1024, 2500, 3025, 4096 the mean of k is 31.5116, 49.5074, 54.5068, 63.5058 and the std. dev is 31.0193, 49.0124, 54.0113, 63.0097 That's a great fit for a standard deviation of n^{1/2}. But all the pictures look like bell curves! What the heck? David Speyer Oh, you're right, I'm dumb :). The data below is variances, not standard deviations. So the standard deviations are n^{1/4}, even though the bell curves look much wider by eye. OK, now I think we are saying the same thing. Mean=n^{1/2}, Std. Dev. = n^{1/4}. The only thing we disagree on is that I think we are looking at a normal distribution and you think it is Poisson. Let me put up a picture and see if I convince you. Check out this histogram depicting the probabilities of various numbers of fixed points in a random involution of 4096 elements. David S. The Poisson distribution tends to a normal distribution (after proper normalization) as the parameter tends to infinity, so you and Isabel are at a complete agreement. All that is left is to indeed prove that it's Poisson. I haven't thought about it, but my guess is that it shouldn't be too hard. It was rather interesting for me to read the blog. Thank author for it. I like such topics and anything that is connected to this matter. I definitely want to read more soon. Keep on posting such themes. I love to read blogs like this. BTW add more pics :) Good brief and this enter helped me alot in my college assignement. Gratefulness you seeking your information. I am regular visitor of this website[url=http://www.weightrapidloss.com/lose-10-pounds-in-2-weeks-quick-weight-loss-tips].[/url]Plenty of useful information on godplaysdice.blogspot.com. Do you pay attention towards your health?. Let me present you with one fact here. Recent Scientific Research points that closely 80% of all U.S. grownups are either fat or weighty[url=http:// www.weightrapidloss.com/lose-10-pounds-in-2-weeks-quick-weight-loss-tips].[/url] Hence if you're one of these individuals, you're not alone. Infact many among us need to lose 10 to 20 lbs once in a while to get sexy and perfect six pack abs. Now next question is how you can achive quick weight loss? You can easily lose with with little effort. If you improve some of your daily diet habbits then, its like piece of cake to quickly lose weight. About me: I am webmaster of [url=http://www.weightrapidloss.com/lose-10-pounds-in-2-weeks-quick-weight-loss-tips]Quick weight loss tips[/url]. I am also health expert who can help you lose weight quickly. If you do not want to go under painful training program than you may also try [url=http://www.weightrapidloss.com/acai-berry-for-quick-weight-loss]Acai Berry[/url] or [url=http:// www.weightrapidloss.com/colon-cleanse-for-weight-loss]Colon Cleansing[/url] for fast weight loss.
{"url":"https://godplaysdice.blogspot.com/2008/02/how-many-fixed-points-do-involutions.html?showComment=1202236860000","timestamp":"2024-11-10T21:38:35Z","content_type":"text/html","content_length":"79664","record_id":"<urn:uuid:1a63655d-43e0-48ba-bae9-37ff1e1a6ee1>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00869.warc.gz"}
What is Ferranti Effect in Power Transmission Lines? Alternating CurrentOverhead LinesPower System Ferranti Effect – Causes, Circuit Diagram, Advantages & Disadvantages Ferranti Effect in Power Lines: Circuit Diagram, Causes, Effects, Controlling, Advantages and Disadvantages As we know that electricity is generated at power generation plants using huge electromechanical generators by conversion from other forms of energy. This electrical energy is then transmitted over a long-distance transmission line to end users. The electrical power transmission lines require multiple safety devices and components to ensure the safety of the connected loads and personnel as well as maximize the efficiency of the power transmission and distribution system. The transmission line faces various kinds of losses and phenomena that affect its efficiency. One such phenomenon that greatly impacts the transmission line is Ferranti Effect. Generally, we assume that the voltage always drops in the transmission lines due to the line losses. Ferranti Effect causes the receiving voltage to be greater than the sending voltage in a long-distance transmission line that has a very light load or no load at all. This article explains the causes, benefits, drawbacks, etc. of the Ferranti effect. What is Ferranti Effect? The Ferranti effect is a phenomenon in which the voltage at the receiving end (load side) is greater than the voltage at the sending end (source or generating side) of a long transmission line or cable during light load or no load conditions. The rise in voltage is due to more reactive power being generated by the line capacitance in the power lines than the power being consumed. The Ferranti effect was discovered by British electrical engineer Sebastian Ziani de Ferranti in 1887. He observed a rise in the voltages at certain points in the London power system. He observed that the medium to long transmission power cables having light load or no loads has greater receiving voltage Vr than the sending voltage Vs. it does not occur in short transmission lines. In short, the receiving end voltage of an unloaded power line being greater than the sending end voltage due to the line capacitance is known as Ferranti effect. Related Posts: What Causes Ferranti Effects? Ferranti Effect mainly occurs due to the presence of a huge charging current due to the capacitance of the transmission line. Although different factors affect the current in the transmission line. However, Ferranti Effect occurs due to the following three reasons; • Transmission line capacitance • Load at the receiving end • Supply Frequency Transmission Line Capacitance The conductors in the transmission line are placed in closed proximity, especially in underground cables that develop capacitance between them. In fact, the transmission cable constitutes many shunt capacitors and series inductors equally distributed along the length of the cable. The capacitance increases with an increase in the length of the transmission line. The capacitors draw a large amount of charging current that flows through the whole length of the line. The capacitor generates reactive power that flows in opposite direction toward the source. The inductors in the line consume the reactive power causing a voltage drop across them. The voltage drop is in-phase with the sending voltage. Therefore the voltages add up and the receiving voltage is increased. The Ferranti effect depends on the length of the transmission line and the type of cable being used. Here is a table chart showing details. Type of Transmission Length of Transmission Nature of Impedance Ferranti Reasons line line Effect Short transmission line Short transmission lines have very small negligible capacitance, thus negligible charging current and No Overhead Resistive and inductive No Ferranti effect. < 50 miles or 80 km Medium transmission Resistive, inductive and Medium transmission lines have a high capacitance that develops a large charging current and causes Ferranti Overhead 50 to 100 miles capacitive Yes effect. 80 to 160 km Long transmission line Overhead Resistive, inductive and Yes Long transmission lines have very high capacitance and very large charging current thus Ferranti effect is more > 100 miles or 160 km capacitive prominent. Underground Multicolor cables Resistive, inductive and Yes Cables include conductor in very close proximity to those of overhead lines that develop very high charging capacitive current and Ferranti effect. Related Posts: Connected Load The Ferranti effect also depends on the load connected at the receiving end. The load can be in either three conditions • No load • Light load • Full load Under no load condition, there is only a charging current and no load current flowing through the transmission line. The charging current is drawn by the shunt capacitors in the line. It generates reactive power that causes a voltage drop across the inductor that is in-phase with the sending voltage and increases the voltage at the receiving end. When light load is connected, there is a very low load current as compared to the charging current flowing through the lines. Due to the line capacitance, the charging current is leading in nature. The capacitor generates reactive power that flows through the inductors which is greater than the reactive power consumed by the inductors due to low load current. The voltage drop across the inductors is almost in phase with the source voltage and it is proportional to the charging current. As the charging current is higher than the load current, the Ferranti effect occurs. In full load conditions, the load current is higher than the charging current drawn by the capacitor (it is almost constant). Since a large load current is flowing through the series inductors, the reactive power consumed by the inductor is larger than the reactive power generated by the capacitor. Therefore the net reactive power is negative and the voltage decreases at the receiving end. Supply Frequency As we know that the Ferranti effect occurs due to the reactive power generated in the shunt capacitance of the power lines. However reactive power only occurs if supply voltage and current has some frequency. Since DC has zero frequency, there is no Ferranti effect. We can say that transmission lines that operate at high frequency are more prone to the Ferranti effect. Related Posts: Circuit Diagram, Phasor Diagram & Equation for Ferranti Effects Let’s consider the equivalent circuit diagram of a long transmission line. Since a long transmission line is composed of high capacitance and inductance distributed throughout the whole length of the line, this diagram represents the parameters per kilometer of length. Therefore the capacitance and inductance are proportional to the length of the line. The shunt capacitors are in parallel while the inductors are in series with the power lines as shown below. The parameters in the given circuit diagram are • V[s] = Sending or source Voltage (Generating End) • V[r] = Receiving Voltage • I[s] = Sending or Source Current • I[r] = Receiving Current • I[c] = Capacitive or Charging Current • R = Resistance of line • X[c] = Capacitive reactance of the line • X[L] = inductive reactance of the line • C = Capacitance of line • L = inductance of the line Let’s look at the phasor diagram of the given circuit. During the Ferranti effect, there is no load therefore Receiving current I[r ]= 0 I[s] = I[c] The receiving voltage Vr is taken as reference OA where the capacitive current Ic is represented by perpendicular line OD leading the Vr by 90°. The capacitive current Ic has a voltage drop across the line resistance R and line inductance L given by Voltage drop across resistor = I[c]R = represented by AB Voltage drop across inductor = I[c]X[L ]= represented by BC The inductive voltage drop I[c]X[L ](BC) leads the resistive voltage drop I[c]R (AB) by 90°. Where the sending voltage Vs is the sum of all voltage drops + receiving voltage represented by OC. V[s] = V[r] + Resistive voltage drop + Inductive voltage drop V[s] = OA + AB + BC = OC As shown in the phasor diagram, the receiving voltage Vr at the load side is greater than the sending voltage Vs at the source side. Now let’s derive the equation for the Ferranti effect using the equivalent circuit diagram of the transmission line. V[s] = V[r] + resistive drop + inductive drop V[s] = V[r] + I[c]R + I[c]X[L] V[s] = V[r] + I[c] (R + X[L]) Since capacitive current, I[c] = jwCV[r] V[s] = V[r] + jwCV[r] (R + X[L]) Since X[L ]= jwL V[s] = V[r] + jwCV[r] (R + jwL) V[s] = V[r] + jwCV[r]R + j^2w^2CLV[r] V[s] = V[r] + jwCV[r]R – w^2CLV[r] V[s] – V[r] = jwCV[r]R – w^2CLV[r] In long transmission lines, the line resistance is much smaller than the line reactance. Therefore the resistance R, as well as the resistive voltage drop is neglected V[s] – V[r] = -w^2CLVr Now assume the capacitance and inductance per km of the length are Co and Lo respectively and the length of the transmission line is l. The equation becomes V[s] – V[r] = -w^2(C[o]l)(L[o]l)V[r] V[s] – V[r] = -w^2l^2 C[o]L[o]V[r] Since the line capacitance is distributed throughout the whole length (l) of the transmission line, the charging current, as well as the voltage drop associated with it, is taken as average. V[s] – V[r] = – (1/2) w^2l^2 C[o]L[o]V[r] The voltage difference between the sending and receiving voltage is negative which means the voltage rises. And it is directly proportional to the square of frequency (w), the square of line length (l). This equation proves that the Ferranti effect increases with an increase in the length of the transmission line and supply frequency. Therefore small transmission lines and HVDC transmission are not affected by the Ferranti effect. Related Posts: How to Reduce Ferranti Effect? Ferranti effect creates voltage instability in the electrical system that creates a hazardous situation for equipment and personnel at the load side. There are certain measures taken to minimize the Ferranti effect. Shunt Reactor Ferranti effect occurs due to the generation of reactive power in the power system and there are no loads to absorb this reactive power. We need to install something in the transmission line that can absorb this excessive reactive power. A shunt reactor can absorb this reactive power by installing it in the transmission line. It is usually installed at the load end. A medium transmission line requires the shunt reactor to be placed at the receiving end. However long transmission lines require it to be installed at a periodic distance or in the middle of the line. Whereas underground cables have very high capacitance and require relatively very short distance of around 10 miles or 15 km intervals. Load Management Ferranti effect occurs when there is no or light load connected with the lines. To reduce the Ferranti effect, it must fulfill the given condition. Load current > Charging current. The loads in a transmission line should be monitored continuously and the load must be ensured to be kept above the limit. It can be done by placing multiple light load lines onto a single line. Advantages & Disadvantages of Ferranti Effects The Ferranti effect does not have many advantages but disadvantages. However, some of them are given below. Reduced Copper losses The copper loss is the power loss in the lines due to the current flowing through them. It appears in the form of heat. It depends on the amount of current flowing through the line and is given by Copper loss = I^2R As the charging current I[c] is leading in nature and the inductive current I[L] is lagging. The vector sum of both current cancel each other and decrease the load current to some extent. In no or light load conditions, the capacitive current is greater than the inductive current. Therefore the net load current I[net] phase is more inclined towards the capacitive current. Since the load current is decreased, the copper losses also decrease. Power Factor Improvement Power factor occurs due to the phase angle of the capacitive current and inductive current. As the capacitive current reduces the effect of the inductive current. Therefore the net current is mostly composed of resistive current which improves the power factor. The Ferranti effect has many disadvantages such as: Over Voltage The Ferranti effect increases the sending voltage when it is received at the load end of the transmission line. The increase in voltage can damage any load connected with the system. Since every load is designed to operate at a nominal voltage. Overvoltage can damage insulation and drastically reduce the lifetime of equipment and it can permanently damage the load. Voltage Regulation It increases the voltage regulation which must be near zero. It reduces the efficiency of the transmission line by reducing the amount of load current flowing through it. The charging current takes most of the ampacity of the line conductor thus reducing its Related Posts:
{"url":"https://www.electricaltechnology.org/2022/10/ferranti-effect.html","timestamp":"2024-11-08T15:35:07Z","content_type":"text/html","content_length":"359231","record_id":"<urn:uuid:e8a2dd31-3200-4bac-a122-f80d5abdd482>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00769.warc.gz"}
Normalization Techniques In the complex landscape of deep learning, normalization techniques have emerged as pivotal components in enhancing the performance and stability of neural networks. These methods primarily address the challenge of internal covariate shift—a phenomenon where the distribution of activations changes during training—by adjusting the scale and variance of input data or activations at various layers. The benefits of normalization are manifold: from accelerating convergence and reducing training time, to enhancing generalization and mitigating the infamous vanishing or exploding gradient issues. With a plethora of normalization techniques available today, ranging from the widely-adopted Batch Normalization to the more niche Instance Normalization, there's a growing necessity for practitioners to understand their intricacies, strengths, and appropriate use-cases. Whether you're designing a state-of-the-art vision model or a style transfer algorithm, the choice of normalization can be crucial to achieving desired results. This article delves into the diverse world of normalization, illuminating the principles, applications, and nuances of each technique. As we unpack these methods, we'll gain insights into their transformative impact on deep learning models and the ever-evolving journey of making neural networks more efficient and robust. Batch Normalization (BN) In the vast realm of deep learning, Batch Normalization (BN) has secured its position as a seminal technique, revolutionizing the way we train deep neural networks. Introduced by Sergey Ioffe and Christian Szegedy in 2015, the primary motivation behind BN was to address the challenge of internal covariate shift. This phenomenon refers to the changing distributions of internal node activations during training, which can result in slower convergence and necessitate the use of lower learning rates. Benefits of Batch Normalization: • Improved Convergence: BN allows the use of higher learning rates, accelerating training. • Mitigates Vanishing/Exploding Gradients: By maintaining activations within a certain scale. • Acts as a Regularizer: In some cases, BN reduces or even eliminates the need for dropout. • Less Sensitivity to Weight Initialization: Training becomes less finicky about the initial weights. Limitations and Considerations: • Dependency on Batch Size: Very small batch sizes can make the mean and variance estimates noisy, potentially destabilizing training. • Performance Overhead: The normalization process adds computational complexity, potentially impacting training and inference time. In conclusion, while BN has been instrumental in advancing deep learning, it's crucial to understand its workings, benefits, and limitations to leverage it effectively in various applications. Whether you're training a simple feed-forward network or a complex deep model, Batch Normalization can often be the catalyst for efficient and stable training. Divisive Normalization (DN) In the diverse world of neural network normalization techniques, Divisive Normalization (DN) stands out as a biologically inspired method, rooted in observations from primary visual cortex neurons. Unlike many other normalization techniques that have been proposed primarily for improving deep learning models, DN finds its origins in computational neuroscience, where it was used to model responses of visual neurons. Benefits of Divisive Normalization: • Biological Relevance: It closely models observations from the primary visual cortex. • Contrast Gain Control: DN can adjust neuron responses based on the contextual activity, making the model robust to varying input statistics. • Potential for Improved Generalization: Especially in tasks related to visual perception and processing. Limitations and Considerations: • Computational Overhead: Calculating responses for every neuron based on its neighbors can be computationally intensive, especially for dense networks. • Parameter Choices: The influence weights w ij and the constant σ can impact the performance and stability of the normalization. In summary, Divisive Normalization offers an intriguing bridge between computational neuroscience and deep learning. While not as commonly used in mainstream deep learning architectures as techniques like Batch Normalization, DN provides an alternative perspective, drawing from the way natural neural systems operate. Understanding and experimenting with DN can not only open avenues for more biologically plausible models but also potentially harness the benefits of DN in achieving better and more robust neural network performance. Group Normalization (GN) Group Normalization (GN) is a compelling normalization technique that emerged in the context of the limitations posed by techniques like Batch Normalization, especially when dealing with small batch sizes. Proposed by Yuxin Wu and Kaiming He in 2018, GN segregates channels into groups and normalizes the features within each group, making it independent of batch size. Benefits of Group Normalization: • Batch Size Independence: GN operates consistently regardless of the batch size, making it particularly useful for tasks where batch size flexibility or very small batch sizes are needed. • Stable Training: Offers stable training dynamics even without the need for extensive hyperparameter tuning. • Simpler Scaling: Helps in scaling up models and architectures without needing to reconsider normalization strategy. Limitations and Considerations: • Group Number Sensitivity: The choice of the number of groups G can influence the performance. For instance, when G=1, GN becomes Layer Normalization, and when G=C, it's equivalent to Instance • Possible Performance Overhead: Introducing groups can add slight computational complexity, especially for architectures with a large number of channels. In a nutshell, Group Normalization provides a robust alternative to techniques like Batch Normalization, particularly in scenarios where batch size becomes a constraint. It underscores the versatility of normalization techniques in adapting to different requirements and challenges in deep learning. Given its merits, GN should be a strong consideration for deep learning practitioners looking to optimize training stability and performance. Instance Normalization (IN) As deep learning models began expanding their horizons beyond traditional tasks, newer normalization techniques emerged to cater to specific needs. Instance Normalization (IN) is one such technique, gaining prominence in the domain of style transfer and generative models. Unlike Batch Normalization, which normalizes across a batch, or Group Normalization that normalizes across grouped channels, IN focuses on normalizing individual instances independently. Benefits of Instance Normalization: • Improved Style Transfer: IN has been pivotal in achieving high-quality results in style transfer algorithms, as it tends to standardize the contrast of features across instances. • Stability in Generative Models: Especially in Generative Adversarial Networks (GANs), IN can help in achieving stable and consistent generation. • Independence from Batch Statistics: Unlike Batch Normalization, IN's performance isn't tethered to batch size or the data distribution within a batch. Limitations and Considerations: • Limited Utility Outside Specific Domains: While IN shines in tasks like style transfer and certain generative models, it might not be the best choice for all deep learning applications, especially conventional tasks like classification. • Loss of Inter-Instance Information: Since normalization is done per instance, any relational information between instances in a batch is disregarded during the normalization process. To conclude, Instance Normalization is a testament to the adaptability of normalization techniques to the evolving challenges and domains of deep learning. While its applicability might be niche compared to some other normalization methods, in the realms where it excels, it has proven indispensable. For practitioners working on style transfer, generative models, or similar tasks, understanding and utilizing IN can be the key to achieving superior results. Layer Normalization (LN) In the myriad of normalization techniques tailored to optimize deep learning performance, Layer Normalization (LN) emerges as a method that’s both versatile and less dependent on batch dynamics. Distinct from techniques like Batch Normalization, which computes statistics across a batch, or Instance Normalization, which normalizes individual instances, LN operates across inputs of a layer for a single instance. Benefits of Layer Normalization: • Batch Independence: LN is independent of the batch size, making it beneficial for varying batch sizes and for scenarios where using large batches is computationally infeasible. • Consistent Training Dynamics: LN provides stable training even when there are changes in data distribution or batch dynamics. • Versatility: LN can be applied to various types of layers, including recurrent layers, making it especially useful in recurrent neural networks (RNNs) where Batch Normalization might be tricky to Limitations and Considerations: • No Inter-Instance Regularization: LN only offers intra-instance normalization, meaning there’s no regularization effect between different instances in a batch, which is something Batch Normalization offers. • Potential Suboptimal Performance: In some architectures or tasks, especially where inter-instance statistics are important, LN might not perform as well as other normalization techniques. In essence, Layer Normalization is a potent tool in the deep learning toolkit, providing a batch-independent normalization strategy that can be pivotal in specific scenarios, especially in sequence-based models like RNNs and LSTMs. As with all normalization techniques, understanding its strengths, limitations, and ideal application scenarios is key to harnessing its full potential. Switchable Normalization (SNorm) Amidst the abundance of normalization methods available for deep learning, Switchable Normalization (SNorm) offers a unique, flexible approach. Rather than being tied down to a single normalization strategy, SNorm is designed to adaptively leverage the strengths of multiple normalization techniques, namely Batch Normalization (BN), Instance Normalization (IN), and Layer Normalization (LN). Benefits of Switchable Normalization: • Adaptive Learning: SNorm adjusts to the specific requirements of the data and task by learning the ideal normalization strategy. • Versatility: By blending different normalization techniques, SNorm can be suitable for a broader range of applications. • Robustness: With its adaptive nature, SNorm can potentially handle varying data distributions and training dynamics more gracefully than static normalization methods. Limitations and Considerations: • Increased Complexity: SNorm introduces additional parameters, potentially increasing the complexity of the model. • Computational Overhead: Computing statistics for multiple normalization methods and adjusting weights might introduce some computational overhead, especially in deeper networks. In summation, Switchable Normalization is a reflection of the evolving landscape of deep learning optimization techniques. By dynamically adjusting to the best normalization strategy, it offers a flexible and potentially more robust solution. For practitioners exploring advanced optimization strategies, SNorm can be a valuable addition to the repertoire, particularly in scenarios where traditional normalization methods might fall short. Spectral Normalization (SN) Deep learning, as a field, continually devises innovative techniques to ensure stability, especially in the training of models. Spectral Normalization (SN) stands out as a prominent technique that's primarily aimed at stabilizing the training of Generative Adversarial Networks (GANs) by constraining the Lipschitz constant of the model's layers. However, its applications have also been realized in other deep learning contexts beyond just GANs. Benefits of Spectral Normalization: • Stabilized GAN Training: GANs are notoriously challenging to train due to issues like mode collapse and vanishing gradients. SN helps mitigate these problems by ensuring more stable gradients. • Improved Model Generalization: Beyond GANs, applying SN in other deep learning models has shown potential in reducing overfitting and improving generalization. • No Need for Weight Clipping: In some GAN setups, weights are often clipped to ensure stability. With SN, such manual interventions are unnecessary as the normalization process inherently bounds the weights. Limitations and Considerations: • Computational Overhead: Computing the largest singular value can be computationally intensive, especially for large matrices. However, in practice, power iteration methods are used to approximate this value efficiently. • Hyperparameter Sensitivity: Like many techniques in deep learning, the effectiveness of SN can sometimes hinge on the right choice of hyperparameters. To wrap up, Spectral Normalization emerges as a powerful technique, particularly for stabilizing the notoriously capricious training dynamics of GANs. However, its utility doesn't stop there; it's a tool that has broader implications for deep learning, with potential benefits for a range of architectures and tasks. As with any advanced technique, a nuanced understanding and careful application are key to maximizing its benefits. Weight Normalization (WN) In the quest to enhance the optimization landscape for deep learning, several techniques have been proposed to combat issues such as slow convergence and poor initialization. Weight Normalization (WN) is one such technique, introduced as an alternative to the more commonly known Batch Normalization. While both techniques aim to improve training dynamics, their methodologies and primary motivations differ. Benefits of Weight Normalization: • Faster Convergence: WN can lead to faster training convergence compared to networks without normalization, and sometimes even faster than those with Batch Normalization. • Independence from Batch Size: Unlike Batch Normalization, the performance of WN doesn’t hinge on the batch size, making it advantageous in situations where batch sizes need to be variable or small due to memory constraints. • Simplicity and Reduced Overhead: WN is computationally simpler than Batch Normalization as it doesn’t require maintaining running statistics or extra operations during the forward and backward • Enhanced Stability: By focusing on the direction of the weight vectors, WN can provide a more stable training trajectory, especially when used with adaptive learning rate methods. Limitations and Considerations: • Not a Panacea: While WN can offer benefits in many scenarios, it's not always superior to other normalization methods in all contexts. The choice between WN, Batch Normalization, and other techniques often depends on the specific task and data at hand. • Potential for Suboptimal Solutions: In some situations, the reparameterization introduced by WN might lead the optimizer to converge to suboptimal solutions. In summary, Weight Normalization emerges as a compelling normalization technique, especially when faster convergence and reduced computational overhead are of prime importance. However, as with any technique in deep learning, it’s crucial to understand its intricacies and consider the problem context before deploying it. For many practitioners, WN offers a simpler, yet effective alternative to some of the more complex normalization methodologies. In the dynamic landscape of deep learning, normalization techniques have risen as pivotal tools to stabilize training, enhance convergence rates, and improve model performance. Each of the discussed methods offers unique benefits and potential challenges: • Batch Normalization (BN), with its focus on normalizing activations across mini-batches, has revolutionized training dynamics in deep networks, making deeper models feasible and improving • Divisive Normalization (DN), inspired by neuroscience, addresses intra-layer dependencies, ensuring that neurons aren't overly influenced by a small subset of strong activations. • Group Normalization (GN) strikes a middle ground between instance-based and batch-based methods, bringing in the best of both worlds, especially when working with smaller batch sizes. • Instance Normalization (IN) finds its strength in style transfer and generative tasks, emphasizing the distinct features of individual data instances. • Layer Normalization (LN) offers intra-instance normalization, ensuring consistent activations across features or neurons of a layer for each specific instance, making it particularly suitable for recurrent networks. • Switchable Normalization (SNorm) stands out as a flexible approach, learning to adaptively combine multiple normalization strategies, ensuring optimal contributions from each based on the data and task at hand. • Spectral Normalization (SN) zeroes in on the stability of training, especially for Generative Adversarial Networks, by normalizing the weight matrix of a layer using its largest singular value, which has implications for model stability and generalization. • Weight Normalization (WN), by decoupling the length and direction of weight vectors, provides a consistent optimization landscape leading to potentially faster convergence and reduced computational overhead. In wrapping up, the choice of a normalization technique isn't a one-size-fits-all decision. It hinges on the architecture in use, the nature of the data, the specific problem at hand, and the computational constraints. As the field of deep learning evolves, these techniques underscore the importance of understanding and adjusting the internal dynamics of neural networks. Informed choices among these methods can significantly impact training efficiency, model robustness, and overall performance. Kind regards
{"url":"https://schneppat.com/normalization-technique.html","timestamp":"2024-11-11T10:30:09Z","content_type":"text/html","content_length":"272938","record_id":"<urn:uuid:c3cc5a4f-e658-478a-b662-30c632392037>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00107.warc.gz"}
Efficient dual-sphere microphone-array design based on generalized sampling theory The generalized sampling expansion, introduced by Papoulis. facilitates the reconstruction of a band- limited signal that has been sampled at a rate slower than the Nyquist rate - provided that additional information about the signal is available in the form of the sampled outputs of known linear systems with the original signal at their input. In this work, the generalized sampling expansion, originally- developed for time-domain signals, is formulated for functions over the sphere, using the spherical harmonics transform. The paper presents the theory of the new expansion that has been developed for the equal-angle sampling scheme. In the second part of the paper, the theory is applied to the design of an efficient dual-sphere microphone-array. A known problem, when sampling sound fields using an open-sphere microphone-array, is the ill-conditioning at specific frequencies due to the nulls of the spherical Bessel function. To overcome this problem, a dual-sphere design, which requires twice as many microphones compared to a single-sphere design, has previously been proposed. Applying the generalized sampling theory developed here, it is shown that a dual-sphere design with half the number of samples at each sphere can replace a single sphere, but only if the two spheres are rotated relative to each other in a specific manner. Reconstruction of the sound pressure on the sphere is then possible without increasing the total number of microphones, while at the same time countering the effect of the nulls. ASJC Scopus subject areas • Acoustics and Ultrasonics Dive into the research topics of 'Efficient dual-sphere microphone-array design based on generalized sampling theory'. Together they form a unique fingerprint.
{"url":"https://cris.bgu.ac.il/en/publications/efficient-dual-sphere-microphone-array-design-based-on-generalize","timestamp":"2024-11-01T20:33:18Z","content_type":"text/html","content_length":"56981","record_id":"<urn:uuid:bbc4d97b-54a0-470f-8c89-6308230b5e8a>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00440.warc.gz"}
Christian Books by Joseph Herrin Judgment is Coming! The disciples asked Yahshua what would be the sign of His return and He declared there would be “signs in Sun and Moon and stars and upon the earth.” The signs are being given now, though few are observing them.
{"url":"https://www.heart4god.ws/judgment-is-coming.htm","timestamp":"2024-11-02T14:14:49Z","content_type":"text/html","content_length":"9879","record_id":"<urn:uuid:fcd11691-bf3b-4102-a511-af9f534ec366>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00050.warc.gz"}
General Aspects of Fitting Regression Models 2 General Aspects of Fitting Regression Models Regression modeling meets many analytic needs: • Prediction, capitalizing on efficient estimation methods such as maximum likelihood and the predominant additivity in a variety of problems □ E.g.: effects of age, smoking, and air quality add to predict lung capacity □ When effects are predominantly additive, or when there aren’t too many interactions and one knows the likely interacting variables in advance, regression can beat machine learning techniques that assume interaction effects are likely to be as strong as main effects • Separate effects of variables (especially exposure and treatment) • Hypothesis testing • Deep understanding of uncertainties associated with all model components □ Simplest example: confidence interval for the slope of a predictor □ Confidence intervals for predicted values; simultaneous confidence intervals for a series of predicted values ☆ E.g.: confidence band for \(Y\) over a series of values of \(X\) Alternative: Stratification • Cross-classify subjects on the basis of the \(X\)s, estimate a property of \(Y\) for each stratum • Only handles a small number of \(X\)s • Does not handle continuous \(X\) Alternative: Single Trees (recursive partitioning/CART) • Interpretable because they are over-simplified and usually wrong • Cannot separate effects • Finds spurious interactions • Require huge sample size • Do not handle continuous \(X\) effectively; results in very heterogeneous nodes because of incomplete conditioning • Tree structure is unstable so insights are fragile Alternative: Machine Learning • E.g. random forests, bagging, boosting, support vector machines, neural networks, deep learning • Allows for high-order interactions and does not require pre-specification of interaction terms • Almost automatic; can save analyst time and do the analysis in one step (long computing time) • Uninterpretable black box • Effects of individual predictors are not separable • Interaction effects (e.g., differential treatment effect = precision medicine = personalized medicine) not available • Because of not using prior information about dominance of additivity, can require 200 events per candidate predictor when \(Y\) is binary (van der Ploeg et al. (2014)) □ Logistic regression may require 20 events per candidate predictor □ Can create a demand for “big data” where additive statistical models can work on moderate-size data □ See this article in Harvard Business Review for more about regression vs. complex methods 2.1 Notation for Multivariable Regression Models • Weighted sum of a set of independent or predictor variables • Interpret parameters and state assumptions by linearizing model with respect to regression coefficients • Analysis of variance setups, interaction effects, nonlinear effects • Examining the 2 regression assumptions Symbol Meaning \(Y\) response (dependent) variable \(X\) \(X_{1},X_{2},\ldots,X_{p}\) – list of predictors \(\beta\) \(\beta_{0},\beta_{1},\ldots,\beta_{p}\) – regression coefficients \(\beta_0\) intercept parameter(optional) \(\beta_{1},\ldots,\beta_{p}\) weights or regression coefficients \(X\beta\) \(\beta_{0}+\beta_{1}X_{1}+\ldots+\beta_{p}X_{p}, X_{0}=1\) Model: connection between \(X\) and \(Y\) \(C(Y|X)\) : property of distribution of \(Y\) given \(X\), e.g. \(C(Y|X) = {\rm E}(Y|X)\) or \(\Pr(Y=1|X)\). 2.2 Model Formulations General regression model \[C(Y|X) = g(X) .\] General linear regression model \[C(Y|X) = g(X\beta) .\] Examples \[\begin{array}{ccc} C(Y|X) =& E(Y|X) =& X\beta, \\ Y|X &\sim N(X\beta,\sigma^{2}) & \\ C(Y|X) =& \Pr(Y=1|X) =& (1+\exp(-X\beta))^{-1} \\ \end{array}\] Linearize: \(h(C(Y|X))=X\beta, h(u)=g^{-1}(u)\) \[\begin{array}{ccc} C(Y|X)=\Pr(Y=1|X)&=&(1+\exp(-X\beta))^{-1} \\ h(u)=\mathrm{logit}(u)&=&\log(\frac{u}{1-u}) \\ h(C(Y|X)) &=& C'(Y|X)\ {\rm (link)} \end{array}\] General linear regression model: 2.3 Interpreting Model Parameters Suppose that \(X_{j}\) is linear and doesn’t interact with other \(X\)’s^1. ^1 Note that it is not necessary to “hold constant” all other variables to be able to interpret the effect of one predictor. It is sufficient to hold constant the weighted sum of all the variables other than \(X_{j}\). And in many cases it is not physically possible to hold other variables constant while varying one, e.g., when a model contains \(X\) and \(X^{2}\) (David Hoaglin, personal \[\begin{array}{ccc} C'(Y|X) &=& X\beta = \beta_{0}+\beta_{1}X_{1}+\ldots+\beta_{p}X_{p} \\ \beta_{j} &=& C'(Y|X_{1}, X_{2}, \ldots, X_{j}+1, \ldots, X_{p}) \\ &-& C'(Y|X_{1}, X_{2}, \ldots, X_{j}, \ ldots, X_{p}) \end{array}\] Drop \('\) from \(C'\) and assume \(C(Y|X)\) is property of \(Y\) that is linearly related to weighted sum of \(X\)’s. 2.3.1 Nominal Predictors Nominal (polytomous) factor with \(k\) levels : \(k-1\) indicator variables. E.g. \(T=J,K,L,M\): \[\begin{array}{ccc} C(Y|T=J) &=& \beta_{0} \nonumber\\ C(Y|T=K) &=& \beta_{0}+\beta_{1}\\ C(Y|T=L) &=& \beta_{0}+\beta_{2}\nonumber\\ C(Y|T=M) &=& \beta_{0}+\beta_{3}\nonumber . \end{array}\] \[C(Y|T) = X\beta= \beta_{0}+\beta_{1} X_{1}+\beta_{2} X_{2}+\beta_{3} X_{3},\] \[\begin{array}{ccc} X_{1} = 1 & {\rm if} \ \ T=K, & 0 \ \ {\rm otherwise} \nonumber\\ X_{2} = 1 & {\rm if} \ \ T=L, & 0 \ \ {\rm otherwise} \\ X_{3} = 1 & {\rm if} \ \ T=M, & 0 \ \ {\rm otherwise} \ nonumber. \end{array}\] The test for any differences in the property \(C(Y)\) between treatments is \(H_{0}:\beta_{1}=\beta_{2}=\beta_{3}=0\). 2.3.2 Interactions \(X_{1}\) and \(X_{2}\), effect of \(X_{1}\) on \(Y\) depends on level of \(X_{2}\). One way to describe interaction is to add \(X_{3}=X_{1}X_{2}\) to model: \[C(Y|X) = \beta_{0}+\beta_{1}X_{1}+\ beta_{2}X_{2}+\beta_{3}X_{1}X_{2} .\] \[\begin{array}{ccc} C(Y|X_{1}+1, X_{2})&-&C(Y|X_{1}, X_{2})\nonumber\\ &=&\beta_{0}+\beta_{1} (X_{1}+1)+\beta_{2}X_{2}\nonumber\\ &+&\beta_{3} (X_{1}+1)X_{2}\\ &-&[\beta_{0}+\beta_{1}X_{1}+\beta_{2} X_{2}+\beta_{3}X_{1}X_{2}]\nonumber\\ &=&\beta_{1}+\beta_{3}X_{2} .\nonumber \end{array}\] One-unit increase in \(X_{2}\) on \(C(Y|X)\) : \(\beta_{2}+\beta_{3} X_{1}\). Worse interactions: If \(X_{1}\) is binary, the interaction may take the form of a difference in shape (and/or distribution) of \(X_{2}\) vs. \(C(Y)\) depending on whether \(X_{1}=0\) or \(X_{1}=1\) (e.g. logarithm vs. square root). This paper describes how interaction effects can be misleading. 2.3.3 Example: Inference for a Simple Model Postulate the model \(C(Y|age,sex) = \beta_{0}+\beta_{1} age + \beta_{2} [sex=f] + \beta_{3} age [sex=f]\) where \([sex=f]\) is an indicator indicator variable for sex=female, i.e., the reference cell is sex=male^2. ^2 You can also think of the last part of the model as being \(\beta_{3} X_{3}\), where \(X_{3} = age \times [sex=f]\). Model assumes 1. age is linearly related to \(C(Y)\) for males, 2. age is linearly related to \(C(Y)\) for females, and 3. interaction between age and sex is simple 4. whatever distribution, variance, and independence assumptions are appropriate for the model being considered. Interpretations of parameters: Parameter Meaning \(\beta_{0}\) \(C(Y | age=0, sex=m)\) \(\beta_{1}\) \(C(Y | age=x+1, sex=m) - C(Y | age=x, sex=m)\) \(\beta_{2}\) \(C(Y | age=0, sex=f) - C(Y | age=0, sex=m)\) \(\beta_{3}\) \(C(Y | age=x+1, sex=f) - C(Y | age=x, sex=f) -\) \([C(Y | age=x+1, sex=m) - C(Y | age=x, sex=m)]\) \(\beta_{3}\) is the difference in slopes (female – male). When a high-order effect such as an interaction effect is in the model, be sure to interpret low-order effects by finding out what makes the interaction effect ignorable. In our example, the interaction effect is zero when age=0 or sex is male. Hypotheses that are usually inappropriate: 1. \(H_{0}: \beta_{1}=0\): This tests whether age is associated with \(Y\) for males 2. \(H_{0}: \beta_{2}=0\): This tests whether sex is associated with \(Y\) for zero year olds More useful hypotheses follow. For any hypothesis need to • Write what is being tested • Translate to parameters tested • List the alternative hypothesis • Not forget what the test is powered to detect □ Test against nonzero slope has maximum power when linearity holds □ If true relationship is monotonic, test for non-flatness will have some but not optimal power □ Test against a quadratic (parabolic) shape will have some power to detect a logarithmic shape but not against a sine wave over many cycles • Useful to write e.g. “\(H_{a}:\) age is associated with \(C(Y)\), powered to detect a linear relationship” Most Useful Tests for Linear age \(\times\) sex Model Null or Alternative Hypothesis Mathematical Statement Effect of age is independent of sex or Effect of sex is independent of age or \(H_{0}: \beta_{3}=0\) age and sex are additive age effects are parallel age interacts with sex age modifies effect of sex \(H_{a}: \beta_{3} \neq 0\) sex modifies effect of age sex and age are non-additive (synergistic) age is not associated with \(Y\) \(H_{0}: \beta_{1}=\beta_{3}=0\) age is associated with \(Y\) \(H_{a}: \beta_{1} \neq 0 \textrm{~or~} \beta_{3} \neq 0\) age is associated with \(Y\) for either females or males sex is not associated with \(Y\) \(H_{0}: \beta_{2}=\beta_{3}=0\) sex is associated with \(Y\) \(H_{a}: \beta_{2} \neq 0 \textrm{~or~} \beta_{3} \neq 0\) sex is associated with \(Y\) for some value of age Neither age nor sex is associated with \(Y\) \(H_{0}: \beta_{1}=\beta_{2}=\beta_{3}=0\) Either age or sex is associated with \(Y\) \(H_{a}: \beta_{1} \neq 0 \textrm{~or~} \beta_{2} \neq 0 \textrm{~or~} \beta_{3} \neq 0\) Note: The last test is called the global test of no association. If an interaction effect present, there is both an age and a sex effect. There can also be age or sex effects when the lines are parallel. The global test of association (test of total association) has 3 d.f. instead of 2 (age + sex) because it allows for unequal slopes. 2.3.4 Review of Composite (Chunk) Tests we may want to jointly test the association between all body measurements and response, holding age and sex constant. • This 3 d.f. test may be obtained two ways: □ Remove the 3 variables and compute the change in \(SSR\) or \(SSE\) □ Test \(H_{0}:\beta_{3}=\beta_{4}=\beta_{5}=0\) using matrix algebra (e.g., anova(fit, weight, waist, tricep) if fit is a fit object created by the R rms package) 2.4 Relaxing Linearity Assumption for Continuous Predictors 2.4.1 Avoiding Categorization Natura non facit saltus (Nature does not make jumps) — Gottfried Wilhelm Leibniz • Relationships seldom linear except when predicting one variable from itself measured earlier • Categorizing continuous predictors into intervals is a disaster; see Royston et al. (2006), D. G. Altman (1991), Hilsenbeck & Clark (1996), Lausen & Schumacher (1996), D. G. Altman et al. (1994), Belcher (1992), Faraggi & Simon (1996), Ragland (1992), Suissa & Blais (1995), Buettner et al. (1997), Maxwell & Delaney (1993), Schulgen et al. (1994), Douglas G. Altman (1998), Holländer et al. (2004), Moser & Coombs (2004), Wainer (2006), Fedorov et al. (2009), Giannoni et al. (2014), Collins et al. (2016), Bennette & Vickers (2012) and Biostatistics for Biomedical Research, Chapter • Some problems caused by this approach: 1. Estimated values have reduced precision, and associated tests have reduced power 2. Categorization assumes relationship between predictor and response is flat within intervals; far less reasonable than a linearity assumption in most cases 3. To make a continuous predictor be more accurately modeled when categorization is used, multiple intervals are required 4. Because of sample size limitations in the very low and very high range of the variable, the outer intervals (e.g., outer quintiles) will be wide, resulting in significant heterogeneity of subjects within those intervals, and residual confounding 5. Categorization assumes that there is a discontinuity in response as interval boundaries are crossed. Other than the effect of time (e.g., an instant stock price drop after bad news), there are very few examples in which such discontinuities have been shown to exist. 6. Categorization only seems to yield interpretable estimates. E.g. odds ratio for stroke for persons with a systolic blood pressure \(> 160\) mmHg compared to persons with a blood pressure \(\leq 160\) mmHg \(\rightarrow\) interpretation of OR depends on distribution of blood pressures in the sample (the proportion of subjects \(> 170\), \(> 180\), etc.). If blood pressure is modeled as a continuous variable (e.g., using a regression spline, quadratic, or linear effect) one can estimate the ratio of odds for exact settings of the predictor, e.g., the odds ratio for 200 mmHg compared to 120 mmHg. 7. Categorization does not condition on full information. When, for example, the risk of stroke is being assessed for a new subject with a known blood pressure (say 162~mmHg), the subject does not report to her physician “my blood pressure exceeds 160” but rather reports 162 mmHg. The risk for this subject will be much lower than that of a subject with a blood pressure of 200 mmHg. 8. If cutpoints are determined in a way that is not blinded to the response variable, calculation of \(P\)-values and confidence intervals requires special simulation techniques; ordinary inferential methods are completely invalid. E.g.: cutpoints chosen by trial and error utilizing \(Y\), even informally \(\rightarrow\) \(P\)-values too small and CLs not accurate^3. 9. Categorization not blinded to \(Y\) \(\rightarrow\) biased effect estimates (D. G. Altman et al. (1994), Schulgen et al. (1994)) 10. “Optimal” cutpoints do not replicate over studies. Holländer et al. (2004) state that “… the optimal cutpoint approach has disadvantages. One of these is that in almost every study where this method is applied, another cutpoint will emerge. This makes comparisons across studies extremely difficult or even impossible. Altman et al. point out this problem for studies of the prognostic relevance of the S-phase fraction in breast cancer published in the literature. They identified 19 different cutpoints used in the literature; some of them were solely used because they emerged as the ‘optimal’ cutpoint in a specific data set. In a meta-analysis on the relationship between cathepsin-D content and disease-free survival in node-negative breast cancer patients, 12 studies were in included with 12 different cutpoints Interestingly, neither cathepsin-D nor the S-phase fraction are recommended to be used as prognostic markers in breast cancer in the recent update of the American Society of Clinical Oncology.” Giannoni et al. (2014) demonstrated that many claimed “optimal cutpoints” are just the observed median values in the sample, which happens to optimize statistical power for detecting a separation in outcomes. 11. Disagreements in cutpoints (which are bound to happen whenever one searches for things that do not exist) cause severe interpretation problems. One study may provide an odds ratio for comparing body mass index (BMI) \(> 30\) with BMI \(\leq 30\), another for comparing BMI \(> 28\) with BMI \(\leq 28\). Neither of these has a good definition and the two estimates are not comparable. 12. Cutpoints are arbitrary and manipulable; cutpoints can be found that can result in both positive and negative associations Wainer (2006). 13. If a confounder is adjusted for by categorization, there will be residual confounding that can be explained away by inclusion of the continuous form of the predictor in the model in addition to the categories. ^3 If a cutpoint is chosen that minimizes the \(P\)-value and the resulting \(P\)-value is 0.05, the true type I error can easily be above 0.5 Holländer et al. (2004). • To summarize: The use of a (single) cutpoint \(c\) makes many assumptions, including: 1. Relationship between \(X\) and \(Y\) is discontinuous at \(X=c\) and only \(X=c\) 2. \(c\) is correctly found as the cutpoint 3. \(X\) vs. \(Y\) is flat to the left of \(c\) 4. \(X\) vs. \(Y\) is flat to the right of \(c\) 5. The choice of \(c\) does not depend on the values of other predictors Interactive demonstration of power loss of categorization vs. straight line and quadratic fits in OLS, with varying degree of nonlinearity and noise added to \(X\) (must run in RStudio) Example^4 of misleading results from creating intervals (here, deciles) of a continuous predictor. Final interval is extremely heterogeneous and is greatly influenced by very large glycohemoglobin values, creating the false impression of an inflection point at 5.9. ^4 From NHANES III; Diabetes Care 32:1327-34; 2009 adapted from Diabetes Care 20:1183-1197; 1997. See this for excellent graphical examples of the harm of categorizing predictors, especially when using quantile groups. 2.4.2 Simple Nonlinear Terms \[C(Y|X_{1}) = \beta_{0}+\beta_{1} X_{1}+\beta_{2} X_{1}^{2} .\] • \(H_{0}:\) model is linear in \(X_{1}\) vs. \(H_{a}:\) model is quadratic in \(X_{1} \equiv H_{0}: \beta_{2}=0\). • Test of linearity may be powerful if true model is not extremely non-parabolic • Predictions not accurate in general as many phenomena are non-quadratic • Can get more flexible fits by adding powers higher than 2 • But polynomials do not adequately fit logarithmic functions or “threshold” effects, and have unwanted peaks and valleys. 2.4.3 Splines for Estimating Shape of Regression Function and Determining Predictor Transformations • Draftsman’s spline: flexible strip of metal or rubber used to trace curves. • Spline Function: piecewise polynomial • Linear Spline Function: piecewise linear function □ Bi-linear regression: model is \(\beta_{0}+\beta_{1}X\) if \(X \leq a\), \(\beta_{2}+\beta_{3}X\) if \(X > a\). □ Problem with this notation: two lines not constrained to join □ To force simple continuity: \(\beta_{0} + \beta_{1}X + \beta_{2}(X-a)\times [X>a] = \beta_{0} + \beta_{1}X_{1} + \beta_{2}X_{2}\), where \(X_{2} = (X_{1}-a) \times [X_{1} > a]\). □ Slope is \(\beta_{1}, X \leq a\), \(\beta_{1}+\beta_{2}, X > a\). □ \(\beta_{2}\) is the slope increment as you pass \(a\) QSee this for a nice review and information about resources in R. More generally: \(x\)-axis divided into intervals with endpoints \(a,b,c\) (knots). \[\begin{array}{ccc} f(X) &=& \beta_{0}+\beta_{1}X+\beta_{2}(X-a)_{+}+\beta_{3}(X-b)_{+} \nonumber\\ &+& \beta_{4}(X-c)_{+} , \end{array}\] \[\begin{array}{ccc} (u)_{+}=&u,&u>0 ,\nonumber\\ &0,&u\leq0 . \end{array}\] \[\begin{array}{ccc} f(X) & = \beta_{0}+\beta_{1}X, & X\leq a \nonumber \\ & = \beta_{0}+\beta_{1}X+\beta_{2}(X-a) & a<X\ leq b \\ & = \beta_{0}+\beta_{1}X+\beta_{2}(X-a)+\beta_{3}(X-b) & b<X\leq c \nonumber\\ & = \beta_{0}+\beta_{1}X+\beta_{2}(X-a) & \nonumber\\ & + \beta_{3}(X-b)+\beta_{4}(X-c) & c<X. \nonumber \end Figure 2.1: A linear spline function with knots at \(a = 1, b = 3, c = 5\). \[C(Y|X) = f(X) = X\beta,\] where \(X\beta = \beta_{0}+\beta_{1} X_{1}+\beta_{2} X_{2}+\beta_{3}X_{3}+\beta_{4} X_{4}\), and \[\begin{array}{cc} X_{1}=X & X_{2} = (X-a)_{+}\nonumber\\ X_{3}=(X-b)_{+} & X_{4} = (X-c)_{+} . \end{array}\] Overall linearity in \(X\) can be tested by testing \(H_{0} : \beta_{2} = \beta_{3} = \beta_{4} = 0\). 2.4.4 Cubic Spline Functions Cubic splines are smooth at knots (function, first and second derivatives agree) — can’t see joins. \[\begin{array}{ccc} f(X) &=& \beta_{0}+\beta_{1}X+\beta_{2}X^{2}+\beta_{3}X^{3}\nonumber\\ &+&\beta_{4}(X-a)_{+}^{3}+ \beta_{5}(X-b)_{+}^{3}+\beta_{6}(X-c)_{+}^{3}\\ &=& X\beta\nonumber \end{array} \] \[\begin{array}{cc} X_{1}=X & X_{2}=X^{2}\nonumber\\ X_{3}=X^{3} & X_{4}=(X-a)_{+}^{3}\\ X_{5}=(X-b)_{+}^{3} & X_{6}=(X-c)_{+}^{3}\nonumber . \end{array}\] \(k\) knots \(\rightarrow k+3\) coefficients excluding intercept. \(X^2\) and \(X^3\) terms must be included to allow nonlinearity when \(X < a\). stats.stackexchange.com/questions/421964 has some useful descriptions of what happens at the knots, e.g.: Knots are where different cubic polynomials are joined, and cubic splines force there to be three levels of continuity (the function, its slope, and its acceleration or second derivative (slope of the slope) do not change) at these points. At the knots the jolt (third derivative or rate of change of acceleration) is allowed to change suddenly, meaning the jolt is allowed to be discontinuous at the knots. Between knots, jolt is constant. The following graphs show the function and its first three derivatives (all further derivatives are zero) for the function given by \(f(x) = x + x^{2} + 2x^{3} + 10(x - 0.25)^{3}_{+} - 50(x - 0.5)^ {3}_{+} -100(x - 0.75)^{3}_{+}\) for \(x\) going from 0 to 1, where there are three knots, at \(x=0.25, 0.5, 0.75\). spar(bty='l', mfrow=c(4,1), bot=-1.5) x <- seq(0, 1, length=500) x1 <- pmax(x - .25, 0) x2 <- pmax(x - .50, 0) x3 <- pmax(x - .75, 0) b1 <- 1; b2 <- 1; b3 <- 2; b4 <- 10; b5 <- -50; b6 <- -100 y <- b1 * x + b2 * x^2 + b3 * x^3 + b4 * x1^3 + b5 * x2^3 + b6 * x3^3 y1 <- b1 + 2*b2*x + 3*b3*x^2 + 3*b4*x1^2 + 3*b5*x2^2 + 3*b6*x3^2 y2 <- 2*b2 + 6*b3*x + 6*b4*x1 + 6*b5*x2 + 6*b6*x3 y3 <- 6*b3 + 6*b4*(x1>0)+ 6*b5*(x2>0) + 6*b6*(x3>0) g <- function() abline(v=(1:3)/4, col=gray(.85)) plot(x, y, type='l', ylab=''); g() text(0, 1.5, 'Function', adj=0) plot(x, y1, type='l', ylab=''); g() text(0, -15, 'First Derivative: Slope\nRate of Change of Function', plot(x, y2, type='l', ylab=''); g() text(0, -125, 'Second Derivative: Acceleration\nRate of Change of Slope', plot(x, y3, type='l', ylab=''); g() text(0, -400, 'Third Derivative: Jolt\nRate of Change of Acceleration', Figure 2.2: A regular cubic spline function with three levels of continuity that prevent the human eye from detecting the knots. Also shown is the function’s first three derivatives. Knots are located at \(x=0.25, 0.5, 0.75\). For \(x\) beyond the outer knots, the function is not restricted to be linear. Linearity would imply an acceleration of zero. Vertical lines are drawn at the knots. 2.4.5 Restricted Cubic Splines Stone and Koo Stone & Koo (1985): cubic splines poorly behaved in tails. Constrain function to be linear in tails. \(k+3 \rightarrow k-1\) parameters Devlin & Weeks (1986). To force linearity when \(X < a\): \(X^2\) and \(X^3\) terms must be omitted To force linearity when \(X\) is beyond the last knot: last two \(\beta\) s are redundant, i.e., are just combinations of the other \(\beta\) s. The restricted spline function with \(k\) knots \(t_{1}, \ldots, t_{k}\) is given by Devlin & Weeks (1986) \[f(X) = \beta_{0}+\beta_{1} X_{1}+\beta_{2} X_{2}+\ldots+\beta_{k-1} X_{k-1},\] where \(X_ {1} = X\) and for \(j=1, \ldots, k-2\), \[\begin{array}{ccc} X_{j+1} &= &(X-t_{j})_{+}^{3}-(X-t_{k-1})_{+}^{3} (t_{k}-t_{j})/(t_{k}-t_{k-1})\nonumber\\ &+&(X-t_{k})_{+}^{3} (t_{k-1}-t_{j})/(t_{k}-t_{k-1}). \end{array} \tag{2.1}\] \(X_{j}\) is linear in \(X\) for \(X\geq t_{k}\). For numerical behavior and to put all basis functions for \(X\) on the same scale, R Hmisc and rms package functions by default divide the terms above by \(\tau = (t_{k} - t_{1})^{2}\). Figure 2.3: Restricted cubic spline component variables for \(k = 5\) and knots at \(X = .05, .275, .5, .725\), and \(.95\). Nonlinear basis functions are scaled by \(\tau\). The left panel is a \(y \)–magnification of the right panel. Fitted functions such as those in Figure 2.4 will be linear combinations of these basis functions as long as knots are at the same locations used here. spar(left=-2, bot=2, mfrow=c(2,2), ps=13) x <- seq(0, 1, length=300) for(nk in 3:6) { knots <- seq(.05, .95, length=nk) xx <- rcspline.eval(x, knots=knots, inclx=T) for(i in 1 : (nk - 1)) xx[,i] <- (xx[,i] - min(xx[,i])) / (max(xx[,i]) - min(xx[,i])) for(i in 1 : 20) { beta <- 2*runif(nk-1) - 1 xbeta <- xx %*% beta + 2 * runif(1) - 1 xbeta <- (xbeta - min(xbeta)) / (max(xbeta) - min(xbeta)) if(i == 1) { plot(x, xbeta, type="l", lty=1, xlab=expression(X), ylab='', bty="l") title(sub=paste(nk,"knots"), adj=0, cex=.75) for(j in 1 : nk) arrows(knots[j], .04, knots[j], -.03, angle=20, length=.07, lwd=1.5) else lines(x, xbeta, col=i) Figure 2.4: Some typical restricted cubic spline functions for \(k = 3, 4, 5, 6\). The \(y\)–axis is \(X\beta\). Arrows indicate knots. These curves were derived by randomly choosing values of \(\ beta\) subject to standard deviations of fitted functions being normalized. Interactive demonstration of linear and cubic spline fitting, plus ordinary \(4^{th}\) order polynomial. This can be run with RStudio or in an ordinary R session. Paul Lambert’s excellent self-contained interactive demonstrations of continuity restrictions, cubic polynomial, linear spline, cubic spline, and restricted cubic spline fitting is at pclambert.net/ interactivegraphs. Jordan Gauthier has another nice interactive demonstration at drjgauthier.shinyapps.io/spliny. See also the excellent resources from Michael Clark here and here. Once \(\beta_{0}, \ldots, \beta_{k-1}\) are estimated, the restricted cubic spline can be restated in the form \[\begin{array}{ccc} f(X) &=& \beta_{0}+\beta_{1}X+\beta_{2}(X-t_{1})_{+}^{3}+\beta_{3}(X-t_{2})_{+}^{3}\nonumber\\ && +\ldots+ \beta_{k+1}(X-t_{k})_{+}^{3} \end{array} \tag{2.2}\] by dividing \(\beta_{2},\ldots,\beta_{k-1}\) by \(\tau\) and computing \[\begin{array}{ccc} \beta_{k} &=& [\beta_{2}(t_{1}-t_{k})+\beta_{3}(t_{2}-t_{k})+\ldots\nonumber\\ && +\beta_{k-1}(t_{k-2}-t_{k})]/(t_{k}-t_{k-1})\nonumber\\ \beta_{k+1} &= & [\beta_{2}(t_{1}-t_ {k-1})+\beta_{3}(t_{2}-t_{k-1})+\ldots\\ && + \beta_{k-1}(t_{k-2}-t_{k-1})]/(t_{k-1}-t_{k})\nonumber . \end{array}\] A test of linearity in X can be obtained by testing \[H_{0} : \beta_{2} = \beta_{3} = \ldots = \beta_{k-1} = 0.\] Example: Selvin et al. (2010) 2.4.6 Choosing Number and Position of Knots • Knots are specified in advance in regression splines • Locations not important in most situations— Stone (1986), Durrleman & Simon (1989) • Place knots where data exist — fixed quantiles of predictor’s marginal distribution • Fit depends more on choice of \(k\) 3 .10 .5 .90 4 .05 .35 .65 .95 5 .05 .275 .5 .725 .95 6 .05 .23 .41 .59 .77 .95 7 .025 .1833 .3417 .5 .6583 .8167 .975 \(n<100\) – replace outer quantiles with 5th smallest and 5th largest \(X\) (Stone & Koo (1985)). Choice of \(k\): • Flexibility of fit vs. \(n\) and variance • Usually \(k=3,4,5\). Often \(k=4\) • Large \(n\) (e.g. \(n\geq 100\)) – \(k=5\) • Small \(n\) (\(<30\), say) – \(k=3\) • Can use Akaike’s information criterion (AIC) (Atkinson (1980), van Houwelingen & le Cessie (1990)) to choose \(k\) • This chooses \(k\) to maximize model likelihood ratio \(\chi^{2} - 2k\). See Govindarajulu et al. (2007) for a comparison of restricted cubic splines, fractional polynomials, and penalized splines. 2.4.7 Nonparametric Regression • Estimate tendency (mean or median) of \(Y\) as a function of \(X\) • Few assumptions • Especially handy when there is a single \(X\) • Plotted trend line may be the final result of the analysis • Simplest smoother: moving average \(X\): 1 2 3 5 8 \(Y\): 2.1 3.8 5.7 11.1 17.2 \[\begin{array}{ccc} \hat{E}(Y | X=2) &=& \frac{2.1+3.8+5.7}{3} \\ \hat{E}(Y | X=\frac{2+3+5}{3}) &=& \frac{3.8+5.7+11.1}{3} \end{array}\] • overlap OK • problem in estimating \(E(Y)\) at outer \(X\)-values • estimates very sensitive to bin width • Moving linear regression far superior to moving avg. (moving flat line) • Cleveland (1979) moving linear regression smoother loess (locally weighted least squares) is the most popular smoother. To estimate central tendency of \(Y\) at \(X=x\): □ take all the data having \(X\) values within a suitable interval about \(x\) (default is \(\frac{2}{3}\) of the data) □ fit weighted least squares linear regression within this neighborhood □ points near \(x\) given the most weight^5 □ points near extremes of interval receive almost no weight □ loess works much better at extremes of \(X\) than moving avg. □ provides an estimate at each observed \(X\); other estimates obtained by linear interpolation □ outlier rejection algorithm built-in • loess works for binary \(Y\) — just turn off outlier detection • Other popular smoother: Friedman’s “super smoother” • For loess or supsmu amount of smoothing can be controlled by analyst • Another alternative: smoothing splines^6 • Smoothers are very useful for estimating trends in residual plots ^5 Weight here means something different than regression coefficient. It means how much a point is emphasized in developing the regression coefficients. ^6 These place knots at all the observed data points but penalize coefficient estimates towards smoothness. 2.4.8 Advantages of Regression Splines over Other Methods Regression splines have several advantages (Harrell et al. (1988)): • Parametric splines can be fitted using any existing regression program • Regression coefficients estimated using standard techniques (ML or least squares), formal tests of no overall association, linearity, and additivity, confidence limits for the estimated regression function are derived by standard theory. • The fitted function directly estimates transformation predictor should receive to yield linearity in \(C(Y|X)\). • Even when a simple transformation is obvious, spline function can be used to represent the predictor in the final model (and the d.f. will be correct). Nonparametric methods do not yield a prediction equation. • Extension to non-additive models. Multi-dimensional nonparametric estimators often require burdensome computations. 2.5 Recursive Partitioning: Tree-Based Models Breiman et al. (1984): CART (Classification and Regression Trees) — essentially model-free • Find predictor so that best possible binary split has maximum value of some statistic for comparing 2 groups • Within previously formed subsets, find best predictor and split maximizing criterion in the subset • Proceed in like fashion until \(<k\) obs. remain to split • Summarize \(Y\) for the terminal node (e.g., mean, modal category) • Prune tree backward until it cross-validates as well as its “apparent” accuracy, or use shrinkage Advantages/disadvantages of recursive partitioning: • Does not require functional form for predictors • Does not assume additivity — can identify complex interactions • Can deal with missing data flexibly • Interactions detected are frequently spurious • Does not use continuous predictors effectively • Penalty for overfitting in 3 directions • Often tree doesn’t cross-validate optimally unless pruned back very conservatively • Very useful in messy situations or those in which overfitting is not as problematic (confounder adjustment using propensity scores Cook & Goldman (1988); missing value imputation) See Austin et al. (2010). 2.5.1 New Directions in Predictive Modeling The approaches recommended in this course are • fitting fully pre-specified models without deletion of “insignificant” predictors • using data reduction methods (masked to \(Y\)) to reduce the dimensionality of the predictors and then fitting the number of parameters the data’s information content can support • use shrinkage (penalized estimation) to fit a large model without worrying about the sample size. The data reduction approach can yield very interpretable, stable models, but there are many decisions to be made when using a two-stage (reduction/model fitting) approach, Newer approaches are evolving, including the following. These new approach handle continuous predictors well, unlike recursive partitioning. • lasso (shrinkage using L1 norm favoring zero regression coefficients) - Tibshirani (1996), Steyerberg et al. (2000) • elastic net (combination of L1 and L2 norms that handles the \(p > n\) case better than the lasso) Zou & Hastie (2005) • adaptive lasso H. H. Zhang & Lu (2007), H. Wang & Leng (2007) • more flexible lasso to differentially penalize for variable selection and for regression coefficient estimation (Radchenko & James (2008)) • group lasso to force selection of all or none of a group of related variables (e.g., indicator variables representing a polytomous predictor) • group lasso-like procedures that also allow for variables within a group to be removed (S. Wang et al. (2009)) • sparse-group lasso using L1 and L2 norms to achieve spareness on groups and within groups of variables (N. Simon et al. (2013)) • adaptive group lasso (Wang & Leng) • Breiman’s nonnegative garrote (Xiong (2010)) • “preconditioning”, i.e., model simplification after developing a “black box” predictive model - Paul et al. (2008),Nott & Leng (2010) • sparse principal components analysis to achieve parsimony in data reduction Witten & Tibshirani (2008),Zhou et al. (2006),Leng & Wang (2009),Lee et al. (2010) • bagging, boosting, and random forests T. Hastie et al. (2008) One problem prevents most of these methods from being ready for everyday use: they require scaling predictors before fitting the model. When a predictor is represented by nonlinear basis functions, the scaling recommendations in the literature are not sensible. There are also computational issues and difficulties obtaining hypothesis tests and confidence intervals. When data reduction is not required, generalized additive models T. J. Hastie & Tibshirani (1990), Wood (2006) should also be considered. 2.5.2 Choosing Between Machine Learning and Statistical Modeling • Statistical models allow for complexity (nonlinearity, interaction) • Easy to allow every predictor to have nonlinear effect • Easy to handle unlimited numbers of candidate predictors if assume additivity (e.g., using ridge regression, lasso, elastic net) • Interactions should be pre-specified • Machine learning is gaining attention but is oversold in some settings • Researchers are under the mistaken impression that machine learning can be used on small samples Considerations in Choosing One Approach over Another A statistical model may be the better choice if • Uncertainty is inherent and the signal:noise ratio is not large; even with identical twins, one twin may get colon cancer and the other not; model tendencies instead of doing classification • One could never have perfect training data, e.g., cannot repeatedly test one subject and have outcomes assessed without error • One wants to isolate effects of a small number of variables • Uncertainty in an overall prediction or the effect of a predictor is sought • Additivity is the dominant way that predictors affect the outcome, or interactions are relatively small in number and can be pre-specified • The sample size isn’t huge • One wants to isolate (with a predominantly additive effect) the effects of “special” variables such as treatment or a risk factor • One wants the entire model to be interpretable Machine learning may be the better choice if • The signal:noise ratio is large and the outcome being predicted doesn’t have a strong component of randomness; e.g., in visual pattern recognition an object must be an “E” or not an “E” • The learning algorithm can be trained on an unlimited number of exact replications (e.g., 1000 repetitions of each letter in the alphabet or of a certain word to be translated to German) • Overall prediction is the goal, without being able to succinctly describe the impact of any one variable (e.g., treatment) • One is not very interested in estimating uncertainty in forecasts or in effects of select predictors • Non-additivity is expected to be strong and can’t be isolated to a few pre-specified variables (e.g., in visual pattern recognition the letter “L” must have both a dominating vertical component and a dominating horizontal component) • The sample size is huge (van der Ploeg et al. (2014)) • One does not need to isolate the effect of a special variable such as treatment • One does not care that the model is a “black box” 2.6 Multiple Degree of Freedom Tests of Association \[C(Y|X) = \beta_{0}+\beta_{1}X_{1}+\beta_{2}X_{2}+\beta_{3}X_{2}^{2} ,\] \(H_{0}: \beta_{2}=\beta_{3}=0\) with 2 d.f. to assess association between \(X_{2}\) and outcome. In the 5-knot restricted cubic spline model \[C(Y|X) = \beta_{0}+\beta_{1}X+\beta_{2}X'+\beta_{3}X''+\beta_{4}X''' ,\] \(H_{0}: \beta_{1}=\ldots=\beta_{4}=0\) • Test of association: 4 d.f. • Insignificant \(\rightarrow\) dangerous to interpret plot • What to do if 4 d.f. test insignificant, 3 d.f. test for linearity insig., 1 d.f. test sig. after delete nonlinear terms? Grambsch & O’Brien (1991) elegantly described the hazards of pretesting • Studied quadratic regression • Showed 2 d.f. test of association is nearly optimal even when regression is linear if nonlinearity entertained • Considered ordinary regression model \(E(Y|X) =\beta_{0}+\beta_{1}X+\beta_{2}X^{2}\) • Two ways to test association between \(X\) and \(Y\) • Fit quadratic model and test for linearity (\(H_{0}: \beta_{2}=0\)) • \(F\)-test for linearity significant at \(\alpha=0.05\) level \(\rightarrow\) report as the final test of association the 2 d.f. \(F\) test of \(H_{0}: \beta_{1}=\beta_{2}=0\) • If the test of linearity insignificant, refit without the quadratic term and final test of association is 1 d.f. test, \(H_{0}:\beta_{1}=0 | \beta_{2}=0\) • Showed that type I error \(> \alpha\) • Fairly accurate \(P\)-value obtained by instead testing against \(F\) with 2 d.f. even at second stage • Cause: are retaining the most significant part of \(F\) • BUT if test against 2 d.f. can only lose power when compared with original \(F\) for testing both \(\beta\)s • \(SSR\) from quadratic model \(> SSR\) from linear model 2.7 Assessment of Model Fit 2.7.1 Regression Assumptions The general linear regression model is \[C(Y|X) = X\beta =\beta_{0}+\beta_{1}X_{1}+\beta_{2}X_{2}+\ldots+\beta_{k}X_{k} .\] Verify linearity and additivity. Special case: \[C(Y|X) = \beta_{0}+\beta_ {1}X_{1}+\beta_{2}X_{2},\] where \(X_{1}\) is binary and \(X_{2}\) is continuous. Figure 2.5: Regression assumptions for one binary and one continuous predictor Methods for checking fit: 1. Fit simple linear additive model and check examine residual plots for patterns • For OLS: box plots of \(e\) stratified by \(X_{1}\), scatterplots of \(e\) vs. \(X_{2}\) and \(\hat{Y}\), with trend curves (want flat central tendency, constant variability) • For normality, qqnorm plots of overall and stratified residuals Advantage: Simplicity • Can only compute standard residuals for uncensored continuous response • Subjective judgment of non-randomness • Hard to handle interaction • Hard to see patterns with large \(n\) (trend lines help) • Seeing patterns does not lead to corrective action 1. Scatterplot of \(Y\) vs. \(X_{2}\) using different symbols according to values of \(X_{1}\) Advantages: Simplicity, can see interaction • Scatterplots cannot be drawn for binary, categorical, or censored \(Y\) • Patterns difficult to see if relationships are weak or \(n\) large 1. Stratify the sample by \(X_{1}\) and quantile groups (e.g. deciles) of \(X_{2}\); estimate \(C(Y|X_{1},X_{2})\) for each stratum Advantages: Simplicity, can see interactions, handles censored \(Y\) (if you are careful) • Requires large \(n\) • Does not use continuous var. effectively (no interpolation) • Subgroup estimates have low precision • Dependent on binning method 1. Separately for levels of \(X_{1}\) fit a nonparametric smoother relating \(X_{2}\) to \(Y\) Advantages: All regression aspects of the model can be summarized efficiently with minimal assumptions • Does not apply to censored \(Y\) • Hard to deal with multiple predictors 1. Fit flexible nonlinear parametric model • One framework for examining the model assumptions, fitting the model, drawing formal inference • d.f. defined and all aspects of statistical inference “work as advertised” • Complexity • Generally difficult to allow for interactions when assessing patterns of effects Confidence limits, formal inference can be problematic for methods 1-4. Restricted cubic spline works well for method 5. \[\begin{array}{ccc} \hat{C}(Y|X) &=& \hat{\beta}_{0}+\hat{\beta}_{1}X_{1}+\hat{\beta}_{2}X_{2}+\hat{\beta}_{3}X_{2}'+\hat{\beta}_{4}X_{2}'' \nonumber\\ &=& \hat{\beta}_{0}+\hat{\beta}_{1}X_{1}+\hat {f}(X_{2}) , \end{array}\] where \[\hat{f}(X_{2}) = \hat{\beta}_{2}X_{2}+\hat{\beta}_{3}X_{2}'+\hat{\beta}_{4}X_{2}'' ,\] \(\hat{f}(X_{2})\) spline-estimated transformation of \(X_{2}\). • Plot \(\hat{f}(X_{2})\) vs. \(X_{2}\) • \(n\) large \(\rightarrow\) can fit separate functions by \(X_{1}\) • Test of linearity: \(H_{0}:\beta_{3}=\beta_{4}=0\) • Few good reasons to do the test other than to demonstrate that linearity is not a good default assumption • Nonlinear \(\rightarrow\) use transformation suggested by spline fit or keep spline terms • Tentative transformation \(g(X_{2})\) \(\rightarrow\) check adequacy by expanding \(g(X_{2})\) in spline function and testing linearity • Can find transformations by plotting \(g(X_{2})\) vs. \(\hat{f}(X_{2})\) for variety of \(g\) • Multiple continuous predictors \(\rightarrow\) expand each using spline • Example: assess linearity of \(X_{2}, X_{3}\) \[\begin{array}{ccc} C(Y|X) &=& \beta_{0}+\beta_{1}X_{1}+\beta_{2}X_{2}+\beta_{3}X_{2}'+\beta_{4}X_{2}'' \nonumber\\ &+& \beta_{5}X_{3}+\beta_{6}X_{3}'+\beta_{7}X_{3}'' , \end{array}\] Overall test of linearity \(H_{0}: \beta_{3}=\beta_{4}=\beta_{6}=\beta_{7}=0\), with 4 d.f. 2.7.2 Modeling and Testing Complex Interactions Note: Interactions will be misleading if main effects are not properly modeled (M. Zhang et al. (2020)). Suppose \(X_1\) binary or linear, \(X_2\) continuous: \[\begin{array}{ccc} C(Y|X) & = & \beta_{0}+\beta_{1}X_{1}+\beta_{2}X_{2}+\beta_{3}X_{2}'+\beta_{4}X_{2}'' \\ \nonumber &&+\beta_{5}X_{1}X_{2}+\beta_{6}X_{1}X_{2}'+\beta_{7}X_{1}X_{2}'' \end{array}\] Simultaneous test of linearity and additivity: \(H_{0}: \beta_{3} = \ldots = \beta_{7} = 0\). • 2 continuous variables: could transform separately and form simple product • But transformations depend on whether interaction terms adjusted for, so it is usually not possible to estimate transformations and interaction effects other than simultaneously • Compromise: Fit interactions of the form \(X_{1} f(X_{2})\) and \(X_{2} g(X_{1})\): \[\begin{array}{ccc} C(Y|X) & = & \beta_{0}+\beta_{1}X_{1}+\beta_{2}X_{1}'+\beta_{3}X_{1}'' \nonumber\\ &+& \beta_{4}X_{2}+\beta_{5}X_{2}'+\beta_{6}X_{2}'' \nonumber\\ &+& \beta_{7}X_{1}X_{2}+\beta_ {8}X_{1}X_{2}'+\beta_{9}X_{1}X_{2}'' \\ &+& \beta_{10}X_{2}X_{1}'+\beta_{11}X_{2}X_{1}'' \nonumber \end{array} \tag{2.3}\] • Test of additivity is \(H_{0}: \beta_{7} = \beta_{8} = \ldots = \beta_{11} = 0\) with 5 d.f. • Test of lack of fit for the simple product interaction with \(X_{2}\) is \(H_{0}: \beta_{8} = \beta_{9} = 0\) • Test of lack of fit for the simple product interaction with \(X_{1}\) is \(H_{0}: \beta_{10} = \beta_{11} = 0\) General spline surface: • Cover \(X_{1} \times X_{2}\) plane with grid and fit patch-wise cubic polynomial in two variables • Restrict to be of form \(aX_{1}+bX_{2}+cX_{1}X_{2}\) in corners • Uses all \((k-1)^{2}\) cross-products of restricted cubic spline terms • See Gray (1992), Gray (1994) for penalized splines allowing control of effective degrees of freedom. See Berhane et al. (2008) for a good discussion of tensor splines. Figure 2.6: Logistic regression estimate of probability of a hemorrhagic stroke for patients in the GUSTO-I trial given \(t\)-PA, using a tensor spline of two restricted cubic splines and penalization (shrinkage). Dark (cold color) regions are low risk, and bright (hot) regions are higher risk. Figure 2.6 is particularly interesting because the literature had suggested (based on approximately 24 strokes) that pulse pressure was the main cause of hemorrhagic stroke whereas this flexible modeling approach (based on approximately 230 strokes) suggests that mean arterial blood pressure (roughly a \(45^\circ\) line) is what is most important over a broad range of blood pressures. At the far right one can see that pulse pressure (axis perpendicular to \(45^\circ\) line) may have an impact although a non-monotonic one. Other issues: • \(Y\) non-censored (especially continuous) \(\rightarrow\) multi-dimensional scatterplot smoother (Chambers & Hastie (1992)) • Interactions of order \(>2\): more trouble • 2-way interactions among \(p\) predictors: pooled tests • \(p\) tests each with \(p-1\) d.f. Some types of interactions to pre-specify in clinical studies: • Treatment \(\times\) severity of disease being treated • Age \(\times\) risk factors • Age \(\times\) type of disease • Measurement \(\times\) state of a subject during measurement • Race \(\times\) disease • Calendar time \(\times\) treatment • Quality \(\times\) quantity of a symptom • Measurement \(\times\) amount of deterioration of the measurement The last example is worth expanding as an example in model formulation. Consider the following study. • A sample of patients seen over several years have a blood sample taken at time of hospitalization • Blood samples are frozen • Long after the last patient was sampled, the blood samples are thawed all in the same week and a blood analysis is done • It is known that the quality of the blood analysis deteriorates roughly logarithmically by the age of the sample; blood measurements made on old samples are assumed to be less predictive of • This is reflected in an interaction between a function of sample age and the blood measurement B^7 • Patients were followed for an event, and the outcome variable of interest is the time from hospitalization to that event • To not assume a perfect logarithmic relationship for sample age on the effect of the blood measurement, a restricted cubic spline model with 3 default knots will be fitted for log sample age • Sample age is assumed to not modify the effects of non-blood predictors patient age and sex • Model may be specified the following way using the R rms package to fit a Cox proportional hazards model • Test for nonlinearity of sampleAge tests the adequacy of assuming a plain logarithmic trend in sample age ^7 For continuous \(Y\) one might need to model the residual variance of \(Y\) as increasing with sample age, in addition to modeling the mean function. The B \(\times\) sampleAge interaction effects have 6 d.f. and tests whether the sample deterioration affects the effect of B. By not assuming that B has the same effect for old samples as for young samples, the investigator will be able to estimate the effect of B on outcome when the blood analysis is ideal by inserting sampleAge = 1 day when requesting predicted values as a function of B. 2.7.3 Fitting Ordinal Predictors • Small no. categories (3-4) \(\rightarrow\) polytomous factor, indicator variables • Design matrix for easy test of adequacy of initial codes \(\rightarrow\) \(k\) original codes + \(k-2\) indicators • More categories \(\rightarrow\) score using data-driven trend. Later tests use \(k-1\) d.f. instead of 1 d.f. • E.g., compute logit(mortality) vs. category • Much better: used penalized maximum likelihood estimation (R ordSmooth package) or Bayesian shrinkage (R brms package). 2.7.4 Distributional Assumptions • Some models (e.g., logistic): all assumptions in \(C(Y|X)=X\beta\) (implicitly assuming no omitted variables!) • Linear regression: \(Y \sim X\beta + \epsilon, \epsilon \sim n(0,\sigma^{2})\) • Examine distribution of residuals • Some models (Weibull, Cox (1972)): \(C(Y|X) = C(Y=y|X) = d(y)+X\beta\) \(C =\) log hazard • Check form of \(d(y)\) • Show \(d(y)\) does not interact with \(X\) 2.8 Complex Curve Fitting Example • Restricted cubic spline • Discontinuity (interrupted time series analysis) • Cyclic trend (seasonality) • Data from academic.oup.com/ije/article/46/1/348/2622842 by Bernal et al. (2017) • Rates of hospital admissions for acute coronary events in Sicily before and after a smoking ban on 2005-01 • Poisson regression on case counts, adjusted for population size as an offset variable (analyzes event rate) (see stats.stackexchange.com/questions/11182) • Classic interrupted time series puts a discontinuity at the intervention point and assesses statistical evidence for a nonzero jump height • We will do that and also fit a continuous cubic spline but with multiple knots near the intervention point • All in context of long-term and seasonal trends • Uses the rms package gTrans function documented at hbiostat.org/R/rms/gtrans.html • Can do this without gTrans but harder to plot predicted values, get contrasts, and get chunk tests • Time variable is months after 2001-12-01 Start with a standard restricted cubic spline fit, 6 knots at default quantile locations. From the fitted Poisson model we estimate the number of cases per a constant population size of 100,000. g <- function(x) exp(x) * 100000 off <- list(stdpop=mean(d$stdpop)) # offset for prediction (383464.4) w <- geom_point(aes(x=time, y=rate), data=d) v <- geom_vline(aes(xintercept=37, col=I('red'))) yl <- ylab('Acute Coronary Cases Per 100,000') f <- Glm(aces ~ offset(log(stdpop)) + rcs(time, 6), data=d, family=poisson) • To add seasonality to the model can add sine and/or cosine terms • See pmean.com/07/CyclicalTrends.html by Steve Simon • If you knew the month at which incidence is a minimum, could just add a sine term to the model • Adding both sine and cosine terms effectively allows for a model parameter that estimates the time origin • Assume the period (cycle length) is 12m [1] 5.00 14.34 24.78 35.22 45.66 55.00 Code Code Next add more knots near intervention to allow for sudden change Now make the slow trend simpler (6 knots) but add a discontinuity at the intervention. More finely control times at which predictions are requested, to handle discontinuity. Look at fit statistics especially evidence for the jump General Linear Model Glm(formula = aces ~ offset(log(stdpop)) + gTrans(time, h), family = poisson, data = d, x = TRUE, y = TRUE) Model Likelihood Ratio Test Obs59 LR χ^2169.64 Residual d.f.51 d.f.7 g0.080 Pr(>χ^2)<0.0001 β S.E. Wald Z Pr(>|Z|) Intercept -6.2118 0.0095 -656.01 <0.0001 time 0.0635 0.0113 5.63 <0.0001 time' -0.1912 0.0433 -4.41 <0.0001 time'' 0.2653 0.0760 3.49 0.0005 time''' -0.2409 0.0925 -2.61 0.0092 sin 0.0343 0.0067 5.11 <0.0001 cos 0.0380 0.0065 5.86 <0.0001 jump -0.1268 0.0313 -4.06 <0.0001 • Evidence for an intervention effect (jump) • Evidence for seasonality • Could have analyzed rates using a semiparametric model Compute likelihood ratio \(\chi^2\) test statistics for this model Likelihood Ratio Statistics for aces χ^2 d.f. P time 169.64 7 <0.0001 Nonlinear 127.03 6 <0.0001 TOTAL 169.64 7 <0.0001 Get a joint LR test of seasonality and discontinuity by omitting 3 terms from the model 2.9 Bayesian Modeling There are many advantages to fitting models with a Bayesian approach when compared to the frequentist / maximum likelihood approach that receives more coverage in this text. These advantages include • the ability to use outside information, e.g. the direction or magnitude of an effect, magnitude of interaction effect, degree of nonlinearity • getting exact inference (to within simulation error) regarding model parameters without using any Gaussian approximations as used so heavily in frequentist inference • getting exact inference about derived parameters that are nonlinear transformations of the original model parameters, without using the \(\delta\)-method approximation so often required in frequentist procedures • devoting less than one degree of freedom to a parameter by borrowing information • getting exact inference using ordinary Bayesian procedures when penalization/shrinkage/regularization is used to limit overfitting • allowing one to fit distributional models, e.g., using a regression model for the log standard deviation in addition to one for the mean, and getting exact inference for the joint models • obtaining automatic inference about any interval of possible parameter values instead of just using \(p\)-values to bring evidence against a null value • obtaining exact inference about unions and intersections of various assertions about parameters, e.g., the probability that a treatment reduces mortality by any amount or reduces blood pressure by \(\geq 5\) mmHg • getting exact uncertainty intervals for complex derived parameters such as the \(c\)-index or Somers’ \(D_{xy}\) that quantify the model’s predictive discrimination Bayesian approaches do not tempt analysts to mistakenly assume that the central limit theorem protects them.The \(\delta\)-method fails when the sampling distribution of the parameter is not symmetric.It is just as easy to compute the Bayesian probability that an odds ratio exceeds 2.0 as it is to calculate the probability that the odds ratio exceeds 1.0. Uncertainies in Model Performance Metrics As seen in example output form the blrm function below, one can automatically obtain highest posterior density uncertainty intervals for any parameter including overall model performance metrics. These are derived from the \(m\) posterior draws of the model’s parameters by computing the model performance metric for each draw. The uncertainties captured by this process relate to the ability to well-estimate model parameters, which relates also to within-training-sample model fit. So the uncertainties reflect a similar phenomenon to what \(R^{2}_\text{adj}\) measures. Adjusted \(R^2\) other than McFadden’s penalize for \(p\), the number of regression parameters estimated, other than the intercept. This is very similar to considering likelihood ratio \(\chi^2\) statistics minus the number of degrees of freedom involved in the LR test. On the other hand, AIC approximates out-of-sample model performance by using a penalty of twice the degrees of freedom (like the seldom-used McFadden \(R^{2}_\text{adj}\)) So uncertainties computed by the blrm function come solely from the spread of the posterior distributions, i.e., the inability of the analysis to precisely estimate the unknown parameters. They condition on the observed design matrix \(X\) and do not consider other samples as would be done with out-of-sample predictive accuracy assessment with AIC, the bootstrap, or cross-validation. When \(p=1\) a rank measure of predictive discrimination such as \(D_{xy}\) will have no uncertainty unless the sign of the one regression coefficient often flips over posterior draws. A major part of the arsenal of Bayesian modeling weapons is Stan based at Columbia University. Very general R statistical modeling packages such as brms and rstanarm are based on Stan. RMS has several fully worked-out Bayesian modeling case studies. The purpose of the remainder of this chapter is to show the power of Bayes in general regression modeling. 2.9.1 A General Model Specification Approach With a Bayesian approach one can include a parameter for each aspect of the model you know exists but are unsure about. This leads to results that are not overly optimistic, because uncertainty intervals will be a little wider to acknowledge what you don’t know. A good example is the Bayesian \(t\)-test, which has a parameter for the amount of non-normality and a parameter for how unequal the variances may be in the two groups being compared. Prior distributions can favor normality and equal variance, and modeling becomes more flexible as \(n \uparrow\). Other examples of borrowing information and adding flexibility: • include a parameter for time-varying treatment effect to not assume proportional hazards • include a parameter for a \(Y\)-varying effect to not assume proportional odds • include an interaction effect for a treatment \(\times\) sex interaction and use a prior that favors the treatment effect for females being simlar to the treatment effect for males but that allows the effects to become arbitrarily different as \(n \uparrow\). Interactions bring special problems to estimation and inference. In the best of cases, an interaction requires \(4 \times\) larger sample size to estimate and may require \(16 \times\) the sample size to achieve the same power as a main effect test. We need a way to borrow information, essentially having an interaction term “half in” and “half out” of the model. This has been elegantly described by R. Simon & Freedman (1997) who show how to put priors on interaction terms. Using a Bayesian approach to have an interaction “half in” the model is a much more rational approach than prevailing approaches that • use a sample that was sized for estimating main effects and not interaction effects • use a (low power) test for interaction to decide how to analyze the treatment effect, and ignore such pre-testing when doing a statistical test (that will not preserve \(\alpha\)) or computing a confidence interval (that will not have the nominal \(1 - \alpha\) coverage) • use the pre-specified model with interaction, resulting in low precision because of having no way to borrow information across levels of the interacting factor 2.9.2 Help in Specifying Priors To gain the advantages of Bayesian modeling described above, doing away with binary decisions and allowing the use of outside information, one must specify prior distributions for parameters. It is often difficult to do this, especially when there are nonlinear effects (e.g., splines) and interactions in the model. We need a way to specify priors on the original \(X\) and \(Y\) scales. Fortunately Stan provides an elegant solution. As discussed here Stan allows one to specify priors on transformations of model parameters, and these priors propagate back to the original parameters. It is easier to specify a prior for the effect of increasing age from 30 to 60 that it is to specify a prior for the age slope. It may be difficult to specify a prior for an age \(\times\) treatment interaction (especially when the age effect is nonlinear), but much easier to specify a prior for how different the treatment effect is for a 30 year old and a 60 year old. By specifying priors on one or more contrasts one can easily encode outside information / information borrowing / shrinkage. The rms contrast function provides a general way to implement contrasts up to double differences, and more details about computations are provided in that link. The approach used for specifying priors for contrast in rmsb::blrm uses the same process but is even more general. Both contrast and blrm compute design matrices at user-specified predictor settings, and the contrast matrices (matrices multipled by \(\hat{\beta}\)) are simply differences in such design matrices. Thinking of contrasts as differences in predicted values frees the user from having to care about how parameters map to estimands, and allows an R predict(fit, type='x') function do the hard work. Examples of types of differences are below. rmsb implements priors on contrasts starting with version 1.0-0. • no difference: compute an absolute predicted value (not implemented in blrm for priors) • single difference □ treatment main effect □ continuous predictor effects computed by subtracting predictions at two values of the predictor • double difference □ amount of nonlinearity (differences in slopes over intervals of the predictor) □ interaction effect (e.g., age slope for females minus age slope for males) • triple difference □ amount of nonlinearity in an interaction effect For predictors modeled linearly, the slope is the regression coefficient. For nonlinear effects where \(x\) is transformed by \(f(x)\), the slope at \(x=\frac{a+b}{2}\) is proportionally approximated by \(f(b) - f(a)\), and the slope at \(x=\frac{b+c}{2}\) by \(f(c) - f(b)\). The amount of nonlinearity is reflected by the difference in the two slopes, or \(f(c) - f(b) -[f(b) - f(a)] = f(a) + f(c) - 2f(b)\). You’ll see this form specified in the contrast part of the pcontrast argument to blrm below. This is a numerical approximation to the second derivative (slope of the slope; acceleration). It would be easy to use more accurate Lagrange interpolation derivative approximators here. 2.9.3 Examples of Priors on Contrasts Semiparametic models are introduced in Chapter 13 but we will use one of the models—the proportional odds (PO) ordinal logistic model—in showcasing the utility of specifying priors on contrasts in order to use external information or to place restrictions on model fits. The blrm function in the rmsb package implements this semiparametric model using Stan. Because it does not depend on knowing how to transform \(Y\), I almost always use the more robust ordinal models instead of linear models. The linear predictor \(X\beta\) is on the logit (log odds) scale for the PO model. This unitless scale typically ranges from -5 to 5, corresponding to a range of probabilities of 0.007 to 0.993. Default plotting uses the intercept corresponding to the marginal median of \(Y\), so the log odds of the probability that \(Y\) exceeds or equals this level, given \(X\), is plotted. Estimates can be converted to means, quantiles, or exceedance probabilities using the Mean, Quantile, and ExProb functions in the rms and rmsb packages. Ordinal models in the cumulative probability class such as the PO model have \(k\) intercepts for \(k+1\) distinct values of \(Y\). These intercepts encode the entire empirical distribution of \(Y\) for one covariate setting, hence the term semiparametric and freedom from having to choose a \(Y\) distribution. Effects for the PO model are usually expressed as odds ratios (OR). For the case where the prior median for the OR is 1.0 (prior mean or median log(OR)=0.0) it is useful to solve for the prior SD \(\ sigma\) so that \(\Pr(\text{OR} > r) = a = \Pr(\text{OR} < \frac{1}{r})\), leading to \(a = \frac{|\log(r)|}{\Phi^{-1}(1-a)}\), computed by the psigma function below. Another function . is defined as an abbreviation for list() for later usage. psigma <- function(r, a, inline=FALSE, pr=! inline) { sigma <- abs(log(r)) / qnorm(1 - a) dir <- if(r > 1.) '>' else '<' x <- if(inline) paste0('$\\Pr(\\text{OR}', dir, r, ') =', a, ' \\Rightarrow \\sigma=', round(sigma, 3), '$') else paste0('Pr(OR ', dir, ' ', r, ') = ', a, ' ⇒ σ=', round(sigma, 3)) if(inline) return(x) if(pr) { cat('\n', x, '\n\n', sep='') . <- function(...) list(...) Start with a simple hypothetical example: • model has a quadratic effect of age • age interacts with treatment • sex is also in the model, not interacting with anything We wish to specify a prior on the treatment effect at age 50 so that there is only a 0.05 chance that the \(\text{OR} < 0.5\). \(\Pr(\text{OR}<0.5) =0.05 \Rightarrow \sigma=0.421\). The covariate settings specified in pcontrast below do not mention sex, so predictions are evaluated at the default sex (the mode). Since sex does not interact with anything, the treatment difference of interest makes the sex setting irrelevant anyway. Note that the notation needed for pcontrast need not consider how age is modeled. Consider a more complicated situation. Let’s simulate data for one continuous predictor where the true model is a sine wave. The response variable is a slightly rounded version of a conditionally normal \(Y\). The rounding is done just to lower the number of intercepts from 199 to 52 to speed up the Bayesian PO model fits. options(mc.cores=4, # See https://hbiostat.org/r/examples/blrm/blrm rmsb.backend='cmdstan', rmsbdir='~/.rmsb', # cmdstan.loc is defined in ~/.Rprofile n <- 200 x <- rnorm(n) y <- round(sin(2*x) + rnorm(n), 1) dd <- datadist(x, q.display=c(.005, .995)); options(datadist='dd') f <- blrm(y ~ rcs(x, 6)) Running MCMC with 4 parallel chains... Chain 1 finished in 1.1 seconds. Chain 2 finished in 1.1 seconds. Chain 3 finished in 1.1 seconds. Chain 4 finished in 1.1 seconds. All 4 chains finished successfully. Mean chain execution time: 1.1 seconds. Total execution time: 1.3 seconds. Bayesian Proportional Odds Ordinal Logistic Model Dirichlet Priors With Concentration Parameter 0.052 for Intercepts blrm(formula = y ~ rcs(x, 6)) Mixed Calibration/ Discrimination Rank Discrim. Discrimination Indexes Indexes Indexes Obs200 LOO log L-785.46±14.44 g1.415 [1.075, 1.718] C0.699 [0.691, 0.704] Draws4000 LOO IC1570.91±28.88 g[p]0.295 [0.253, 0.348] D[xy]0.397 [0.383, 0.409] Chains4 Effective p76.37±6.65 EV0.277 [0.179, 0.365] Time1.7s B0.197 [0.193, 0.203] v1.579 [0.905, 2.293] p5 vp0.069 [0.043, 0.089] Mode β Mean β Median β S.E. Lower Upper Pr(β>0) Symmetry x -3.1223 -3.1041 -3.1188 0.7941 -4.7295 -1.5537 0.0000 1.09 x' 17.0372 16.9141 16.9973 7.9323 2.7141 33.4192 0.9830 1.03 x'' -20.1302 -19.6242 -19.5780 37.3180 -93.7603 48.7925 0.3030 1.00 x''' -37.4103 -37.9721 -38.7512 57.5486 -150.9883 73.5840 0.2562 1.01 x'''' 50.7703 50.7299 51.1767 49.3163 -47.9964 146.4685 0.8490 0.99 Code Code Now suppose that there is strong prior knowledge that the effect of x is linear when x is in the interval \([-1, 0]\). Let’s reflect that by putting a very sharp prior to tilt the difference in slopes within that interval towards 0.0. pcontrast= specifies two separate contrasts to pull towards zero to more finely detect nonlinearity. The examples that follow use atypically small prior standard deviations so that constraints will be obvious. Pr(OR > 1.05) = 0.01 ⇒ σ=0.021 Running MCMC with 4 parallel chains... Chain 1 finished in 2.2 seconds. Chain 2 finished in 2.2 seconds. Chain 3 finished in 2.2 seconds. Chain 4 finished in 2.2 seconds. All 4 chains finished successfully. Mean chain execution time: 2.2 seconds. Total execution time: 2.3 seconds. Bayesian Proportional Odds Ordinal Logistic Model Dirichlet Priors With Concentration Parameter 0.052 for Intercepts blrm(formula = y ~ rcs(x, 6), pcontrast = con) Mixed Calibration/ Discrimination Rank Discrim. Discrimination Indexes Indexes Indexes Obs200 LOO log L-800.23±14.9 g0.933 [0.633, 1.223] C0.658 [0.646, 0.684] Draws4000 LOO IC1600.46±29.8 g[p]0.206 [0.149, 0.26] D[xy]0.317 [0.291, 0.369] Chains4 Effective p74.37±6.48 EV0.14 [0.067, 0.208] Time2.7s B0.214 [0.208, 0.22] v0.736 [0.329, 1.201] p5 vp0.035 [0.017, 0.052] Mode β Mean β Median β S.E. Lower Upper Pr(β>0) Symmetry x 0.2739 0.2729 0.2743 0.3194 -0.3734 0.8792 0.8048 1.02 x' 1.3744 1.3716 1.3814 0.9138 -0.4345 3.1143 0.9330 0.95 x'' -6.0063 -6.0101 -6.0694 3.4247 -12.3450 0.7771 0.0437 1.08 x''' 24.1414 24.3565 24.4090 10.5960 3.2711 44.7410 0.9895 1.01 x'''' -65.1747 -65.9951 -66.4852 21.9036 -107.2615 -20.6480 0.0008 0.99 Contrasts Given Priors [1] list(sd = c(0.0209728582358081, 0.0209728582358081), c1 = list( [2] x = -1), c2 = list(x = -0.75), c3 = list(x = -0.5), c4 = list( [3] x = -0.25), c5 = list(x = 0), contrast = expression(c1 + [4] c3 - 2 * c2, c3 + c5 - 2 * c4)) rcs(x, 6)x rcs(x, 6)x' rcs(x, 6)x'' rcs(x, 6)x''' rcs(x, 6)x'''' 1 0 0.02911963 0.002442679 0.00000000 0 1 0 0.04851488 0.020842754 0.00300411 0 What happens if we moderately limit the acceleration (second derivative; slope of the slope) at 7 equally-spaced points? con <- list(sd=rep(0.5, 7), c1=.(x=-2), c2=.(x=-1.5), c3=.(x=-1), c4=.(x=-.5), c5=.(x=0), c6=.(x=.5), c7=.(x=1), c8=.(x=1.5), c9=.(x=2), contrast=expression(c1 + c3 - 2 * c2, c2 + c4 - 2 * c3, c3 + c5 - 2 * c4, c4 + c6 - 2 * c5, c5 + c7 - 2 * c6, c6 + c8 - 2 * c7, c7 + c9 - 2 * c8) ) f <- blrm(y ~ rcs(x, 6), pcontrast=con) Running MCMC with 4 parallel chains... Chain 2 finished in 1.0 seconds. Chain 1 finished in 1.0 seconds. Chain 3 finished in 1.1 seconds. Chain 4 finished in 1.1 seconds. All 4 chains finished successfully. Mean chain execution time: 1.0 seconds. Total execution time: 1.3 seconds. Bayesian Proportional Odds Ordinal Logistic Model Dirichlet Priors With Concentration Parameter 0.052 for Intercepts blrm(formula = y ~ rcs(x, 6), pcontrast = con) Mixed Calibration/ Discrimination Rank Discrim. Discrimination Indexes Indexes Indexes Obs200 LOO log L-786.24±13.9 g1.066 [0.791, 1.335] C0.695 [0.684, 0.704] Draws4000 LOO IC1572.48±27.79 g[p]0.238 [0.189, 0.288] D[xy]0.389 [0.367, 0.408] Chains4 Effective p74.18±6.39 EV0.18 [0.113, 0.261] Time1.7s B0.198 [0.193, 0.203] v0.897 [0.493, 1.384] p5 vp0.045 [0.028, 0.065] Mode β Mean β Median β S.E. Lower Upper Pr(β>0) Symmetry x -2.0432 -2.0209 -2.0230 0.6279 -3.2044 -0.6882 0.0003 1.07 x' 12.6730 12.6258 12.5548 4.9519 2.3065 21.7363 0.9958 1.01 x'' -21.0727 -21.1349 -20.9773 21.6269 -63.2009 21.0176 0.1665 0.99 x''' -9.4519 -9.0149 -9.0901 32.2901 -70.6231 53.9443 0.3902 1.00 x'''' 14.4344 13.8028 13.7531 28.5098 -41.8008 70.0044 0.6845 1.05 Contrasts Given Priors [1] list(sd = c(0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5), c1 = list(x = -2), [2] c2 = list(x = -1.5), c3 = list(x = -1), c4 = list(x = -0.5), [3] c5 = list(x = 0), c6 = list(x = 0.5), c7 = list(x = 1), c8 = list( [4] x = 1.5), c9 = list(x = 2), contrast = expression(c1 + [5] c3 - 2 * c2, c2 + c4 - 2 * c3, c3 + c5 - 2 * c4, c4 + [6] c6 - 2 * c5, c5 + c7 - 2 * c6, c6 + c8 - 2 * c7, c7 + [7] c9 - 2 * c8)) rcs(x, 6)x rcs(x, 6)x' rcs(x, 6)x'' rcs(x, 6)x''' rcs(x, 6)x'''' 1 0 0.01298375 0.000000000 0.000000000 0.000000000 1 0 0.07768802 0.002453429 0.000000000 0.000000000 1 0 0.15526901 0.045575694 0.003046185 0.000000000 1 0 0.23285000 0.122161511 0.048638084 0.001537246 1 0 0.30602473 0.196347206 0.122778948 0.037926277 1 0 0.24969458 0.170741274 0.117781863 0.055476864 1 0 0.06289905 0.043179908 0.029952920 0.014391805 Next simulate data with one continuous predictor x1 and a 3-level grouping variable x2. Start with almost flat priors that allow arbitrary interaction patterns as long as x1 has a smooth effect. Code Code Running MCMC with 4 parallel chains... Chain 2 finished in 0.5 seconds. Chain 1 finished in 0.6 seconds. Chain 3 finished in 0.6 seconds. Chain 4 finished in 0.5 seconds. All 4 chains finished successfully. Mean chain execution time: 0.5 seconds. Total execution time: 0.6 seconds. Bayesian Proportional Odds Ordinal Logistic Model Dirichlet Priors With Concentration Parameter 0.105 for Intercepts blrm(formula = y ~ rcs(x1, 4) * x2) Frequencies of Responses 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2 2.1 2.2 2.3 2.7 Mixed Calibration/ Discrimination Rank Discrim. Discrimination Indexes Indexes Indexes Obs90 LOO log L-251.19±9.55 g3.854 [3.103, 4.658] C0.847 [0.834, 0.86] Draws4000 LOO IC502.38±19.09 g[p]0.441 [0.408, 0.471] D[xy]0.694 [0.668, 0.72] Chains4 Effective p43.18±4.25 EV0.649 [0.543, 0.758] Time0.9s B0.11 [0.098, 0.121] v11.435 [7.287, 16.385] p11 vp0.161 [0.132, 0.19] Mode β Mean β Median β S.E. Lower Upper Pr(β>0) Symmetry x1 1.5725 1.5194 1.4650 4.7772 -8.1123 10.4638 0.6210 1.00 x1' -5.8973 -5.4386 -5.3742 15.7241 -37.4971 24.2191 0.3642 1.02 x1'' 44.1463 42.1596 41.9364 46.2025 -50.5379 130.7056 0.8225 0.95 x2=b -2.5975 -2.6163 -2.6116 1.5388 -5.7443 0.2554 0.0425 0.99 x2=c 6.5727 6.4505 6.4426 1.9814 2.6367 10.3823 0.9998 1.03 x1 × x2=b -0.1123 -0.2525 -0.3360 6.9047 -13.4689 13.5054 0.4828 0.98 x1' × x2=b 9.7404 10.3344 10.2014 22.1000 -33.7043 53.5533 0.6865 0.99 x1'' × x2=b -28.6650 -30.2328 -29.5727 63.3870 -155.5945 92.4755 0.3170 1.03 x1 × x2=c -11.4685 -11.1197 -11.1542 8.8466 -28.0706 7.3013 0.1008 1.00 x1' × x2=c 39.1944 37.7782 38.3079 27.0003 -15.5181 91.7781 0.9190 1.02 x1'' × x2=c -119.1338 -114.2335 -113.9769 74.3172 -263.5373 29.8254 0.0608 0.96 Put priors specifying that groups b and c have a similar x1-shape (no partial interaction between x1 and b vs. c). contrast below encodes parallelism with respect to b and c. con <- list(sd=rep(psigma(1.5, 0.05), 4), c1=.(x1=0, x2='b'), c2=.(x1=0, x2='c'), c3=.(x1=.25, x2='b'), c4=.(x1=.25, x2='c'), c5=.(x1=.5, x2='b'), c6=.(x1=.5, x2='c'), c7=.(x1=.75, x2='b'), c8=.(x1=.75, x2='c'), c9=.(x1=1, x2='b'), c10=.(x1=1, x2='c'), contrast=expression(c1 - c2 - (c3 - c4), # gap between b and c curves at x1=0 vs. x1=.25 c1 - c2 - (c5 - c6), c1 - c2 - (c7 - c8), c1 - c2 - (c9 - c10) )) Pr(OR > 1.5) = 0.05 ⇒ σ=0.247 Running MCMC with 4 parallel chains... Chain 2 finished in 1.6 seconds. Chain 1 finished in 1.7 seconds. Chain 3 finished in 1.7 seconds. Chain 4 finished in 2.0 seconds. All 4 chains finished successfully. Mean chain execution time: 1.7 seconds. Total execution time: 2.1 seconds. Bayesian Proportional Odds Ordinal Logistic Model Dirichlet Priors With Concentration Parameter 0.105 for Intercepts blrm(formula = y ~ rcs(x1, 4) * x2, pcontrast = con) Frequencies of Responses 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2 2.1 2.2 2.3 2.7 Mixed Calibration/ Discrimination Rank Discrim. Discrimination Indexes Indexes Indexes Obs90 LOO log L-252.58±9.51 g3.481 [2.749, 4.127] C0.844 [0.829, 0.855] Draws4000 LOO IC505.17±19.03 g[p]0.421 [0.384, 0.454] D[xy]0.687 [0.659, 0.71] Chains4 Effective p40.8±4.54 EV0.573 [0.475, 0.678] Time2.5s B0.109 [0.1, 0.125] v9.514 [5.82, 13.194] p11 vp0.142 [0.117, 0.169] Mode β Mean β Median β S.E. Lower Upper Pr(β>0) Symmetry x1 1.3880 1.3904 1.3546 4.6215 -7.7824 10.2990 0.6182 1.01 x1' -5.3234 -5.0778 -5.0930 15.2229 -33.6980 25.4461 0.3650 1.02 x1'' 39.7520 38.5396 38.3255 44.8915 -52.0455 124.3734 0.8045 0.98 x2=b -1.1474 -1.1701 -1.1665 1.3625 -3.8016 1.4959 0.1930 1.03 x2=c 4.2596 4.2179 4.1792 1.4570 1.3260 6.9370 0.9978 1.00 x1 × x2=b -4.8023 -4.7743 -4.6732 6.3315 -17.1917 7.5277 0.2255 0.97 x1' × x2=b 23.4119 23.0551 22.5829 20.1925 -16.8795 62.1125 0.8740 1.04 x1'' × x2=b -72.8426 -71.4570 -70.5729 57.7082 -188.5653 36.3407 0.1042 0.95 x1 × x2=c -4.6838 -4.6529 -4.6090 6.3572 -16.6446 8.1771 0.2343 0.97 x1' × x2=c 23.0069 22.6108 22.3320 20.2852 -16.0061 62.6346 0.8688 1.06 x1'' × x2=c -72.1962 -70.6461 -69.5391 57.9075 -193.3944 34.4799 0.1080 0.95 Contrasts Given Priors [1] list(sd = c(0.246505282576203, 0.246505282576203, 0.246505282576203, [2] 0.246505282576203), c1 = list(x1 = 0, x2 = "b"), c2 = list(x1 = 0, [3] x2 = "c"), c3 = list(x1 = 0.25, x2 = "b"), c4 = list(x1 = 0.25, [4] x2 = "c"), c5 = list(x1 = 0.5, x2 = "b"), c6 = list(x1 = 0.5, [5] x2 = "c"), c7 = list(x1 = 0.75, x2 = "b"), c8 = list(x1 = 0.75, [6] x2 = "c"), c9 = list(x1 = 1, x2 = "b"), c10 = list(x1 = 1, [7] x2 = "c"), contrast = expression(c1 - c2 - (c3 - c4), c1 - [8] c2 - (c5 - c6), c1 - c2 - (c7 - c8), c1 - c2 - (c9 - c10))) rcs(x1, 4)x1 rcs(x1, 4)x1' rcs(x1, 4)x1'' x2b x2c rcs(x1, 4)x1:x2b 1 0 0 0 0 0 -0.25 1 0 0 0 0 0 -0.50 1 0 0 0 0 0 -0.75 1 0 0 0 0 0 -1.00 rcs(x1, 4)x1':x2b rcs(x1, 4)x1'':x2b rcs(x1, 4)x1:x2c rcs(x1, 4)x1':x2c 1 -0.006089308 0.000000000 0.25 0.006089308 1 -0.091848739 -0.002089092 0.50 0.091848739 1 -0.372867191 -0.062281932 0.75 0.372867191 1 -0.879577838 -0.236978793 1.00 0.879577838 rcs(x1, 4)x1'':x2c 1 0.000000000 1 0.002089092 1 0.062281932 1 0.236978793 2.10 Study Questions 1. Contrast statistical models vs. machine learning in a few ways Section 2.1-2.2 1. What is always linear in the linear model? Section 2.3 1. Define an interaction effect in words 2. In a model with main effects and two-way interactions, which regression parameter is the most “universal” by virtue of being most independent of coding? 3. When a predictor is involved in an interaction, which test of association involving that predictor is preferred? 4. What are two ways of doing chunk (composite) tests? 5. An analyst intends to use least squares multiple regression to predict follow-up blood pressure from baseline blood pressure, age, and sex. She wants to use this model for estimation, prediction, and inference (statistical tests and confidence limits). List all of the assumptions she makes by fitting the model \(Y = \beta_{0} + \beta_{1} \text{bp} + \beta_{2} \text{age} + \beta_{3} [\text 6. List as many methods as you can for checking the assumptions you listed. Section 2.4.1 1. Provide an example where categorizing a continuous predictor is a good idea 2. If you dichotomize a continuous predictor that has a smooth relationship with Y, how can you arbitrarily change its estimated regression coefficients? 3. What is a general term used for the statistical problem caused by categorizing continuous variables? 4. What tends to happen when data are used to find cutpoints for relationships that are in fact smooth? 5. What other inferential distortion is caused by categorization? 6. Is there an amount of noise that could be added to a continuous X that makes a categorized analysis better than a continuous one? Section 2.4.2 1. What is the biggest drawback to using regular polynomials to fit smooth nonlinear relationships? 2. Why does a spline function require one to use a truncated term? 3. What advantages does a linearly tail-restricted cubic spline have? 4. Why are its constructed variables X’ X’’ X’’’ etc. so complex? 5. What do you need to do to handle a sudden curvature change when fitting a nonlinear spline? 6. Barring a sudden curvature change, why are knot locations not so important in a restricted cubic spline (rcs)? 7. What is the statistical term for the delimma that occurs when choosing the number of knots? 8. What is a Bayesian way of handling this? 9. What is the core idea behind Cleveland’s loess smoother? 10. Can fitted spline functions for some of the predictors be an end in themselves? Section 2.5 1. What is the worst thing about recursive partitioning? 2. What is a major tradeoff between the modeling strategy we will largely use and ML? 3. What are some determinants of the best choice between SM and ML? Section 2.6 1. What is the main message of the Grambsch and O’Brien paper? 2. What notational error do many analysts make that lead them to ignore model uncertainty? Section 2.7.2 1. Why is it a good idea to pre-specify interactions, and how is this done? 2. Why do we not fiddle with transformations of a predictor before considering interactions? 3. What 3-dimensional restriction does the cross-products of two restricted cubic splines force? 4. What is an efficient and likely-to-fit approach to allowing for the effect of measurement degradement?
{"url":"https://hbiostat.org/rmsc/genreg","timestamp":"2024-11-07T06:02:35Z","content_type":"application/xhtml+xml","content_length":"577418","record_id":"<urn:uuid:65ec8874-9fb1-43fc-87f9-53ae56645fcd>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00403.warc.gz"}
In Question 4, point C is called a mid-point of line segment AB. Prove that every line segment has one and only one mid-point.Q.5 - Ask TrueMaths!In Question 4, point C is called a mid-point of line segment AB. Prove that every line segment has one and only one mid-point.Q.5 This is very important question of ncert class 9^th of chapter Introduction To Euclid’s Geometry . How I solve the best solution of exercise 5.1 question number 5. Please help me to solve this in a easy and best way.In Question 4, point C is called a mid-point of line segment AB. Prove that every line segment has one and only one mid-point.
{"url":"https://ask.truemaths.com/question/in-question-4-point-c-is-called-a-mid-point-of-line-segment-ab-prove-that-every-line-segment-has-one-and-only-one-mid-point-q-5/","timestamp":"2024-11-14T00:59:04Z","content_type":"text/html","content_length":"128268","record_id":"<urn:uuid:c96b51b9-2e42-49eb-ac34-260766db6d63>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00124.warc.gz"}
A spinner is divided into three equal sections labeled a B and C. Erik spins the spinner and rolls a fair number cube. The tree diagram shows the sample space for the possible outcomes. A spinner is divided into three equal sections labeled a B and C. Erik spins the spinner and rolls a fair number cube. The tree diagram shows the sample space for the possible outcomes. Which of the following statements is correct? The probability of the spinner landing on B or rolling a 4 is 1/8 The probability of the spinner not landing on C and rolling a 3 is 13/18 The probability of the spinner landing on an A or rolling a number greater than two is 2/9 The probability of the spinner not landing on A and rolling a prime number is 1/3 Find an answer to your question 👍 “A spinner is divided into three equal sections labeled a B and C. Erik spins the spinner and rolls a fair number cube. The tree diagram ...” in 📗 Mathematics if the answers seem to be not correct or there’s no answer. Try a smart search to find answers to similar questions. Search for Other Answers
{"url":"https://cpep.org/mathematics/1147188-a-spinner-is-divided-into-three-equal-sections-labeled-a-b-and-c-erik-.html","timestamp":"2024-11-09T04:04:29Z","content_type":"text/html","content_length":"24426","record_id":"<urn:uuid:1a935b6a-2f0b-4240-81d3-4c07895da49f>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00150.warc.gz"}
Factor Investing A new approach known as factor investing has recently emerged in investment practice, which recommends that allocation decisions be expressed in terms of risk factors, as opposed to standard asset class decompositions. While intuitively appealing, this approach poses a major challenge, namely the choice of the meaningful factors and the corresponding investable proxies. Simply put, factor investing proposes to regard each constituent in an investor’s portfolio, and therefore the whole portfolio, as a bundle of factor exposures. Obviously, factor models, such as those of Sharpe (1963) and Fama and French (1993), have long been used for performance measurement purposes, and several factors correspond to classical investment styles, such as value-growth investing, trend following or short volatility, that were in use in the industry before they were formally identified as asset pricing factors. In this context, the question arises as to whether factor investing is truly a new welfare-improving investment paradigm or merely another marketing fad. A first remark is that if there are as many factors as individual securities and the factors are themselves portfolios of such securities, then thinking in terms of factors is strictly equivalent to thinking in terms of asset classes, and therefore would not add any value. More relevant is the situation where a parsimonious factor model is used, with a number of factors smaller than the number of constituents. The first challenge posed for investors who decide to express their decisions in terms of factor exposures is then the identification of meaningful factors. In this perspective, the theoretical section of recent research that we have conducted as part of the Lyxor “Risk Allocation Solutions” research chair at EDHEC-Risk Institute reviews the academic literature on asset pricing and makes a list of conditions that such factors should satisfy. We then survey a (vast) empirical literature in order to identify the most consensual factors in three major asset classes, namely stocks, bonds and commodities. The second challenge in factor investing is the implementation of decisions in a cost-efficient way with investable proxies. On the empirical front, our paper provides an analysis of the welfare gains that can be expected from the use of proxies for factors, and discusses the choice of long-only versus long-short factors, which is relevant for many investors given that they are not allowed to take short positions. In asset pricing theory, the relevant and important factors are the “pricing factors,” the exposures to which explain all differences in expected returns across assets. Asset pricing theory makes a distinction between “pricing factors,” which explain differences in expected returns across assets, and “priced factors,” which earn a premium over the long run. The theory, exposed in the textbooks of Duffie (2001) and Cochrane (2005), expresses the risk premium on an asset, i.e., the expected return on this asset in excess of the risk-free rate, as a function of the covariance between the payoff and an abstract quantity known as the stochastic discount factor. The goal of theoretical and empirical asset pricing models is to find a representation for this random variable in terms of economically interpretable variables. For instance, in consumption-based models, the stochastic discount factor is proportional to the marginal utility of consumption. This conveys an important economic intuition: risk premia exist as rewards required by investors in exchange for holding assets that have a low payoff in “bad times,” defined as times where investors’ wealth is low and marginal utility is high. Pricing factors arise when one attempts to find observable proxies for the aggregate investor’s marginal utility. In a factor model, the risk premium on an asset is a linear combination of the factor risk premia, weighted by the betas of the asset with respect to the factor. As a consequence, all alphas are zero and the cross-sectional differences between expected returns are entirely explained by the differences in factor exposures. As for an asset, the premium of a factor is determined by its covariance with the stochastic discount factor, so that a factor deserves a positive premium if, and only if, it is high in “bad times” and low in “good times.” A factor is said to be “priced” if it has a non-zero premium. It can be shown that there is no loss of generality from searching for pricing factors among returns, but further assumptions are needed to identify their economic nature. Two main classes of theoretical models have been developed to this end. A first category uses economic equilibrium arguments. In the static Capital Asset Pricing Model (CAPM) of Sharpe (1964), the only factor, or “market factor,” is the return on aggregate wealth. The intertemporal version (ICAPM) of Merton (1973) adds as new factors the variables that predict changes in expected returns and volatilities. A second class of models refers to the Arbitrage Pricing Theory (APT) of Ross (1976) and characterizes factors as variables that explain returns from a statistical standpoint. One of the questions studied in the recent asset pricing literature is whether the factors proposed in empirical asset pricing models do meet these theoretical criteria. The property that assets have zero alphas with respect to the factors has an interesting implication. It can be shown that a theoretical single-step solution to a mean-variance optimization problem coincides with an optimal linear combination of mean-variance efficient benchmark portfolios invested in individual securities if each individual security has a zero alpha when regressed on the benchmark portfolios. This result therefore suggests that the most meaningful way for grouping individual securities is not by forming arbitrary asset class indices, but instead by forming factor indices, that is replicating portfolios for a set of indices that can collectively be regarded as linear proxies for the unobservable stochastic discount factor, thus providing a theoretical justification for factor investing. Empirically, the search for pricing factors in asset classes such as stocks, bonds and commodities begins with the identification of persistent and economically interpretable patterns in average returns. Recent research has subsequently started to look for multi-class factors. While the CAPM is relatively explicit about the nature of the underlying pricing factor (the return on aggregate wealth), multi-factor models derived from the ICAPM or the APT do not provide an explicit definition of their factors. Thus, the traditional approach in empirical asset pricing has been to examine the determinants of cross-sectional differences in expected returns and to find sound economic interpretations for regular patterns (presence of a risk factor, market frictions or behavioral biases). Most of the empirical asset pricing literature has focused on factors explaining equity returns. This literature starts in the early 1970s with empirical verifications of the CAPM and concludes that the model’s central prediction, namely the positive and linear relationship between expected excess return and the covariance with aggregate wealth, is not well validated by the data for two reasons. First, there exist patterns, or “anomalies,” that are not explained by the market exposure, and second, the relation between expected returns and the market betas is at best flat, or even negative. The most consensual patterns are those that have shown to be robust to various statistical tests, to exist in almost all international equity markets, to persist over time, in particular after their discovery, and to admit plausible economic explanations. They include the size and value effects, which are historically among the first reported anomalies: small-cap stocks tend to outperform their large-cap counterparts, and there is a positive relationship between the book-to-market ratio and future average returns. The size and value factors are used together with the market factor in the model of Fama and French (1993). Another remarkably robust pattern is the momentum effect: the winners (resp., losers) of the past three to twelve months tend to outperform (resp., underperform) over the next three to twelve months. The number of reported empirical regularities has grown fast in the recent literature, and a survey by Harvey et al. (2013) lists at least 315 of them. Among them is the controversial “low volatility puzzle,” namely the documented outperformance of low volatility stocks over high volatility stocks, the existence and persistence of which remains somewhat debated in the academic literature. Among the other noticeable patterns are the investment andprofitability effects. In fixed-income, the two main traditional factors are term and credit. As put by Fama and French (1993), unexpected changes in interest rates and in the probability of default are “common risk factors” for bonds, so it is expected that they should be rewarded. Given that long-term bonds are more exposed to interest rate risk than short-term bills through their longer duration, one expects them to earn higher returns on average. Similarly, defaultable bonds are expected to earn a premium over default-free ones because default events are more likely to happen in bad economic conditions. However, a mathematical decomposition of the term premium, such as those performed in the studies of bond return predictability, suggests that it varies in sign and magnitude with changes in the slope of the term structure and changes in expectations about future interest rates. Historical evidence confirms this variation. In addition to these two standard factors, recent literature has found patterns similar to those encountered in the equity class, such as momentum, value (which refers to a long-term reversal effect) and low risk. For commodities, one can obtain a first important factor by examining the determinants of the performance of passive strategies that roll over futures contracts. Research has shown that the long-term returns to such strategies are mainly driven by the roll returns, which are the positive or negative returns earned by replacing the nearest contract with the second nearest when the former matures. Hence, the prevailing shape of the term structure of futures prices is an essential determinant of long-term returns. Specifically, backwardated futures markets (for which the term structure is decreasing) outperform contangoed futures markets (for which it is increasing). This calls for the introduction of a term structure factor defined as the excess return of backwardated contracts over contangoed contracts. A related factor is the hedging pressure, which is suggested by the eponymous theory and indirectly captures the shape of the term structure. Beyond these “fundamental” factors, a momentum factor has also been empirically reported for commodities. A class-by-class study reveals that some patterns exist repeatedly in various classes. Asness et al. (2013) show that this is the case for short-term momentum and long-term reversal in equities, bonds, commodities and currencies. Furthermore, the single-class momentum factors are positively correlated, and the same goes for value factors. Taken together, these findings justify a new approach, which is the construction of multi-class value and momentum factors, obtained by aggregating the corresponding single-class components. Empirical tests show that investable proxies for factors add value in single-class or multi-class portfolios when they are used as complements or substitutes for broad asset class indices. Moreover, in the equity class, a portfolio of factor indices dominates a portfolio of sector indices. Our empirical study focuses on the following factors, which have been selected because they have well-documented historical performance, are theoretically grounded and are widely accepted by practitioners: size, value, momentum and volatility for equities; term and credit for bonds; term structure and momentum for commodities. In addition, we test multi-class value and momentum factors computed after the methodology of Asness et al. (2013). A first analysis of the descriptive statistics for these factors highlights a few simple but important facts. Each long-only factor outperforms its opposite tilt, in line with the theoretical and empirical literature, and outperforms the corresponding broad asset class index. Correlations within a class are high (above 75%), although they are lower across classes, and they are much lower for long-short versions of the factors. The benchmark universes that we consider contain the broad indices of one or more asset class(es), which represent the market factors. For equities, we also test the benefits of using a standard sector classification, as an alternative to grouping securities according to their factor exposures. A first method for assessing the usefulness of factors is to compare the efficient frontiers in the benchmark universe and in an extended universe that also contains the factors. Formal mean-variance spanning tests (see Kan and Zhou (2012)) reject the null hypothesis that the efficient frontier of the benchmark universe is included in that of the extended universe for all long-short factors and for most long-only factors. This is first evidence that the introduction of factors improves the efficient frontier, even though these tests rely on in-sample long-short efficient frontiers, so that they may give an overly optimistic picture. For this reason, we also conduct a series of out-of-sample tests, where we compare portfolios of traditional indices (asset class indices or equity sectors) and portfolios of factors, by imposing long-only constraints and by estimating parameters without a look-ahead bias. Since a test of the relevance of factor investing is a joint test of the relevance of the chosen factors and the chosen allocation methodology, we run the comparison for various diversification schemes that avoid the estimation of expected returns: equally-weighted, minimum variance, risk parity and “factor risk parity,” where the implicit factors extracted from the covariance matrix. The relative counterparts of these schemes, which focus on the tracking error as opposed to the absolute volatility, are also Table 1 shows a sample of results for the equity class: for most weighting schemes, the four-factor portfolios have higher average return, higher Sharpe ratio and higher information ratio compared to their ten-sector equivalents. In addition, they have a lower turnover. Table 2 extends the analysis to a multi-class context by comparing “policy-neutral” portfolios of equity, bond and commodity factors to a fixed-mix policy portfolio of 60% equities, 30% bonds and 10% commodities. Again, both the average return and the Sharpe ratio are improved. As a conclusion, there exist theoretical arguments in favor of factor investing, i.e., in favor of grouping individual securities into factor indices as opposed to arbitrary forms of indices. Extensive empirical literature has documented a number of recurring patterns in the returns of equities, bonds and commodity futures, and provides investors with a rich list of insights regarding the choice of meaningful factors in each of these classes. The identification of a parsimonious set of factors capturing the largest possible number of sources of risk is an ongoing task on the academic side. On the practical side, a challenge is to develop factor indices that aim to capture factor risk premia at reasonable implementation costs. It is being addressed in the equity class with a new generation of “smart beta” indices, but similar products are not as widely developed in other classes and no multiple-class products are available to date. Lionel Martellini is Professor of Finance at EDHEC Business School and Scientific Director of EDHEC-Risk Institute. He has graduate degrees in economics, statistics, and mathematics, as well as a PhD in finance from the University of California at Berkeley. Lionel is a member of the editorial board of the Journal of Portfolio Management and the Journal of Alternative Investments. An expert in quantitative asset management and derivatives valuation, his work has been widely published in academic and practitioner journals. Vincent Milhau is Deputy Scientific Director of EDHEC-Risk Institute. He holds master’s degrees in statistics (ENSAE) and financial mathematics (Université Paris VII), as well as a PhD in finance (Université de Nice-Sophia Antipolis). His research focus is on portfolio selection problems and continuous-time asset-pricing models. Asness, C. S., T. J. Moskowitz, and L. H. Pedersen (2013). Value and Momentum Everywhere. Journal of Finance 68 (3), 929-985. Cochrane, J. 2005. Asset Pricing Theory. Princeton University Press. Revised Edition. Duffie, D. 2001. Dynamic Asset Pricing Theory. Princeton University Press, Princeton. Fama, E. F. and K. R. French. 1993. Common Risk Factors in the Returns on Stocks and Bonds. Journal of Financial Economics 33 (1), 3-56. Harvey, C., Y. Liu, and H. Zhu. 2013. … and the cross-section of expected returns. Available at SSRN 2249314. Kan, R. and G. Zhou (2012). Tests of Mean-Variance Spanning. Annals of Economics and Finance 13 (1), 139-187. Martellini L. and V. Milhau, July 2015, Factor Investing, EDHEC-Risk Institute Publication produced as part of the Lyxor “Risk Allocation Solutions” research chair at EDHEC-Risk Institute. Merton, R. 1973. An Intertemporal Capital Asset Pricing Model. Econometrica 41 (5), 867-887. Ross, S. 1976. The Arbitrage Theory of Capital Asset Pricing. Journal of Economic Theory 13 (3), 341-360. Sharpe, W. 1963. A Simplified Model for Portfolio Analysis. Management Science 9 (2), 277-293. Sharpe, W. 1964. Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk. Journal of Finance 19 (3), 425-442.
{"url":"https://thehedgefundjournal.com/factor-investing/","timestamp":"2024-11-03T22:29:21Z","content_type":"text/html","content_length":"59164","record_id":"<urn:uuid:55bd3a34-0951-4dde-98ac-86fce4f89f17>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00858.warc.gz"}
ctan-ann - nicematrixCTAN update: nicematrixCTAN update: nicematrixCTAN update: nicematrixCTAN update: nicematrix Announcements of the Comprehensive TeX Archive Network ctan-ann-nicematrix 2024-11-11T02:10:13Z mailman.4495.1678287933.3715.ctan-ann@ctan.org 2023-03-08T16:05:17Z François Pantigny submitted an update to the nicematrix package. ... François Pantigny submitted an update to the nicematrix package. Version: 6.16 License: lppl1.3 Summary description: Improve the typesetting of mathematical matrices with PGF Announcement text: It's now possible to put any LaTeX extensible delimiter (\lgroup, \langle, etc.) in the preamble of an environment with preamble (such as {NiceArray}) by prefixing them by \left and \right. ... ]]> CTAN Announcements ctan-ann at ctan.org mailman.6475.1707916205.3764.ctan-ann@ctan.org 2024-02-14T14:09:53Z François Pantigny submitted an update to the nicematrix package. ... François Pantigny submitted an update to the nicematrix package. Version: 6.27 2024-02-13 License: lppl1.3 Summary description: Improve the typesetting of mathematical matrices with PGF Announcement text: New key 'light-syntax-expanded'. Behaves like 'light-syntax' but the body of the environment is expanded first. ... ]]> CTAN Announcements ctan-ann at ctan.org mailman.9144.1717093823.3764.ctan-ann@ctan.org 2024-05-30T20:30:13Z François Pantigny submitted an update to the nicematrix package. ... François Pantigny submitted an update to the nicematrix package. Version number: 6.28 2024-05-29 License type: lppl1.3 Summary description: Improve the typesetting of mathematical matrices with PGF Announcement text:It's now possible to use '&' in a \ Block in order to split a block in sub-blocks. ... ]]> CTAN Announcements ctan-ann at ctan.org ZxudbTmr4pfyzX-7@prptp 2024-10-25T13:30:21Z François Pantigny submitted an update to the nicematrix package. ... François Pantigny submitted an update to the nicematrix package. Version: 6.29 2024-10-24 License: lppl1.3 Summary description: Improve the typesetting of mathematical matrices with PGF Announcement text: Modification to be compatible with the next version of 'array' (v. 2.6g). ... ]]> CTAN Announcements ctan-ann at ctan.org
{"url":"https://ctan.org/ctan-ann/atom/nicematrix.xml","timestamp":"2024-11-11T01:10:13Z","content_type":"application/atom+xml","content_length":"5036","record_id":"<urn:uuid:0561481a-8c05-4400-a8d9-756022ff0538>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00639.warc.gz"}
What Is the Difference Between a Cube and a Cuboid? A cube is a six-faced, three-dimensional figure composed of square-shaped faces of the same size that meet at 90-degree angles, whereas a cuboid is a box-shaped object made of six faces that all meet at 90-degree angles. A cuboid shape can also be a cube if all sides are the same length, but not all cuboids are cubes. Cubes and cuboids contain eight vertices and 12 edges. A cuboid shape has three pairs of rectangular faces placed opposite of each other. Opposite faces are exactly the same. Two of the six faces of a cuboid shape can be squares. Cuboids are also called right prisms, rectangular parallelepipeds and rectangular boxes. The volume of a cuboid is described by the formula “a*b*c” where “a,” “b” and “c” are the lengths of each side. The surface area of a cuboid is “2(a*b + b*c + c*a)” where each letter is a side length. A cube is a Platonic solid also known as a regular hexahedron. The surface area of a cube is delineated by the formula “6*a^2,” where “a” is the length of a side. The volume of a cube is “a^3.” Examples of cuboid shapes include boxes, doors and mattresses. Cuboids can be flattened into six two-dimensional rectangles.
{"url":"https://www.reference.com/world-view/difference-between-cube-cuboid-5d72cf2ec6caf5a0","timestamp":"2024-11-06T17:54:44Z","content_type":"text/html","content_length":"63279","record_id":"<urn:uuid:921742a8-6045-4ed8-be18-152ebacf0089>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00320.warc.gz"}
Singapore Math Learning Center | Online Tutoring and Classes Singapore Math Learning Center Unlock Your Child’s Math Potential! What is Singapore Math? The Singapore Math Method uses the concrete to pictorial to abstract learning approach to encourage active thinking, understanding, and communication of mathematical concepts and problem-solving. This proven method has been adopted by parents and schools in the United States and around the world and has helped students understand and excel in mathematics. Many have heard the good news that the NEW Primary Mathematics 2022 (Grades K to 5) is now available! History of Primary Mathematics The original Primary Mathematics was introduced to schools and used for homeschooling in... Which is the best Singapore Math curriculum? This is one of the questions we get asked quite often. The question should be, which is the best Singapore Mathematics curriculum for you? And by Singapore Math is a method of learning that uses three distinct approaches to understanding mathematical concepts and problem-solving. It uses the concrete to pictorial to abstract approach and encourages students to actively think, understand,... Math in Focus, Primary Mathematics, and Dimensions Math are three Singapore Math curriculums used in schools and homeschooling families in the United States. In this post, we review these three different Singapore Math curriculums... One common question we get from parents is how the Singapore Math curriculum prepares students for Algebra and what pre-algebra topics are covered in the Singapore Mathematics curriculum. In this article, we will explore... Singapore math follows the Concrete-Pictorial-Abstract method. They start with teaching with hands-on, or manipulative objects, move on to drawing bar models and other visuals, and then finally to mathematical symbols and abstract learning. According... Experienced Tutors All our tutors and instructors are experienced in the Singapore Math method and the Singapore Math curriculum. Tutoring is Online which means you don’t have to travel and can learn anywhere, anytime. We have been tutoring since 2013 and recognized as one of the Top 100 Educational Resources by Homeschool.com. Tutor Matching We match you with a tutor which means you don’t have to change tutors every session.. Join Our Newsletter Sign up for our newsletter. It only takes a second to be the first to find out about our latest news and promotions and printables!
{"url":"https://singaporemathlearningcenter.com/","timestamp":"2024-11-09T00:00:58Z","content_type":"text/html","content_length":"148933","record_id":"<urn:uuid:846a68d3-53bc-4d6a-bc88-e8d2fbd75c76>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00341.warc.gz"}
Context-free grammar Jump to navigation Jump to search This article needs additional citations for verification (February 2012) (Learn how and when to remove this template message) In formal language theory, a context-free grammar (CFG) is a certain type of formal grammar: a set of production rules that describe all possible strings in a given formal language. Production rules are simple replacements. For example, the rule ${\displaystyle A\ \to \ \alpha }$ replaces ${\displaystyle A}$ with ${\displaystyle \alpha }$. There can be multiple replacement rules for any given value. For example, ${\displaystyle A\ \to \ \alpha }$ ${\displaystyle A\ \to \ \beta }$ means that ${\displaystyle A}$ can be replaced with either ${\displaystyle \alpha }$ or ${\displaystyle \beta }$. In context-free grammars, all rules are one-to-one, one-to-many, or one-to-none. These rules can be applied regardless of context. The left-hand side of the production rule is always a nonterminal symbol. This means that the symbol does not appear in the resulting formal language. So in our case, our language contains the letters ${\displaystyle \alpha }$ and ${\displaystyle \beta }$ but not $ {\displaystyle A.}$^[1] Rules can also be applied in reverse to check whether a string is grammatically correct according to the grammar. Here is an example context-free grammar that describes all two-letter strings containing the letters ${\displaystyle \alpha }$ or ${\displaystyle \beta }$. ${\displaystyle S\ \to \ AA}$ ${\displaystyle A\ \to \ \alpha }$ ${\displaystyle A\ \to \ \beta }$ If we start with the nonterminal symbol ${\displaystyle S}$ then we can use the rule ${\displaystyle S\ \to \ AA}$ to turn ${\displaystyle S}$ into ${\displaystyle AA}$. We can then apply one of the two later rules. For example, if we apply ${\displaystyle A\ \to \ \beta }$ to the first ${\displaystyle A}$ we get ${\displaystyle \beta A}$. If we then apply ${\displaystyle A\ \to \ \alpha }$ to the second ${\displaystyle A}$ we get ${\displaystyle \beta \alpha }$. Since both ${\displaystyle \alpha }$ and ${\displaystyle \beta }$ are terminal symbols, and in context-free grammars terminal symbols never appear on the left hand side of a production rule, there are no more rules that can be applied. This same process can be used, applying the last two rules in different orders in order to get all possible strings within our simple context-free grammar. Languages generated by context-free grammars are known as context-free languages (CFL). Different context-free grammars can generate the same context-free language. It is important to distinguish the properties of the language (intrinsic properties) from the properties of a particular grammar (extrinsic properties). The language equality question (do two given context-free grammars generate the same language?) is undecidable. Context-free grammars arise in linguistics where they are used to describe the structure of sentences and words in a natural language, and they were in fact invented by the linguist Noam Chomsky for this purpose, but have not really lived up to their original expectation. By contrast, in computer science, as the use of recursively-defined concepts increased, they were used more and more. In an early application, grammars are used to describe the structure of programming languages. In a newer application, they are used in an essential part of the Extensible Markup Language (XML) called the Document Type Definition.^[2] In linguistics, some authors use the term phrase structure grammar to refer to context-free grammars, whereby phrase-structure grammars are distinct from dependency grammars. In computer science, a popular notation for context-free grammars is Backus–Naur form, or BNF. Since the time of Pāṇini, at least, linguists have described the grammars of languages in terms of their block structure, and described how sentences are recursively built up from smaller phrases, and eventually individual words or word elements. An essential property of these block structures is that logical units never overlap. For example, the sentence: John, whose blue car was in the garage, walked to the grocery store. can be logically parenthesized as follows: (John, ((whose blue car) (was (in the garage))), (walked (to (the grocery store)))). A context-free grammar provides a simple and mathematically precise mechanism for describing the methods by which phrases in some natural language are built from smaller blocks, capturing the "block structure" of sentences in a natural way. Its simplicity makes the formalism amenable to rigorous mathematical study. Important features of natural language syntax such as agreement and reference are not part of the context-free grammar, but the basic recursive structure of sentences, the way in which clauses nest inside other clauses, and the way in which lists of adjectives and adverbs are swallowed by nouns and verbs, is described exactly. Context-free grammars are a special form of Semi-Thue systems that in their general form date back to the work of Axel Thue. The formalism of context-free grammars was developed in the mid-1950s by Noam Chomsky,^[3] and also their classification as a special type of formal grammar (which he called phrase-structure grammars ).^[4] What Chomsky called a phrase structure grammar is also known now as a constituency grammar, whereby constituency grammars stand in contrast to dependency grammars. In Chomsky's generative grammar framework, the syntax of natural language was described by context-free rules combined with transformation rules. Block structure was introduced into computer programming languages by the Algol project (1957–1960), which, as a consequence, also featured a context-free grammar to describe the resulting Algol syntax. This became a standard feature of computer languages, and the notation for grammars used in concrete descriptions of computer languages came to be known as Backus–Naur form, after two members of the Algol language design committee.^[3] The "block structure" aspect that context-free grammars capture is so fundamental to grammar that the terms syntax and grammar are often identified with context-free grammar rules, especially in computer science. Formal constraints not captured by the grammar are then considered to be part of the "semantics" of the language. Context-free grammars are simple enough to allow the construction of efficient parsing algorithms that, for a given string, determine whether and how it can be generated from the grammar. An Earley parser is an example of such an algorithm, while the widely used LR and LL parsers are simpler algorithms that deal only with more restrictive subsets of context-free grammars. Formal definitions[edit] A context-free grammar G is defined by the 4-tuple:^[5] ${\displaystyle G=(V,\Sigma ,R,S)}$ where 1. V is a finite set; each element ${\displaystyle v\in V}$ is called a nonterminal character or a variable. Each variable represents a different type of phrase or clause in the sentence. Variables are also sometimes called syntactic categories. Each variable defines a sub-language of the language defined by G. 2. Σ is a finite set of terminals, disjoint from V, which make up the actual content of the sentence. The set of terminals is the alphabet of the language defined by the grammar G. 3. R is a finite relation from V to ${\displaystyle (V\cup \Sigma )^{*}}$, where the asterisk represents the Kleene star operation. The members of R are called the (rewrite) rules or productions of the grammar. (also commonly symbolized by a P) 4. S is the start variable (or start symbol), used to represent the whole sentence (or program). It must be an element of V. Production rule notation[edit] A production rule in R is formalized mathematically as a pair ${\displaystyle (\alpha ,\beta )\in R}$, where ${\displaystyle \alpha \in V}$ is a nonterminal and ${\displaystyle \beta \in (V\cup \ Sigma )^{*}}$ is a string of variables and/or terminals; rather than using ordered pair notation, production rules are usually written using an arrow operator with α as its left hand side and β as its right hand side: ${\displaystyle \alpha \rightarrow \beta }$. It is allowed for β to be the empty string, and in this case it is customary to denote it by ε. The form ${\displaystyle \alpha \rightarrow \varepsilon }$ is called an ε-production.^[6] It is common to list all right-hand sides for the same left-hand side on the same line, using | (the pipe symbol) to separate them. Rules ${\displaystyle \alpha \rightarrow \beta _{1}}$ and ${\ displaystyle \alpha \rightarrow \beta _{2}}$ can hence be written as ${\displaystyle \alpha \rightarrow \beta _{1}\mid \beta _{2}}$. In this case, ${\displaystyle \beta _{1}}$ and ${\displaystyle \ beta _{2}}$ is called the first and second alternative, respectively. Rule application[edit] For any strings ${\displaystyle u,v\in (V\cup \Sigma )^{*}}$, we say u directly yields v, written as ${\displaystyle u\Rightarrow v\,}$, if ${\displaystyle \exists (\alpha ,\beta )\in R}$ with ${\ displaystyle \alpha \in V}$ and ${\displaystyle u_{1},u_{2}\in (V\cup \Sigma )^{*}}$ such that ${\displaystyle u\,=u_{1}\alpha u_{2}}$ and ${\displaystyle v\,=u_{1}\beta u_{2}}$. Thus, v is a result of applying the rule ${\displaystyle (\alpha ,\beta )}$ to u. Repetitive rule application[edit] For any strings ${\displaystyle u,v\in (V\cup \Sigma )^{*},}$ we say u yields v, written as ${\displaystyle u{\stackrel {*}{\Rightarrow }}v}$ (or ${\displaystyle u\Rightarrow \Rightarrow v\,}$ in some textbooks), if ${\displaystyle \exists k\geq 1\,\exists \,u_{1},\cdots ,u_{k}\in (V\cup \Sigma )^{*}}$ such that ${\displaystyle u=\,u_{1}\Rightarrow u_{2}\Rightarrow \cdots \Rightarrow u_{k}\,= v}$. In this case, if ${\displaystyle k\geq 2}$ (i.e., ${\displaystyle ueq v}$), the relation ${\displaystyle u{\stackrel {+}{\Rightarrow }}v}$ holds. In other words, ${\displaystyle ({\stackrel {*} {\Rightarrow }})}$ and ${\displaystyle ({\stackrel {+}{\Rightarrow }})}$ are the reflexive transitive closure (allowing a word to yield itself) and the transitive closure (requiring at least one step) of ${\displaystyle (\Rightarrow )}$, respectively. Context-free language[edit] The language of a grammar ${\displaystyle G=(V,\Sigma ,R,S)}$ is the set ${\displaystyle L(G)=\{w\in \Sigma ^{*}:S{\stackrel {*}{\Rightarrow }}w\}}$ A language L is said to be a context-free language (CFL), if there exists a CFG G, such that ${\displaystyle L\,=\,L(G)}$. Non-deterministic pushdown automata recognize exactly the context-free languages. Proper CFGs[edit] A context-free grammar is said to be proper,^[7] if it has • no unreachable symbols: ${\displaystyle \forall N\in V:\exists \alpha ,\beta \in (V\cup \Sigma )^{*}:S{\stackrel {*}{\Rightarrow }}\alpha {N}\beta }$ • no unproductive symbols: ${\displaystyle \forall N\in V:\exists w\in \Sigma ^{*}:N{\stackrel {*}{\Rightarrow }}w}$ • no ε-productions: ${\displaystyle eg \exists N\in V:(N,\varepsilon )\in R}$ • no cycles: ${\displaystyle eg \exists N\in V:N{\stackrel {+}{\Rightarrow }}N}$ Every context-free grammar can be effectively transformed into a weakly equivalent one without unreachable symbols,^[8] a weakly equivalent one without unproductive symbols,^[9] and a weakly equivalent one without cycles.^[10] Every context-free grammar not producing ε can be effectively transformed into a weakly equivalent one without ε-productions;^[11] altogether, every such grammar can be effectively transformed into a weakly equivalent proper CFG. Words concatenated with their reverse[edit] The grammar ${\displaystyle G=(\{S\},\{a,b\},P,S)}$, with productions S → aSa, S → bSb, S → ε, is context-free. It is not proper since it includes an ε-production. A typical derivation in this grammar is S → aSa → aaSaa → aabSbaa → aabbaa. This makes it clear that ${\displaystyle L(G)=\{ww^{R}:w\in \{a,b\}^{*}\}}$. The language is context-free, however, it can be proved that it is not regular. If the productions S → a, S → b, are added, a context-free grammar for the set of all palindromes over the alphabet { a, b } is obtained.^[12] Well-formed parentheses[edit] The canonical example of a context-free grammar is parenthesis matching, which is representative of the general case. There are two terminal symbols "(" and ")" and one nonterminal symbol S. The production rules are S → SS S → (S) S → () The first rule allows the S symbol to multiply; the second rule allows the S symbol to become enclosed by matching parentheses; and the third rule terminates the recursion.^[13] Well-formed nested parentheses and square brackets[edit] A second canonical example is two different kinds of matching nested parentheses, described by the productions: S → SS S → () S → (S) S → [] S → [S] with terminal symbols [ ] ( ) and nonterminal S. The following sequence can be derived in that grammar: ([ [ [ ()() [ ][ ] ] ]([ ]) ]) Matching pairs[edit] In a context-free grammar, we can pair up characters the way we do with brackets. The simplest example: S → aSb S → ab This grammar generates the language ${\displaystyle \{a^{n}b^{n}:n\geq 1\}}$, which is not regular (according to the pumping lemma for regular languages). The special character ε stands for the empty string. By changing the above grammar to S → aSb | ε we obtain a grammar generating the language ${\displaystyle \{a^{n}b^{n}:n\geq 0\}}$ instead. This differs only in that it contains the empty string while the original grammar did not. Distinct number of a's and b's[edit] A context-free grammar for the language consisting of all strings over {a,b} containing an unequal number of a's and b's: S → U | V U → TaU | TaT | UaT V → TbV | TbT | VbT T → aTbT | bTaT | ε Here, the nonterminal T can generate all strings with the same number of a's as b's, the nonterminal U generates all strings with more a's than b's and the nonterminal V generates all strings with fewer a's than b's. Omitting the third alternative in the rule for U and V doesn't restrict the grammar's language. Second block of b's of double size[edit] Another example of a non-regular language is ${\displaystyle \{b^{n}a^{m}b^{2n}:n\geq 0,m\geq 0\}}$. It is context-free as it can be generated by the following context-free grammar: S → bSbb | A A → aA | ε First-order logic formulas[edit] The formation rules for the terms and formulas of formal logic fit the definition of context-free grammar, except that the set of symbols may be infinite and there may be more than one start symbol. Examples of languages that are not context free[edit] In contrast to well-formed nested parentheses and square brackets in the previous section, there is no context-free grammar for generating all sequences of two different types of parentheses, each separately balanced disregarding the other, where the two types need not nest inside one another, for example: [ ( ] ) [ [ [ [(((( ] ] ] ]))))(([ ))(([ ))([ )( ])( ])( ]) The fact that this language is not context free can be proven using Pumping lemma for context-free languages and a proof by contradiction, observing that all words of the form ${\displaystyle {(}^{n} {[}^{n}{)}^{n}{]}^{n}}$ should belong to the language. This language belongs instead to a more general class and can be described by a conjunctive grammar, which in turn also includes other non-context-free languages, such as the language of all words of the form ${\displaystyle a^{n}b^{n}c^{n}}$. Regular grammars[edit] Every regular grammar is context-free, but not all context-free grammars are regular.^[14] The following context-free grammar, however, is also regular. S → a S → aS S → bS The terminals here are a and b, while the only nonterminal is S. The language described is all nonempty strings of ${\displaystyle a}$s and ${\displaystyle b}$s that end in ${\displaystyle a}$. This grammar is regular: no rule has more than one nonterminal in its right-hand side, and each of these nonterminals is at the same end of the right-hand side. Every regular grammar corresponds directly to a nondeterministic finite automaton, so we know that this is a regular language. Using pipe symbols, the grammar above can be described more tersely as follows: S → a | aS | bS Derivations and syntax trees[edit] A derivation of a string for a grammar is a sequence of grammar rule applications that transform the start symbol into the string. A derivation proves that the string belongs to the grammar's A derivation is fully determined by giving, for each step: • the rule applied in that step • the occurrence of its left-hand side to which it is applied For clarity, the intermediate string is usually given as well. For instance, with the grammar: (1) S → S + S (2) S → 1 (3) S → a the string 1 + 1 + a can be derived with the derivation: → (rule 1 on the first S) → (rule 1 on the second S) → (rule 2 on the second S) → (rule 3 on the third S) → (rule 2 on the first S) Often, a strategy is followed that deterministically determines the next nonterminal to rewrite: • in a leftmost derivation, it is always the leftmost nonterminal; • in a rightmost derivation, it is always the rightmost nonterminal. Given such a strategy, a derivation is completely determined by the sequence of rules applied. For instance, the leftmost derivation → (rule 1 on the first S) → (rule 2 on the first S) → (rule 1 on the first S) → (rule 2 on the first S) → (rule 3 on the first S) can be summarized as rule 1, rule 2, rule 1, rule 2, rule 3 The distinction between leftmost derivation and rightmost derivation is important because in most parsers the transformation of the input is defined by giving a piece of code for every grammar rule that is executed whenever the rule is applied. Therefore, it is important to know whether the parser determines a leftmost or a rightmost derivation because this determines the order in which the pieces of code will be executed. See for an example LL parsers and LR parsers. A derivation also imposes in some sense a hierarchical structure on the string that is derived. For example, if the string "1 + 1 + a" is derived according to the leftmost derivation: S → S + S (1) → 1 + S (2) → 1 + S + S (1) → 1 + 1 + S (2) → 1 + 1 + a (3) the structure of the string would be: { { 1 }[S] + { { 1 }[S] + { a }[S] }[S] }[S] where { ... }[S] indicates a substring recognized as belonging to S. This hierarchy can also be seen as a tree: This tree is called a parse tree or "concrete syntax tree" of the string, by contrast with the abstract syntax tree. In this case the presented leftmost and the rightmost derivations define the same parse tree; however, there is another (rightmost) derivation of the same string S → S + S (1) → S + a (3) → S + S + a (1) → S + 1 + a (2) → 1 + 1 + a (2) and this defines the following parse tree: Note however that both parse trees can be obtained by both leftmost and rightmost derivations. For example, the last tree can be obtained with the leftmost derivation as follows: S → S + S (1) → S + S + S (1) → 1 + S + S (2) → 1 + 1 + S (2) → 1 + 1 + a (3) If a string in the language of the grammar has more than one parsing tree, then the grammar is said to be an ambiguous grammar. Such grammars are usually hard to parse because the parser cannot always decide which grammar rule it has to apply. Usually, ambiguity is a feature of the grammar, not the language, and an unambiguous grammar can be found that generates the same context-free language. However, there are certain languages that can only be generated by ambiguous grammars; such languages are called inherently ambiguous languages. Example: Algebraic expressions[edit] Here is a context-free grammar for syntactically correct infix algebraic expressions in the variables x, y and z: 1. S → x 2. S → y 3. S → z 4. S → S + S 5. S → S - S 6. S → S * S 7. S → S / S 8. S → ( S ) This grammar can, for example, generate the string ( x + y ) * x - z * y / ( x + x ) as follows: S (the start symbol) → S - S (by rule 5) → S * S - S (by rule 6, applied to the leftmost S) → S * S - S / S (by rule 7, applied to the rightmost S) → ( S ) * S - S / S (by rule 8, applied to the leftmost S) → ( S ) * S - S / ( S ) (by rule 8, applied to the rightmost S) → ( S + S ) * S - S / ( S ) (etc.) → ( S + S ) * S - S * S / ( S ) → ( S + S ) * S - S * S / ( S + S ) → ( x + S ) * S - S * S / ( S + S ) → ( x + y ) * S - S * S / ( S + S ) → ( x + y ) * x - S * y / ( S + S ) → ( x + y ) * x - S * y / ( x + S ) → ( x + y ) * x - z * y / ( x + S ) → ( x + y ) * x - z * y / ( x + x ) Note that many choices were made underway as to which rewrite was going to be performed next. These choices look quite arbitrary. As a matter of fact, they are, in the sense that the string finally generated is always the same. For example, the second and third rewrites → S * S - S (by rule 6, applied to the leftmost S) → S * S - S / S (by rule 7, applied to the rightmost S) could be done in the opposite order: → S - S / S (by rule 7, applied to the rightmost S) → S * S - S / S (by rule 6, applied to the leftmost S) Also, many choices were made on which rule to apply to each selected S. Changing the choices made and not only the order they were made in usually affects which terminal string comes out at the end. Let's look at this in more detail. Consider the parse tree of this derivation: Starting at the top, step by step, an S in the tree is expanded, until no more unexpanded Ses (nonterminals) remain. Picking a different order of expansion will produce a different derivation, but the same parse tree. The parse tree will only change if we pick a different rule to apply at some position in the tree. But can a different parse tree still produce the same terminal string, which is ( x + y ) * x - z * y / ( x + x ) in this case? Yes, for this particular grammar, this is possible. Grammars with this property are called ambiguous. For example, x + y * z can be produced with these two different parse trees: However, the language described by this grammar is not inherently ambiguous: an alternative, unambiguous grammar can be given for the language, for example: T → x T → y T → z S → S + T S → S - T S → S * T S → S / T T → ( S ) S → T (once again picking S as the start symbol). This alternative grammar will produce x + y * z with a parse tree similar to the left one above, i.e. implicitly assuming the association (x + y) * z, which is not according to standard operator precedence. More elaborate, unambiguous and context-free grammars can be constructed that produce parse trees that obey all desired operator precedence and associativity rules. Normal forms[edit] Every context-free grammar that does not generate the empty string can be transformed into one in which there is no ε-production (that is, a rule that has the empty string as a product). If a grammar does generate the empty string, it will be necessary to include the rule ${\displaystyle S\rightarrow \epsilon }$, but there need be no other ε-rule. Every context-free grammar with no ε-production has an equivalent grammar in Chomsky normal form, and a grammar in Greibach normal form. "Equivalent" here means that the two grammars generate the same language. The especially simple form of production rules in Chomsky normal form grammars has both theoretical and practical implications. For instance, given a context-free grammar, one can use the Chomsky normal form to construct a polynomial-time algorithm that decides whether a given string is in the language represented by that grammar or not (the CYK algorithm). Closure properties[edit] Context-free languages are closed under the various operations, that is, if the languages K and L are context-free, so is the result of the following operations: They are not closed under general intersection (hence neither under complementation) and set difference.^[19] Decidable problems[edit] The following are some decidable problems about context-free grammars. The parsing problem, checking whether a given word belongs to the language given by a context-free grammar, is decidable, using one of the general-purpose parsing algorithms: Context-free parsing for Chomsky normal form grammars was shown by Leslie G. Valiant to be reducible to boolean matrix multiplication, thus inheriting its complexity upper bound of O(n^2.3728639).^ [20]^[21]^[note 1] Conversely, Lillian Lee has shown O(n^3−ε) boolean matrix multiplication to be reducible to O(n^3−3ε) CFG parsing, thus establishing some kind of lower bound for the latter.^[22] Reachability, productiveness, nullability[edit] It is decidable whether a given non-terminal of a context-free grammar is reachable,^[23] whether it is productive,^[24] and whether it is nullable (that is, it can derive the empty string).^[25] Regularity and LL(k) checks[edit] It is decidable whether a given grammar is a regular grammar,^[26] as well as whether it is an LL(k) grammar for a given k≥0.^[27]^:233 If k is not given, the latter problem is undecidable.^[27]^:252 Given a context-free language, it is neither decidable whether it is regular,^[28] nor whether it is an LL(k) language for a given k.^[27]^:254 Emptiness and finiteness[edit] There are algorithms to decide whether a language of a given context-free language is empty, as well as whether it is finite.^[29]. Undecidable problems[edit] Some questions that are undecidable for wider classes of grammars become decidable for context-free grammars; e.g. the emptiness problem (whether the grammar generates any terminal strings at all), is undecidable for context-sensitive grammars, but decidable for context-free grammars. However, many problems are undecidable even for context-free grammars. Examples are: Given a CFG, does it generate the language of all strings over the alphabet of terminal symbols used in its rules?^[30]^[31] A reduction can be demonstrated to this problem from the well-known undecidable problem of determining whether a Turing machine accepts a particular input (the halting problem). The reduction uses the concept of a computation history, a string describing an entire computation of a Turing machine. A CFG can be constructed that generates all strings that are not accepting computation histories for a particular Turing machine on a particular input, and thus it will accept all strings only if the machine doesn't accept that input. Language equality[edit] Given two CFGs, do they generate the same language?^[31]^[32] The undecidability of this problem is a direct consequence of the previous: it is impossible to even decide whether a CFG is equivalent to the trivial CFG defining the language of all strings. Language inclusion[edit] Given two CFGs, can the first one generate all strings that the second one can generate?^[31]^[32] If this problem was decidable, then language equality could be decided too: two CFGs G1 and G2 generate the same language if L(G1) is a subset of L(G2) and L(G2) is a subset of L(G1). Being in a lower or higher level of the Chomsky hierarchy[edit] Using Greibach's theorem, it can be shown that the two following problems are undecidable: Grammar ambiguity[edit] Given a CFG, is it ambiguous? The undecidability of this problem follows from the fact that if an algorithm to determine ambiguity existed, the Post correspondence problem could be decided, which is known to be undecidable. Language disjointness[edit] Given two CFGs, is there any string derivable from both grammars? If this problem was decidable, the undecidable Post correspondence problem could be decided, too: given strings ${\displaystyle \alpha _{1},\ldots ,\alpha _{N},\beta _{1},\ldots ,\beta _{N}}$ over some alphabet ${\displaystyle \{a_{1},\ldots ,a_{k}\}}$, let the grammar ${\displaystyle G_{1}}$ consist of the rule ${\displaystyle S\to \alpha _{1}S\beta _{1}^{rev}|\cdots |\alpha _{N}S\beta _{N}^{rev}|b}$; where ${\displaystyle \beta _{i}^{rev}}$ denotes the reversed string ${\displaystyle \beta _{i}}$ and ${\displaystyle b}$ doesn't occur among the ${\displaystyle a_{i}}$; and let grammar ${\ displaystyle G_{2}}$ consist of the rule ${\displaystyle T\to a_{1}Ta_{1}|\cdots |a_{k}Ta_{k}|b}$; Then the Post problem given by ${\displaystyle \alpha _{1},\ldots ,\alpha _{N},\beta _{1},\ldots ,\beta _{N}}$ has a solution if and only if ${\displaystyle L(G_{1})}$ and ${\displaystyle L(G_{2})}$ share a derivable string. An obvious way to extend the context-free grammar formalism is to allow nonterminals to have arguments, the values of which are passed along within the rules. This allows natural language features such as agreement and reference, and programming language analogs such as the correct use and definition of identifiers, to be expressed in a natural way. E.g. we can now easily express that in English sentences, the subject and verb must agree in number. In computer science, examples of this approach include affix grammars, attribute grammars, indexed grammars, and Van Wijngaarden two-level grammars. Similar extensions exist in linguistics. An extended context-free grammar (or regular right part grammar) is one in which the right-hand side of the production rules is allowed to be a regular expression over the grammar's terminals and nonterminals. Extended context-free grammars describe exactly the context-free languages.^[33] Another extension is to allow additional terminal symbols to appear at the left-hand side of rules, constraining their application. This produces the formalism of context-sensitive grammars. There are a number of important subclasses of the context-free grammars: LR parsing extends LL parsing to support a larger range of grammars; in turn, generalized LR parsing extends LR parsing to support arbitrary context-free grammars. On LL grammars and LR grammars, it essentially performs LL parsing and LR parsing, respectively, while on nondeterministic grammars, it is as efficient as can be expected. Although GLR parsing was developed in the 1980s, many new language definitions and parser generators continue to be based on LL, LALR or LR parsing up to the present day. Linguistic applications[edit] Chomsky initially hoped to overcome the limitations of context-free grammars by adding transformation rules.^[4] Such rules are another standard device in traditional linguistics; e.g. passivization in English. Much of generative grammar has been devoted to finding ways of refining the descriptive mechanisms of phrase-structure grammar and transformation rules such that exactly the kinds of things can be expressed that natural language actually allows. Allowing arbitrary transformations does not meet that goal: they are much too powerful, being Turing complete unless significant restrictions are added (e.g. no transformations that introduce and then rewrite symbols in a context-free fashion). Chomsky's general position regarding the non-context-freeness of natural language has held up since then,^[34] although his specific examples regarding the inadequacy of context-free grammars in terms of their weak generative capacity were later disproved.^[35] Gerald Gazdar and Geoffrey Pullum have argued that despite a few non-context-free constructions in natural language (such as cross-serial dependencies in Swiss German^[34] and reduplication in Bambara^[36]), the vast majority of forms in natural language are indeed context-free.^[35] See also[edit] • Hopcroft, John E.; Ullman, Jeffrey D. (1979), Introduction to Automata Theory, Languages, and Computation, Addison-Wesley. Chapter 4: Context-Free Grammars, pp. 77–106; Chapter 6: Properties of Context-Free Languages, pp. 125–137. • Sipser, Michael (1997), Introduction to the Theory of Computation, PWS Publishing, ISBN 0-534-94728-X. Chapter 2: Context-Free Grammars, pp. 91–122; Section 4.1.2: Decidable problems concerning context-free languages, pp. 156–159; Section 5.1.1: Reductions via computation histories: pp. 176–183. • J. Berstel, L. Boasson (1990). Jan van Leeuwen, ed. Context-Free Languages. Handbook of Theoretical Computer Science. B. Elsevier. pp. 59–102. External links[edit] Cite error: There are <ref group=note> tags on this page, but the references will not show without a {{reflist|group=note}} template (see the help page).
{"url":"https://static.hlt.bme.hu/semantics/external/pages/%C3%BCres_sor/en.wikipedia.org/wiki/Context-free_grammar.html","timestamp":"2024-11-11T14:37:30Z","content_type":"text/html","content_length":"249213","record_id":"<urn:uuid:32c75976-abc3-422c-b524-284881d0ee2e>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00057.warc.gz"}
Liu Hui overcomes a Monster - Puissance & Raison Reading time: 10 minutes Translation by AB – April 15, 2020 In a short article^1, Karine Chemla, sinologist and specialist in the history of mathematics, relates to the subject of irrational numbers^2 a very enlightening episode from China in the 3rd century which is worth a detour. Certain passages of this post are a little mathematical but we think that non-mathematicians will be able to commit a little and access our final proposal: the three “statutes” of the Number and the brief opening that follows. Cubes and sphere Liu Hui was a 3rd century Chinese mathematician, known among other things for having published in 263 AD. A commentary on the famous Chinese book “The Nine Chapters on Mathematical Art“^3. This book is more a compilation of practical algorithms, used to solve various concrete problems (architecture, commerce, taxes …), than a treatise on abstract mathematics. No one really knows where this algorithmic knowledge comes from. Historians seem to agree, however, that they were developed independently of Greek mathematics. Liu Hui notably presents an algorithm which makes it possible to deduce the side x of a cube from the diameter d of the sphere in which it is inscribed (this sphere is the smallest which contains the If we multiply the diameter d of the sphere by itself and divide the result by 3, and then extract the square root of this, this gives the side x of the cube inscribed in the sphere. This relation between x and d corresponds to the formula that we would write in the mathematical language of today x^2 = d^2 / 3. Liu Hui uses this relation to try to calculate the diameter d of the sphere in which is inscribed a cube of side x = 5. The previous formula gives $d=\sqrt{3x^2}=\sqrt{75}$. But Liu Hui notes: By extracting the square root of the large hypotenuse, you don’t get through it^4. Let’s say for simplicity that the calculation of $\sqrt{75}$ does not fall right. Like an endless division, this (irrational) number is written with an infinity of numbers after the decimal point. Liu Hui nevertheless wants to continue his calculation to get the volume of the circumscribed cube (the smallest cube containing the sphere). The side of this cube is equal to the diameter d of the sphere and therefore its volume is worth d^3. It was at this point in the story told by Karine Chemla that Liu Hui made a conceptual leap emblematic of mathematical thought, and of thought in Never mind … The extraction of the square root (the effective calculation of $\sqrt{75}$) is a process, an algorithm, which acts on a number (75) to produce another number: the sought result (our calculator gives the approximate result $\sqrt{75}=8.660254...$ but it too will never “finish” this calculation). The conceptual leap comes now. Liu Hui decides to keep $\sqrt{75}$as a latent number, or a potential number, and continues his calculation with this sign rather than 8.660254… incomplete and therefore inaccurate. It is possible that the sensitive figure of the segment to which this potential number corresponds, the great hypotenuse, calls on Liu Hui to grant to his measure, $\sqrt{75}$, the status of a completed number, a number in action, exact and legitimate as an element of a calculation. As Karine Chemla relates (we underline): Confronted with the impossibility of extracting the root of a number according to an algorithm, Liu Hui still gives a result to the operation. But this one is different in nature from what a calculation would have given. We refer to it, not by stating it explicitly, but by giving it a name, more precisely by giving the name of “root” to the number on which we operate. It will be “root of”. The integration of the name (irrational number) $\sqrt{75}$in the mathematical language is not yet the integration of a new type of number, conceptualized and theorized. Rather, it is a question of postponing the execution of the algorithm for an indefinite period and of considering the algorithm itself assigned to an “input” and, completed or not, as a mathematical object. Thus, the “monster”, this never completed number, remains locked in the name of the algorithm and does not obstruct the rest of the calculation. Indeed, Liu Hui, having accomplished this conceptual leap, can continue: Make the multiplication twice of its square, 75 chi, by itself [i.e. 75 × 75 × 75 = 421875] receive as root name [we therefore take $\sqrt{421875}$ ], we obtain the root of chi as volume of the cube circumscribed. Indeed, the volume of the circumscribed cube is worth d^3, as it was said above, that is$\sqrt{75}^3=\sqrt{75^3}=\sqrt{421875}$^5. The actual calculation may be postponed indefinitely; this name of the result is that of a perfectly exact value. As Liu Hui himself says: We cannot determine the value of this quantity, that is why it is only by giving it the name of root that we do not make a mistake. Liu Hui may not have perceived the irrationality of the number $\sqrt{75}$. In any case, there appeared to him a calculation which produced endless numbers (after the decimal point in our writing system). He himself makes the parallel with rational fractions, whose effective calculation can also never stop: By comparison, it’s like when, dividing 10 by 3, we consider its remainder as a third; we can then find the number. We can indeed write 10/3 = 1 + 1/3. The name “a third”, that is 1/3, potentiates the calculation and maintains in itself the infinite writing of the number it designates , that is 0.3333… It is at the same time exact in the sense where the applicable syntactic rules build programs or algorithms (symbolic writings) which “span” the calculations. Fortunately, there is no need to calculate 1/3 = 0.3333… to obtain this exact equality, which is quite disturbing in the end: $3 \times \frac{1}{3} = 1$ The three “statutes” of the Number Liu Hui seeking to calculate the volume of the circumscribed cube of a sphere containing a cube with side $x=5$, proposes to step over the impossible calculation of the intermediate results (which “ we do not get through”) to consider only the syntactic game leading to the exact solution $\sqrt{421875}$, expressed in the form of an “algorithm” (“$\sqrt{}$“) to be applied to a number (421875). This “trick” is emblematic of the conceptual leap made by mathematicians when unrepresentable mathematical “monsters” appeared: a symbol (algorithm) is assigned to them, in which they remain confined, and which allows their manipulation in an abstract language. This consideration opens the possibility of a triple “status” of the Number. First of all, Number as a sensitive, pre-mathematical shape: a quantity in the perceived world, obeying physical regularities. Then, Number as a procedure, one could say, that exists only mathematically and obeys syntactic rules reflecting its regularities as a shape. Finally, the Number as a written sign, as “money” in a way, tool of transactions in the real world and which therefore requires a translation respecting a convention of signs (a base). Let us remember for example the shape allowing to understand the procedure of the square root: If we assign a “currency value” (written sign) of 1 to the side of the blue square (we would then have to assign a unit to this value, as Liu Hui does with chi), the area of the blue square is 1. What is then the “currency value” on the side of the red square, i.e. the value with which it is possible to carry out an actual transaction (buy a fence for the red square, for example)? Obviously (bonding of the triangles), the surface of the red square is twice that of blue, and therefore the “currency value” of the side of the red square is the number which, multiplied by itself, gives 2. We must therefore extract the square root of 2 and this number is exactly the procedure $\sqrt{2}$, with which it is possible to chain, in the mathematical universe, other calculations obeying “grammatical” rules of the kind $\sqrt{x \times y}=\sqrt{x}\times\sqrt{y}$. However, we cannot carry out transactions in the real world with incomplete procedures. You must end up calculating $\sqrt{2}$ et écrire 1,41 (chi) and write 1.41 (chi) to order and pay for the fence ... Brief opening Many reflections have occupied philosophers and mathematicians since the Pythagoreans concerning, say, the ontology of Number. What is this Number and, we would be tempted to add, is it typical of humans? This question, in an all-digital era, seems to be a good question. We don’t have an answer (yet), but this short and emblematic reasoning of Liu Hui shows us how we naturally create symbolic forms (procedures, algorithms) to imprison “monsters”, and how these forms become empowered in a syntactically regulated world. The German philosopher Ernst Cassirer describes three stages of this empowerment: the mimetic stage (the square root procedure still mimics the sensitive figure), the analog stage (the procedure is no longer linked to the sensitive figure, the syntax reigns) and the stage of symbolic expression^7: The function of signification attains pure autonomy. The less the linguistic form still aspires to offer a copy, be it direct or indirect, of the world of objects, the less it identifies with the being of this world and the better it accesses its role and its proper meaning. That is the whole question, for us today, of number: the syntactic game (of the mind, then now of machines) constantly produces new symbolic forms (possibly calculated then written) whose agreement with the sensitive world is no longer guaranteed by mimicry. We grant these forms to our “figures” only afterwards and often by force. So much so that it has become less costly to conform our “figures” (including political and ethical) themselves to the powerful surge of digital forms. 1. ↑ Chemla Karine – Des nombres irrationnels en Chine entre le premier et le troisième siècle – In: Revue d’histoire des sciences, tome 45, n°1, 1992. pp. 135-140. Unless otherwise stated, all quotes are from this article. 2. ↑ An irrational number is a number that does not correspond to any proportion of the form a/b where a and b are whole numbers. The famous number π, which measures the ratio of the circumference of the circle to its diameter, is irrational. 3. ↑ The Nine Chapters on the Mathematical Art is a Chinese mathematics book, composed by several generations of scholars from the 10th–2nd century BCE, its latest stage being from the 2nd century CE. (Wikipedia). 4. ↑ Karine Chemla does not relate in her article what Liu Hui means by “we are not getting through it”. One can imagine that he executed an iterative algorithm of calculation of the square root a certain number of times. But did he really have the certainty that it was impossible “to get through it” (in the sense of a mathematical impossibility) or did he simply want to mean that he did not succeed in overcoming it and that he would postpone the possible further calculation until later? 5. ↑ The rule, implicit in the article, of root multiplication is used by Liu Hui: $\sqrt{x \times y}=\sqrt{x}\times\sqrt{y}$ 6. ↑ The infinity of writing is not inevitable for a rational number. For example, the writing of the number 1/3 is perfectly finished among Babylonians: it is 20 in base 60. 7. ↑ Ernst Cassirer – 1997 – Trois essais sur le symbolique – Œuvres VI September 16, 2017 – Back to Aristotle It is in the collective work “Philosophy and calculus of infinity” (in French, published by Maspero), that we find p.45 the following extract, an echo to what was revealed with Liu Hui: He [Aristotle] sometimes felt […] that the time involved in the composition or the division was not a clear notion – it is even there, still emerging, a fundamental moment of infinitesimal calculus, of the great philosophies which gave it been associated and mathematics in general: if indeed the operation should not be confused with its result, that is to say if the operation is apparently always in power when it comes to the division or the undefined composition for example, we can return the proposition and see that the operation itself can on the contrary be mastered as an operation, therefore (de)finite and infinite at the same time since it then controls a priori an indefinite handling quite distinct from it. Aristotle thus saw […] that the operation must be distinguished from its result, but also from its handling, the latter alone being undefined. It is therefore to Aristotle’s Physics that we should come back to strengthen the principle of a distinction between procedure (operation, number in power) and written sign (result, number in act). But let’s not forget that this distinction acts in the symbolic and says nothing about the number as it manifests itself, in the form of sensitive figures (shapes), in the real world. This site uses Akismet to reduce spam. Learn how your comment data is processed.
{"url":"https://www.puissanceetraison.com/en/liu-hui-overcomes-a-monster/","timestamp":"2024-11-08T10:26:29Z","content_type":"text/html","content_length":"78366","record_id":"<urn:uuid:5857d522-3da7-43e4-8d66-62b260108ef9>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00482.warc.gz"}
Calculus Tutors in Hong Kong Island, Hong Kong | Math Tutors in Near Me - MyPrivateTutor Hong Kong Island, Hong Kong • 310 classes • Algebra, Calculus, Coordinate Geometry, Geometry, Trigonometry Oxford postgraduate as your academic companion for IB, GCE AL, IGCSE, SAT, AP, with a guaranteed grade A. Academic Champion, your Ace Exam Companion. Oxford postgraduate at your service! Math, physics, chemistry, biology, science. GCE AL, IGCSE, IBDP, S... • 310 classes • Algebra, Calculus, Coordinate Geometry, Geometry, Trigonometry Oxford postgraduate as your academic companion for IB, GCE AL, IGCSE, SAT, AP, with a guaranteed grade A. Academic Champion, your Ace Exam Companion. Oxford postgraduate at your service! Math, physics, chemistry, biology, science. GCE AL, IGCSE, IBDP, S...
{"url":"https://www.myprivatetutor.hk/mathematics/calculus-tutors-in-hong-kong-island","timestamp":"2024-11-13T09:20:39Z","content_type":"text/html","content_length":"785049","record_id":"<urn:uuid:6ad4c756-ca01-4cf2-b8f7-5c5f06f30002>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00230.warc.gz"}