Homework 1 CSCE 633

Question 1 — 1-dimensional linear regression: Assume a 1-dimensional linear regression model y = w0 + w1*x. The residual sum of squares (RSS) of the training data Dtrain = {(x1, y1), ..., (xN, yN)} can be written as:

RSS(w0, w1) = Σ_{n=1}^{N} (yn − w0 − w1*xn)^2

We estimate the weights w0, w1 by minimizing the above error.

(a) Show that minimizing RSS results in the following closed-form expressions:

w1 = (Σ_n xn*yn − N*x̄*ȳ) / (Σ_n xn^2 − N*x̄^2)
w0 = ȳ − w1*x̄

Tip: Set the partial derivatives ∂RSS(w0,w1)/∂w0 and ∂RSS(w0,w1)/∂w1 equal to 0. Then solve a 2 × 2 system of linear equations with respect to w0 and w1.

(b) Show that the above expressions for w0 and w1 are equivalent to the following:

w1 = Σ_i (xi − x̄)(yi − ȳ) / Σ_i (xi − x̄)^2
w0 = ȳ − w1*x̄

where x̄ = (1/N) Σ_{n=1}^{N} xn and ȳ = (1/N) Σ_{n=1}^{N} yn are the sample means of input features and outcome values, respectively.

(c) How would you interpret the above expression in terms of the descriptive statistics (e.g. sample mean, variance, covariance) of the populations {xn} and {yn}?

Question 2 — Principled method for learning the step size in gradient descent: In class we discussed that when we use gradient descent to minimize a target function J(w) with respect to w, the step size α(k) at iteration k is a crucial hyperparameter. We further said that we can experimentally determine α(k) through cross-validation. There is actually a principled way to compute the optimal α(k) in each iteration, and we are going to derive the expression for it.

(a) According to the Taylor series expansion, a differentiable function f(x) can be written around x0 as follows:

f(x) = f(x0) + ∇f|_{x=x0}^T (x − x0) + (1/2) (x − x0)^T Hf|_{x=x0} (x − x0) + ...

where ∇f and Hf are the gradient vector and Hessian matrix of f evaluated at x0. Let w(k) be the value of w at the k-th iteration of gradient descent.
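As a quick numerical sanity check (not part of the assignment), the closed-form solutions from parts (a) and (b) can be compared on synthetic data; the data values below are made up purely for illustration:

```python
import numpy as np

# Hypothetical 1-D data; the names (x, y, w0, w1) follow the problem statement.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2.0 + 3.0 * x + rng.normal(0, 0.5, size=50)
N = len(x)
x_bar, y_bar = x.mean(), y.mean()

# Form from part (a)
w1_a = (np.sum(x * y) - N * x_bar * y_bar) / (np.sum(x**2) - N * x_bar**2)
w0_a = y_bar - w1_a * x_bar

# Equivalent form from part (b)
w1_b = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
w0_b = y_bar - w1_b * x_bar

assert np.isclose(w1_a, w1_b) and np.isclose(w0_a, w0_b)
```

The part (c) interpretation is visible here: the part (b) numerator and denominator are (up to a 1/N factor) the sample covariance of x and y and the sample variance of x.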
Show that the second-order Taylor expansion of the target function J(w) around w(k) is the following:

J(w) ≈ J(w(k)) + ∇J|_{w=w(k)}^T (w − w(k)) + (1/2) (w − w(k))^T HJ|_{w=w(k)} (w − w(k))

where ∇J and HJ are the gradient vector and Hessian matrix of J evaluated at w(k).

(b) Show that the above expression of J(w) evaluated at w(k+1) (i.e. at the (k+1)-th gradient descent iteration) can be written as:

J(w(k+1)) ≈ J(w(k)) − ∇J|_{w=w(k)}^T ∇J|_{w=w(k)} · α(k) + (1/2) ∇J|_{w=w(k)}^T HJ|_{w=w(k)} ∇J|_{w=w(k)} · α(k)^2

Tip: Take into account the gradient descent update rule w(k+1) = w(k) − α(k) · ∇J|_{w=w(k)}.

(c) Show that minimizing the above expression with respect to the step size α(k) results in:

α(k) = (∇J|_{w=w(k)}^T ∇J|_{w=w(k)}) / (∇J|_{w=w(k)}^T HJ|_{w=w(k)} ∇J|_{w=w(k)})

The above expression gives a closed-form solution for the step size at iteration k (i.e. α(k)) that minimizes the target function at the next iteration.

(d) What is the cost of computing α(k) at each iteration k using the above expression?

Question 3 — Predicting forest fires: Forest fires are a major environmental issue endangering human lives. This renders their fast detection a key element for controlling them and potentially preventing them. Since it is hard for humans to monitor all forests, we can use automatic tools based on local sensors to do that. Through these sensors we can get information regarding the meteorological conditions, such as temperature, wind, relative humidity (RH), and amount of rain. We can also compute several fire hazard indexes, such as the forest fire weather index (FWI), fine fuel moisture code (FFMC), duff moisture code (DMC), drought code (DC), and initial spread index (ISI). Using these measures, we can predict whether fire is going to occur in the forest, as well as estimate the amount of burned area.
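The step-size rule derived above can be sketched on a toy quadratic objective J(w) = 0.5 wᵀAw − bᵀw, whose gradient is Aw − b and whose Hessian is A; the matrix A and vector b below are arbitrary illustration values, not from the assignment:

```python
import numpy as np

# Steepest descent with the derived optimal step size alpha(k) = g^T g / (g^T H g).
A = np.array([[3.0, 0.5], [0.5, 1.0]])   # Hessian of the toy quadratic (made up)
b = np.array([1.0, -2.0])

w = np.zeros(2)
for k in range(100):
    g = A @ w - b                        # gradient at w(k)
    if np.linalg.norm(g) < 1e-10:
        break
    alpha = (g @ g) / (g @ A @ g)        # closed-form step size from part (c)
    w = w - alpha * g                    # gradient descent update

w_star = np.linalg.solve(A, b)           # exact minimizer, for comparison
assert np.allclose(w, w_star, atol=1e-8)
```

For a quadratic this is exactly steepest descent with exact line search; the part (d) question is visible in the code: each iteration needs a Hessian-vector product, which is the expensive part in high dimensions.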
Such data are part of the “Forest Fires Data Set” of the UCI Machine Learning Repository and their description can be found here: http://archive.ics.uci.edu/ml/datasets/Forest+Fires Inside the “Homework 1” folder on Piazza you can find two files with the train and test data (named “train.csv” and “test.csv”) for our experiments. The rows of these files refer to the data samples, while the columns denote the features (columns 1-12) and the outcome variable (column 13), as described below:

1. X: x-axis spatial coordinate of the forest: 1 to 9
2. Y: y-axis spatial coordinate of the forest: 2 to 9
3. month: month of the year: 1 to 12 to denote ”jan” to ”dec”
4. day: day of the week: 1 to 7 to denote ”mon” to ”sun”
5. FFMC: FFMC index from the FWI system
6. DMC: DMC index from the FWI system
7. DC: DC index from the FWI system
8. ISI: ISI index from the FWI system
9. temp: temperature in degrees Celsius
10. RH: relative humidity
11. wind: wind speed in km/h
12. rain: outside rain in mm/m2
13. area: the burned area of the forest (this is the outcome variable)

(a) Data exploration: Inspect the input features (e.g. you can plot histograms, scatter plots, etc.). Which of the features are continuous and which categorical?

(b) Classification: From data exploration, we can notice that the outcome value (i.e. the burned area) is zero for many samples, meaning that the corresponding forests were not affected by fire. Therefore we can dichotomize the outcome variable based on whether its value is zero or greater than zero. This creates the following two classes:

Class 0: Forests not affected by fire, i.e. area = 0
Class 1: Forests affected by fire, i.e. area > 0

After dichotomizing the outcome variable, we can run a classification task to predict whether or not fire will occur in a certain forest based on the input features.
(b.i) Implement a K-Nearest Neighbor classifier (K-NN) using the Euclidean distance as a distance measure to perform the above binary classification task. Reminder: Don’t forget to normalize the features.

(b.ii) Explore different values of K through cross-validation on the training set. Plot the classification accuracy, i.e. (#samples correctly classified) / (total #samples), against the different values of K.

(b.iii) Report the classification accuracy on the test set using the best K from cross-validation.

(b.iv) Bonus: Instead of using the Euclidean distance for all features, experiment with different types of distances or distance combinations, e.g. the Hamming distance for categorical features. Report your findings.

(c) Linear Regression: Among the forests that were affected by fire, we can use linear regression to predict the actual amount of area that was burned. For this task, we will only use the samples of the train and test sets with burned area (column 13) greater than zero, i.e. area > 0.

(c.i) Plot the histogram of the outcome variable. What do you observe? Plot the histogram of the logarithm of the outcome value, i.e. log(area). What do you observe now?

(c.ii) Implement a linear regression model to fit the outcome data using the ordinary least squares (OLS) solution.

(c.iii) Test your model on the test data and compute the residual sum of squares (RSS) error and the correlation between the actual and predicted outcome variable.

(c.iv) Bonus: Experiment with different non-linear functions of the input features. Report your findings on the train and test sets.
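A minimal sketch of the K-NN classifier with feature normalization, assuming plain NumPy; the toy two-cluster data below is invented for a sanity check and is not the forest-fires data:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """K-NN with Euclidean distance; features are z-score normalized
    using training-set statistics, per the homework reminder."""
    mu, sigma = X_train.mean(axis=0), X_train.std(axis=0) + 1e-12
    Xtr = (X_train - mu) / sigma
    Xte = (X_test - mu) / sigma
    preds = []
    for x in Xte:
        d = np.linalg.norm(Xtr - x, axis=1)        # Euclidean distances
        nearest = np.argsort(d)[:k]                # indices of the k nearest
        votes = y_train[nearest]
        preds.append(np.bincount(votes).argmax())  # majority vote
    return np.array(preds)

# Tiny made-up sanity check: two well-separated clusters.
X = np.vstack([np.zeros((10, 2)), np.ones((10, 2)) * 5])
y = np.array([0] * 10 + [1] * 10)
print(knn_predict(X, y, np.array([[0.1, 0.2], [4.9, 5.1]]), k=3))  # → [0 1]
```

Normalizing with the training-set mean and standard deviation (and reusing them on the test set) keeps large-scale features such as DC from dominating the distance.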
Online calculator: Synthetic division

Synthetic division is a simplified way of dividing a polynomial using only the coefficients of the terms in the polynomial. It is a shortcut method that avoids the need for long division or the use of the polynomial remainder theorem. This calculator performs synthetic division of any polynomials. You may find the method description just below the calculator.

Synthetic division preparation

Synthetic division preparation by example: (3x^4 + 5x^3 + 2x + 4) / (x^2 + 2x + 1).

Preparation steps:
1) Negate the divisor coefficients.
2) Write the dividend coefficients at the top (use zero for missing terms).
3) Remove the highest divisor coefficient.
4) Write the remaining divisor coefficients diagonally on the left.

Synthetic division method for monic divisors

Synthetic division with a monic divisor by example: (3x^4 + 5x^3 + 2x + 4) / (x^2 + 2x + 1).

Division algorithm for a monic divisor:
1) Drop the highest dividend coefficient into the first column of the result row.
2) Multiply the divisor diagonal by the last column value of the result row.
3) Place the multiplication result diagonally to the right of the last result column.
4) Perform the addition in the next column and write the sum in the same column of the result row.
5) Repeat steps 2-4 until you run out of columns in the top row.
6) Sum the values in any remaining columns and write the results in the result row.
7) Separate the result and the remainder. The number of terms in the remainder equals the number of divisor terms minus one.

Example 1 (monic divisor)

Non-monic divisors

Synthetic division with a non-monic divisor by example: (3x^3 + 5x^2 + 7x + 2) / (3x^2 - x - 2).
Division algorithm for a non-monic divisor:
1) Drop the highest dividend coefficient into the first column of the remainder row.
2) Divide the last column value in the remainder row by the first divisor coefficient and write the result in the result row.
3) Multiply the divisor diagonal by the last column value of the result row.
4) Place the multiplication result diagonally to the right of the last result column.
5) Perform the addition in the next column and write the sum in the same column of the remainder row.
6) Repeat steps 2-5 until you run out of columns in the top row.
7) Sum the values in any remaining columns and write the results in the remainder row.
8) Separate the result and the remainder. The result coefficients will be in the last row; the remainder coefficients are in the previous row. The number of terms in the remainder equals the number of divisor terms minus one.

Example 2 (non-monic divisor)

PLANETCALC, Synthetic division
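The steps above can be sketched in code. This is an illustrative implementation of expanded synthetic division (handling both monic and non-monic divisors by dividing through by the leading divisor coefficient), not the calculator's actual source:

```python
def synthetic_divide(dividend, divisor):
    """Expanded synthetic division: divide one polynomial by another using
    only coefficients (highest degree first, zeros for missing terms).
    Returns (quotient, remainder) coefficient lists."""
    out = [float(c) for c in dividend]
    sep = len(dividend) - (len(divisor) - 1)   # split point: quotient | remainder
    for i in range(sep):
        coef = out[i] / divisor[0]             # divide by the leading divisor coeff
        out[i] = coef
        for j in range(1, len(divisor)):       # subtract coef * (rest of divisor)
            out[i + j] -= coef * divisor[j]
    return out[:sep], out[sep:]

# The article's monic example: (3x^4 + 5x^3 + 2x + 4) / (x^2 + 2x + 1)
q, r = synthetic_divide([3, 5, 0, 2, 4], [1, 2, 1])
print(q, r)  # → [3.0, -1.0, -1.0] [5.0, 5.0], i.e. 3x^2 - x - 1 remainder 5x + 5
```

For the non-monic example, `synthetic_divide([3, 5, 7, 2], [3, -1, -2])` gives quotient x + 2 with remainder 11x + 6, matching what polynomial long division would produce.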
Conic Sections – Circles and Pi – Mathigon

Circles and Pi: Conic Sections

The circle is one of four different shapes which can be created using “slices” through a cone. This can be demonstrated using the light cone of a torch: if you point the torch vertically downwards, you see a circle of light. If you tilt the cone, you get an ellipse. If you tilt it even further, you get a parabola or a hyperbola. Collectively, these four shapes are called conic sections. Even though they all look very different, they are closely related: in fact, they can all be generated using the same equation!

Conic sections were first studied by the ancient Greek mathematician Apollonius of Perga, who also gave them their unusual names. In later courses, you’ll learn much more about parabolas and hyperbolas. For now, let’s have a closer look at the ellipse.

An ellipse looks almost like an “elongated circle”. In fact, you could think about it as a circle with two centers – these are called focal points. Just like every point on a circle has the same distance from its center, every point on an ellipse has the same sum of distances to its two focal points. If you have a long string connected to two fixed points, you can draw a perfect ellipse by tracing the maximum reach of the string. There are many other physical representations of how you could draw an ellipse.

Planetary Orbits

You might remember from the very beginning of this course that ancient Greek astronomers believed that the Earth is at the centre of the universe and that the sun, moon and planets move around Earth on circular orbits. Unfortunately, astronomical observation of the sky didn’t quite support this. For example, the sun appeared larger during some parts of the year and smaller during others. On a circle, every point should have the same distance from its center.
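The constant-sum property of the ellipse can be checked numerically. A small sketch (the semi-axis lengths are arbitrary): for an ellipse with semi-axes a > b, the focal points sit at (±c, 0) with c² = a² − b², and every point (a·cos t, b·sin t) has distance sum 2a to the two foci.

```python
import math

a, b = 5.0, 3.0                      # arbitrary semi-major and semi-minor axes
c = math.sqrt(a * a - b * b)         # distance from center to each focus

for t in [0.0, 0.7, 1.9, 3.1, 4.5]:  # a few sample points on the ellipse
    x, y = a * math.cos(t), b * math.sin(t)
    d1 = math.hypot(x - c, y)        # distance to focus (c, 0)
    d2 = math.hypot(x + c, y)        # distance to focus (-c, 0)
    assert abs((d1 + d2) - 2 * a) < 1e-9
```

This is exactly the string construction: the string has fixed length 2a, so the pencil traces all points whose distance sum to the two pins equals 2a.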
To fix this, astronomers added Epicycles to their model of the solar system: planets move on a large circle around Earth, while simultaneously rotating on a smaller circle. While very complicated, this was the most widely accepted model of our universe for more than 1000 years. Over time, people realised that Earth was just one of many planets orbiting the sun (the Heliocentric model), but it wasn’t until 1609 that the astronomer Johannes Kepler discovered that planets actually move on elliptical orbits. The sun is in one of the two focal points of these ellipses. The planets speed up as they get closer to the sun, and slow down as they move further away. A few decades later, Isaac Newton was able to prove Kepler’s observations, using his newly developed laws of gravity. Newton realised that there is a force between any two masses in the universe – similar to the attraction between two magnets. Gravity is what makes everything fall to the ground and gravity is also what makes the planets move around the sun. It is only the great speed at which planets move that prevents them from falling directly into the sun. Using Newton’s laws, you can derive the path that objects take when moving under the force of gravity. It turns out that planets move on ellipses, but other objects like comets can travel on parabolic or hyperbolic paths: they fly close to the sun before turning around and shooting off into the universe, never to come back. According to legend, a falling apple inspired Newton to think about gravity. He was one of the most influential scientists of all time, and his ideas shaped our understanding of the world for nearly 300 years – until Albert Einstein discovered relativity in 1905.
2nd Measurement | early-math-ca

Second grade students transition from measuring lengths with informal or non-standard units to measuring with standard units (inches, feet, centimeters, and meters) and using standard measurement tools. Students learn the measure of length as a count of how many units are needed to match the length of the object or distance being measured. Using both customary (i.e. imperial) units (inches and feet) and metric units (centimeters and meters), students measure the length of objects with rulers, yardsticks, meter sticks, and tape measures. Students become familiar with standard units (e.g., 12 inches in a foot, 3 feet in a yard, and 100 centimeters in a meter) and how to estimate lengths.

In grade two, students also connect the concept of the ruler to the concept of the number line. These understandings are essential to supporting work with number line diagrams. This is an example of using a number line drawing to solve a word problem:

There were 27 students on the bus. Nineteen (19) students got off the bus. How many students are still on the bus?

Student: I used a number line. I saw that 19 is really close to 20. Since 20 is a lot easier to work with, I took a jump of 20. But that was one too many, so I took a jump of 1 to make up for the extra. I landed on 8. So, there are 8 students still on the bus. What I did was 27 – 20 = 7, and then 7 + 1 = 8.
Uncertainty, Confidence, and Hallucination in Large Language Models

How to Spot When Your Large Language Model is Misleading You

LLM Is Just Making Stuff Up

Ever have a conversation with a large language model that sounds super confident, spitting out facts that seem... well, a little fishy? 🐟 You're not alone. One of the biggest challenges in working with Large Language Models (LLMs) is verifying the correctness of their output. Despite their advanced capabilities, LLMs can sometimes generate information that appears accurate but is fabricated. This phenomenon, known as 👉 hallucination, can lead to misinformation and erode trust in AI systems.

Hallucination in AI is not a new phenomenon. Deep learning models, in general, are notorious for their over-confidence in predictions. For instance, in classification tasks, these models can assign a very high probability to a label prediction even when the prediction is incorrect [1]. Deep learning models can be misleading about how powerful they truly are. In the context of AI-generated text, large language models (LLMs) can produce content that appears real and coherent, yet is irrelevant and unacceptable. Recent papers [2] categorize LLM hallucinations into 2 main types:

• Factuality Hallucination: This is like the LLM making stuff up entirely. It might sound convincing, but the information is just plain wrong. Think of it telling you the capital of France is New York City (👉 factual inconsistency, i.e., simply wrong) or that the Roman Empire was the first civilization to discover Antarctica (👉 factual fabrication, i.e., no supporting evidence).

• Faithfulness Hallucination: This happens when the LLM strays from your topic or instructions. It might weave a good story but doesn't answer your question or follow the original idea. Imagine asking for a recipe and getting a poem about kitchens instead (👉 instruction inconsistency).
Another example is that LLMs can summarize an input document while perpetuating any incorrect or unsupported information it contains (👉 context inconsistency), or perform a wrong mathematical derivation (👉 logical inconsistency).

👀 The dangerous thing is that the generated text looks really smooth and confident, which makes it hard to know whether the content is hallucinated or not.

Imagine the catastrophic consequences of deploying LLMs in real-world applications without addressing their hallucination issues. For example, an LLM might mistakenly diagnose a benign condition when the symptoms indicate a serious illness, putting a patient's life at risk. The real-world impact of these errors is evident: a critical hallucination error by Google's early LLM, BARD, cost the company $100 billion.

A reliable AI system should be upfront about its limitations. Imagine asking a friend a question, and they spout out an answer with zero hesitation, even if they're not entirely sure. Not ideal, right? The same goes for AI. The best AI systems are those that can signal their uncertainty when they're unsure. We don't want an AI that's either arrogantly confident about everything or so timid it never takes a guess.

Hallucination or not, the key issue is identifying when an LLM's output is unreliable. Now, the real questions are: 🧠 Can we detect when LLMs are generating misleading content? Or even better, can we mitigate the hallucination or dehallucinate LLMs' output? Today's focus is on the first question. We'll explore the second question in the next post.

Detecting Deception: Tools and Methods for Identifying LLM Falsehoods

We all know that feeling – you ask a language model a question, and it answers with booming confidence. So, 🧠 how can we tell if an LLM is just making stuff up? There are two key approaches to detect LLM "deception":

• Score-based Methods: One way to sniff out an LLM's fib is to look at how uncertain it is about its answer.
Imagine a friend who gives you an answer with a shrug and a mumbled "maybe." That might raise a red flag, right? Similarly, LLMs that express high uncertainty about their output are more likely to be unreliable. By measuring this uncertainty as a score, we can get a sense of how trustworthy the information might be. In this vein, several approaches emerge, some inspired by heuristic methods in uncertainty estimation for deep learning, while others rely more on theoretical principles.

• Calling in the Backup: Another approach involves using external models called conformal predictors. Think of them as using AI to control AI. These models analyze the LLM's information and predict whether the output is real or fabricated (hallucinated). Two approaches to Conformal Prediction:

1. LLM Evaluator: This method utilizes another LLM (or potentially the same one) to evaluate the generated text itself. This approach bypasses the need for handcrafted features but potentially introduces additional complexity.

2. Simple Conformal Predictor: This approach leverages well-established methods like linear or logistic regression. However, it relies heavily on extracting informative features from the LLM's outputs.

The effectiveness of these methods depends on what information we have access to about the LLM itself. For example, the prediction will be more accurate if we can peek "inside" the LLM and see its internal workings (like a white box). However, if the LLM is a black box (we can't see its inner workings), we might need to ask it multiple times to get a clearer picture. Crucially, all methods assume that LLMs have some awareness of their uncertainty or confidence levels, meaning they have a rough idea of how accurate their outputs are. Without this self-awareness, estimating uncertainty or predicting correctness solely by observing the LLMs is impossible. 👀 Fortunately, recent evidence has pointed out that this assumption is practical and LLMs are aware of what they know or don't know.
All we need to do is find good ways to extract or trigger this self-awareness.

To sum up, in any method, we just need to craft a score to measure the uncertainty/confidence of LLMs. For methods that do not use external conformal predictors, the score can be computed as a scalar using different approaches. The score can then be calibrated with a training dataset to find a proper threshold for detection decision-making. For conformal prediction approaches, the score can be extracted from the predictor's prediction logits; we also need a training dataset to train the predictor. Sometimes, if the logits or the score is good, we can just use a default threshold of 0.5 without calibration.

Score-based Approaches for Uncertainty Estimation in LLMs

Heuristic Uncertainty as a Clue

Just like other powerful AI models, LLMs have a built-in tool to estimate how likely their answers are to be correct. This tool is embedded in the final layer, known as the softmax layer, which calculates the probability of each token in the vocabulary appearing at the current timestep. Typically, the token with the highest probability is the one you see in the LLMs' output.

Unfortunately, as discussed earlier, this built-in probability is not reliable. It does not reflect the reasonable confidence LLMs should have. For example, an LLM might assign a high probability to a factually incorrect answer, simply because the answer aligns with the patterns it has observed in its training data. Furthermore, the built-in probability only applies to individual tokens, not to the overall coherence or accuracy of the entire response. This limitation hinders our ability to gauge the trustworthiness of a complete sentence. Ideally, we need methods to assess the probability of the entire content being factually sound and logically consistent. Fortunately, the field of deep learning offers established methodologies for uncertainty estimation. As Huang et al.
(2023) highlight in their comprehensive survey, three key approaches can be directly adapted from this literature to quantify the uncertainty associated with LLM responses [2].

👉 Probability Aggregation: This approach combines individual token-level probabilities (often in the form of log probabilities) into a single uncertainty score for the entire sentence. 👀 This approach is simple and economical, requiring only one forward pass of the LLM to get the log probs. However, it needs access to softmax-layer information, which may be unavailable for black-box LLM services. For example, max and average aggregation can be used to estimate the uncertainty of a sentence i:

U_max(i) = max_j (−log p[ij]),  U_avg(i) = −(1/J) Σ_j log p[ij]

where p[ij] is the probability of the token at position j of sentence i and J is the sentence length. 👀 Taking the average of log probs is equivalent to measuring uncertainty as 👉 Perplexity. Since LLMs inherently predict probabilities for all possible tokens at each step, they produce a distribution p(x[j]) for the j-th token. Hence we can leverage entropy, a well-established measure of uncertainty, H(X[j]) = −Σ p(x[j]) · log p(x[j]), to estimate sentence-level uncertainty by averaging over positions:

U_ent(i) = (1/J) Σ_j H(X[j])

👉 Uncertainty through Voting: This trick involves generating multiple responses from the LLM for the same prompt. We then analyze the variance (how different or inconsistent the responses are from each other). The idea is that if the LLM keeps spitting out similar answers, it suggests a more consistent and potentially reliable thought process. The more the responses veer off course, the higher the uncertainty. Diving deeper, we can derive 2 metrics: (1) variation ratio (VR) and (2) variation ratio for original prediction (VRO).
In particular, if we sample T sentence responses from the LLM and can measure the difference between two responses p[i] and p[j] via a function dist(), we have:

VR = (1 / (T(T−1))) Σ_{i≠j} dist(p[i], p[j]),  VRO = (1/T) Σ_{t=1}^{T} dist(p[orig], p[t])

The main difference between the VRO and VR formulas is that VRO only considers the variance between the original response and any additional generated responses (assigning a weight of 1 to the original response and 0 to the others). Here, the distance function can be based on the BLEU score, which captures lexical matching, e.g. dist(p[i], p[j]) = 1 − BLEU(p[i], p[j]). We can also use BERT as the function to capture semantic similarity/difference between responses. The SelfCheckGPT paper [3] proposes a similar uncertainty formula using BERTScore to measure the uncertainty of a response r[i]: for each of the N sampling iterations, we sample several sentences, select the one most similar to the original response in terms of BERTScore, and then take the average.

👀 To sample different outputs from LLMs, we may need access to the temperature hyperparameter t. t=0 means the generation is deterministic and always ends up with the same output; t>0 enables more stochasticity in the generation process. Other than that, the voting approach can work well with black-box LLMs.

👉 Uncertainty through Perturbation: One fascinating aspect of LLMs is their inherent randomness during text generation. Like a chain reaction, a tiny change in one predicted word can ripple through the entire sequence, potentially leading to completely different meanings. This stochastic nature highlights the sensitivity of LLMs throughout the prediction process, and we can measure it by:

1. Choosing a token and replacing (perturbing) it with other tokens (top-k highest probabilities). This leads to several responses.
2. Computing the variance over the responses as an uncertainty score, as in the Voting mechanism above.

🧠 Which token should we mess with?
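A sketch of the VR and VRO voting scores, substituting a simple token-overlap distance for BLEU purely for illustration; the sampled responses are made up:

```python
def token_distance(a, b):
    """Cheap stand-in for 1 - BLEU: one minus token-set Jaccard overlap."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return 1.0 - len(sa & sb) / len(sa | sb)

def variation_ratio(responses):
    """VR: mean pairwise distance over T sampled responses."""
    T = len(responses)
    total = sum(token_distance(responses[i], responses[j])
                for i in range(T) for j in range(T) if i != j)
    return total / (T * (T - 1))

def variation_ratio_original(original, responses):
    """VRO: mean distance between the original response and the samples."""
    return sum(token_distance(original, r) for r in responses) / len(responses)

# Made-up sampled answers to the same prompt.
consistent = ["paris is the capital", "paris is the capital", "the capital is paris"]
scattered = ["paris", "london maybe", "it could be berlin"]
assert variation_ratio(consistent) < variation_ratio(scattered)
```

Swapping `token_distance` for an actual BLEU or BERTScore-based distance changes the numbers but not the structure of the computation.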
The authors propose 3 ways:

• Most Uncertain Spot: This refers to the position in the generated sequence where the LLM itself seems unsure about which word to pick next (highest entropy, Max).
• Most Confident Spot: This is the opposite of the first, where the LLM seems very certain about the word it chose (lowest entropy, Min).
• Biggest Shift: This focuses on the point where the LLM's confidence level changes the most compared to the previous word (maximum change in entropy, MaxDiff).

👀 This approach requires major interference in the LLM's computation process, and is thus more suitable for the white-box setting.

The research found that getting the LLM to vote on multiple responses is the best way to gauge uncertainty, followed by tweaking the text and looking at the changes, and lastly, simply looking at the probabilities the LLM assigns to each word. 🧪 Voting > Perturbation > Probability Aggregation

Quantifying Uncertainty with Information Theory

Continuing the line of reasoning that multiple samples help hallucination detection, we can investigate the hidden states of the LLM instead of just probing outside. Concretely, Chen et al. (2024) sample responses multiple times, generating multiple hidden states and feature vectors, providing richer information about the LLM's confidence in its responses. The authors propose a metric based on the eigenvalues of the covariance of these feature vectors as an uncertainty measure, 👉 EigenScore [7]. In particular, given K hidden state vectors composing a matrix Z ∈ R^{d×K}, they compute the covariance matrix:

Σ = Z^⊤ · J[d] · Z

where J[d] = I[d] − (1/d) · 1[d]1[d]^⊤ is the centering matrix and 1[d] ∈ R^d is the all-one column vector. Then, the EigenScore can be calculated as the logarithm of the determinant (LogDet) of the regularized covariance matrix:

EigenScore = (1/K) · logdet(Σ + α·I[K])

👀 The EigenScore represents the differential entropy in the sentence embedding space under a Gaussian assumption. Hence, it is reasonable to use it for measuring uncertainty.
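A rough sketch of an EigenScore-style computation, assuming access to hidden-state embeddings of K sampled responses; the dimensions, regularizer α, and random embeddings are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def eigenscore(Z, alpha=1e-3):
    """EigenScore-style uncertainty: log-determinant of the covariance of
    K sentence embeddings. Z has shape (d, K), one d-dim hidden state per
    sampled response. Higher score = more diverse responses = less certain."""
    d, K = Z.shape
    J = np.eye(d) - np.ones((d, d)) / d            # centering matrix J_d
    cov = Z.T @ J @ Z                              # K x K covariance
    sign, logdet = np.linalg.slogdet(cov + alpha * np.eye(K))
    return logdet / K

rng = np.random.default_rng(1)
base = rng.normal(size=(64, 1))
similar = base + 0.01 * rng.normal(size=(64, 5))   # 5 near-identical embeddings
diverse = rng.normal(size=(64, 5))                 # 5 unrelated embeddings
assert eigenscore(similar) < eigenscore(diverse)
```

The α·I term plays the same numerical-stability role as in the paper's regularized LogDet: without it, near-duplicate responses make the covariance singular and the determinant collapses to zero.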
The authors also suggest clipping the features during the computation of the EigenScore to reduce overconfident estimation: each feature value h is clipped to the range [h[min], h[max]], where h[min] and h[max] are hyperparameters that can be tuned or calibrated.

More principled research [4] throws shade on LLMs' hallucinations by revisiting the basics of uncertainty in machine learning. It turns out there are two big types:

• Epistemic Uncertainty: When the LLM just doesn't know enough (think facts or grammar rules). This can happen because it hasn't seen enough training data or just isn't powerful enough yet.
• Aleatoric Uncertainty: This is when the question itself is tricky. Imagine there are multiple right answers, making it a guessing game even for the smartest LLM. Note that this kind of uncertainty is common in LLM settings because there can be many valid ways to generate a reasonable response.

So, the lower the epistemic uncertainty, the more likely the LLM's answer is on point. Since aleatoric uncertainty is not the fault of the model and we cannot do anything about it, it is important to differentiate the two sources of uncertainty.

❌ The problem with heuristic approaches is that they only measure LLM uncertainty as a whole, not the inherent ambiguity in the problem itself (aleatoric uncertainty). This can be misleading. For example, a perfect predictor might have high aleatoric uncertainty, while a bad one might only have high epistemic uncertainty; both would appear equally uncertain under heuristic methods. Therefore, the authors propose to focus on identifying instances where only the epistemic uncertainty is large, which would suggest that the response is likely hallucinated.

To this end, they propose estimating 👉 epistemic uncertainty via an iterative prompting procedure. Here's the trick: first, they ask the model to respond to a query. Then, they ask for another response to the query plus the first response. After that, they request a third response given the query and the first two responses, continuing this process.
If the LLM keeps changing its response across trials, it suggests a lack of confidence in its knowledge. In contrast, if the LLM consistently provides answers insensitive to the concatenation of its previous responses, it indicates a stronger grasp of the topic. 👀 In other words, the responses should be independent. This means that the joint distribution of these responses, for a fixed query, must be a product distribution. To illustrate the point, the authors observe the probabilities the LLM assigns to the correct answers when prompting with the iterative procedure.

👀 Why? Intuitively, if the question was seen during training, the attention key and query weights of the LLM are tuned to project the question to higher attention scores than other sentences. Thus, the question will be attended to the most regardless of the context length, and the LLM will always have a chance to look at the question and give the right answer. On the contrary, if the question is novel, the weights cannot do anything, and as the context gets longer, the attention can land anywhere, leading to wrongly attended input for the LLM to answer from.

In short, the iterative prompting procedure gives us a hint about the uncertain behavior of the LLM. Given the right motivation, we can now derive a robust uncertainty score. Formally, given a query x ∈ X and possible responses Y[1], . . . , Y[t], a family of prompts F = {F[t] : X → X | t ∈ N} is defined, where the prompt function F[t](x, Y[1], . . . , Y[t]) combines the query x with the previous responses Y[1], . . . , Y[t]. Then, we can model the distribution of the sequence of responses given the query x. 👀 The chain rule only holds approximately because the prompt function F[t] is used to combine the random variables; hence it is a pseudo joint distribution.

Given this formulation of the joint distribution, it is intuitive to say that the LLM's responses Y[1], …, Y[n] | x are wrong if the LLM's probability of Y[1], …, Y[n] | x is unlike the ground-truth probability of Y[1], …, Y[n] | x.
Thus, a metric that can measure the truthfulness of the LLM's output is the KL divergence between the LLM's joint distribution and the ground-truth joint distribution. Yet, we don't know the ground-truth distribution. Fortunately, we can replace the KL with an estimable lower bound. Computing the exact mutual information requires evaluating Q over its entire support, which can be infinite. Therefore, the authors propose to estimate the term with a sampling-based approximation:

1. Sample X[1], . . . , X[k] sequences of responses from the LLM.
2. Construct the set of indices of unique elements S = {i ∈ [k] : X[i] ≠ X[j] ∀ j < i}.
3. Construct empirical distributions for all i ∈ S.
4. Finally, compute the estimated mutual information, where 𝛾 and k are hyperparameters.

Model-based Hallucination Detection

LLMs as Evaluators

It seems like a chicken-and-egg problem to use LLMs to detect LLMs' falsehoods – 🧠 how can an LLM identify a lie if it chose to generate the lie in the first place? Interestingly, early research has shown that they can do it [5]. This is reasonable, since humans also exhibit this behavior. We often make mistakes and only realize them upon reflection. Similarly, language models can acknowledge and correct their errors when shown their mistakes.

Enhancing validation reliability is possible by employing a more robust LLM to verify the outputs of a less powerful one. This approach, commonly used by the open-source community for benchmarking LLM improvements, involves leveraging a larger or more advanced model to assess the accuracy of a smaller or less-developed model. The detection framework is very simple.

👀 Simple evaluation prompts may not work well all the time, especially when the LLM Evaluator is not stronger than the main LLM. Improving the accuracy of the LLM evaluator requires special methods [5]. One property the research found is that LLMs excel at calibrating multiple-choice and true/false questions.
Put simply, the probabilities they assign to the options are somewhat reliable. 👀 This property is more evident if the multiple-choice question has a suitable format. For example, if there is a "None of the above" choice, the quality of calibration may be reduced. We may also need to tune the temperature t to get good probabilities. Thus, they propose a simple trick to use 👉 LLM prompting without finetuning to make the evaluation more accurate:

1. Present the response to the LLM Evaluator and ask if the response is True or False.
2. Measure the probability P("True") that the LLM Evaluator assigns to the token "True".

An example to illustrate the evaluation prompt:

To enhance accuracy, the authors suggest presenting the model with additional examples for comparison. For example, we can generate a total of 5 responses and then ask the model to assess the validity of one of them—the original response of the LLM. 👀 The result can be further improved with few-shot prompting techniques. In short, we can say that Comparison (Few-shot) > Comparison > One Proposed Answer.

In addition to prompting, the authors also propose 👉 finetuning LLMs for the detection task. Concretely, they train LLMs to predict whether they know the answer to any given free-form question, i.e., estimating P(IK) ("I know"), using two approaches:

• Value Head Integration: This approach introduces an additional "head" to the LLM architecture. This head is specifically trained to predict P(IK) as a logit value. A key advantage of this method lies in its flexibility. We can probe the value head at any point during text generation, allowing for dynamic uncertainty assessment.

• Natural Language Prompt-based Training: This approach leverages natural language processing (NLP) techniques. They train the LLM to respond to the prompt: "With what confidence could you answer this question?"
The model's target output is a human-readable percentage value (0% – 100%) reflecting its estimated confidence level in answering the question. This method offers a more intuitive interpretation of uncertainty for users. ❌ Unfortunately, the Natural Language Prompt-based Training approach fails, so the authors only pursue the Value Head Integration approach.

When training a model, it's essential to prepare training data. Similar to training other conformal predictors, we need data in a binary classification format:

• X: the input and response from the LLM
• Y: whether the response is correct or not

In practice, they generated 30 response samples per question input. If 20 samples were deemed correct, they would have 20 positive-label data points in the training set, indicating the model "knew" the answer. Conversely, 10 incorrect samples resulted in 10 negative-label data points. The LLM is finetuned so that its value head output follows the ground-truth labels.

The results indicate that finetuning generally helps the model distinguish between correct and incorrect responses. In the in-distribution setting across datasets, the LLM's predicted P(IK) aligns somewhat with the ground truth. However, when generalized to a different dataset (from TriviaQA to Mixed-Arithmetic), this differentiation becomes less clear. Despite the initial promising results, there are significant limitations with the approach:

❌ High Detection Cost: The cost of detection is high because it relies on LLMs. These models require substantial computational resources and energy, leading to increased expenses in terms of both hardware and operational costs.

❌ Insufficient Reliance on Textual Responses: Simply relying on the textual response to determine truthfulness is inadequate. Textual responses alone cannot comprehensively reveal the correctness of the information because LLMs are very good at making things look real.
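The labeling scheme above (30 samples per question, one labeled data point per sample) can be sketched as follows. The function and grader names are hypothetical, purely for illustration:

```python
# Sketch of how the P(IK) training set is assembled from sampled answers:
# each correct sample yields a positive example, each incorrect one a negative.
# build_pik_dataset and grade are made-up names, not from the paper.
def build_pik_dataset(question, samples, grade):
    return [(question, 1 if grade(ans) else 0) for ans in samples]

samples = ["4"] * 20 + ["5"] * 10          # 20 correct, 10 incorrect samples
data = build_pik_dataset("What is 2 + 2?", samples, grade=lambda a: a == "4")

assert len(data) == 30
assert sum(label for _, label in data) == 20   # 20 positive-label points
```

The value head is then trained on (question, label) pairs like these, so its output approximates the fraction of samples the model gets right — i.e., P(IK).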
Simple Conformal Predictor

The key to catching hallucinations might lie within the LLMs themselves. By peering deeper into their internal workings, we could extract valuable clues about their current state and what they "believe" to be true. This richer information would significantly boost the accuracy of hallucination detection. Think of it this way: with a clearer picture of the LLM's thought process, we wouldn't need such a complex detector. Even a simpler classifier could do the job if we have the right features to analyze. The workflow becomes:

This simple idea is attractive because a simple conformal predictor such as a feed-forward neural network can be used to perform the detection. However, the nature of these features poses several challenges:

🤔 Is it easy and cost-effective to extract these features?
🤔 Are these features informative, and can they generalize well to new prompts and different large language models (LLMs)?

Now, the main question is: 🧠 Which features should we extract? One candidate is 👉 the internal states of the LLMs. Recent works have found that the internal states of LLMs are reliable sources of information for truthfulness detection on the final response [6,7]. 👀 It is important to note that this must be the internal states, not the response text or the response embedding vector. As quoted in their paper:

We hypothesize that the truth or falsehood of a statement should be represented by, and therefore extractable from, the LLM's internal state. Source: [6]

Ok, let's use the internal states, which are represented by the hidden layers of the LLMs. 🧠 Which layers should we use? Intuitively, the last hidden layer seems like a good candidate – it should theoretically hold all the processed information. But there's a catch: this layer is primarily focused on predicting the next word in the sequence, not necessarily retaining long-term context.
Conversely, layers closer to the input are better at extracting basic features from the data, but might not capture the bigger picture. To find out, the authors try out several hidden layers of the LLMs as the features. For each chosen layer, the feature vector can simply be the average across token timesteps or the last token's hidden state. The results reveal that the middle layers perform best.

Recently, a more detailed investigation into the hidden states of LLMs aims to find out if these internal states can signal the risk of hallucination based on the given queries [8]. The goal is to see if we can reliably estimate this risk even before the LLM generates a response, i.e., 👉 self-awareness. Self-awareness is the ability in humans that causes us to hesitate before responding to queries or making decisions in situations where we recognize our lack of knowledge (we know what we don't know). The authors want to verify that ability in LLMs by studying LLMs' internal states. Concretely, they use the internal states corresponding to the last token of queries, denoted as x[q]. The conformal predictor, or estimator, employed is a variant of the multilayer perceptron (MLP) adapted from Llama's architecture, formulated to predict the hallucination risk H.

They also prepare a dataset containing both known and unknown queries for the LLMs. The LLMs are expected to be uncertain about the unknown queries, which they have never encountered before. They train the estimator on the dataset and compare it with simple baselines such as Perplexity and Prompting to illustrate the point that internal states are really good indicators of uncertainty for unknown queries. 🧪 Internal-State Conformal Predictor > PPL > ICL Prompt > Zero-shot Prompt.

Hidden states, while powerful tools within Large Language Models (LLMs), come with inherent limitations:

❌ Architecture Dependence: Extracting hidden states is intrinsically tied to specific LLM architectures and models.
This creates a roadblock when transferring the extraction process across different LLMs. Each LLM architecture might require unique approaches to access and interpret its hidden states.

❌ Sensitivity and Generalizability: Hidden states are demonstrably sensitive to the input data and the specific LLM they are extracted from. This sensitivity poses a significant challenge to generalizability. Conformal predictors trained on a particular dataset's hidden states, for instance, might not perform well when applied to hidden states derived from a different dataset or LLM.

In a promising new direction, researchers have proposed an alternative approach that bypasses hidden states altogether. This method, 👉 Lookback Lens [9], focuses on extracting features directly from the attention scores generated by LLMs. We focus on attention because it reveals how much the LLM considers the given context when generating text. This is especially valuable compared to other internal model workings. Since attention provides a human-understandable measure, it becomes a powerful tool for catching and fixing made-up information (hallucinations) in the generated text.

When a Transformer-based LLM performs attention, it attends to both the context tokens and its newly generated tokens. The authors aggregate attention scores at attention head h and layer l over these two types of attention: A[context](l,h,t), the average attention on the N tokens of the context, and A[new](l,h,t), the average attention on the tokens generated before timestep t. Hence, the lookback ratio LR(l,h)(t) for head h in layer l at time step t is:

LR(l,h)(t) = A[context](l,h,t) / (A[context](l,h,t) + A[new](l,h,t))

👀 Intuitively, if the LLM focuses more on the context (the ratio is higher), it tends to be more reliable and less prone to hallucination. Of course, we can combine different layers, heads, and timesteps to form a combined feature vector for a span of generated text Y = {y[t], y[t+1], ..., y[t+T−1]}. Given the feature vector, we can employ a simpler classifier (logistic regression) to detect if it is factual or hallucinated.
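A minimal sketch of the lookback ratio for a single head, assuming we already have the softmaxed attention row at one generation step. The attention values below are made up for illustration:

```python
# Lookback ratio for one head at one step: mean attention mass on the N context
# tokens versus mean mass on previously generated tokens. `attn` is a made-up
# softmaxed attention row; n_context is the number of context tokens.
def lookback_ratio(attn, n_context):
    context = attn[:n_context]
    generated = attn[n_context:]
    a_ctx = sum(context) / len(context)
    a_new = sum(generated) / len(generated) if generated else 0.0
    return a_ctx / (a_ctx + a_new)

# A step that leans on the context (grounded) vs. one that ignores it.
grounded = [0.20, 0.30, 0.25, 0.15, 0.05, 0.05]     # 4 context + 2 new tokens
ungrounded = [0.02, 0.03, 0.02, 0.03, 0.45, 0.45]

assert lookback_ratio(grounded, 4) > lookback_ratio(ungrounded, 4)
```

Stacking this ratio over all layers, heads, and the timesteps of a span yields the feature vector that the logistic-regression detector consumes.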
The experimental results are promising, at least for summarization tasks, where attending to the context is crucial for a faithful summary. 🧪 Lookback Lens > Hidden States > Prompt

The results also reveal that Lookback Lens might not always fit the training data perfectly, but it consistently performs better on completely different tasks (out-of-domain tasks). Lookback Lens analyzes attention maps (lookback ratio features), which are more robust than hidden states, and is thus powerful and adaptable, making it useful for a wider range of problems. Given the lookback ratio as a score, we can also measure the factuality or confidence of different generated candidates.

Final Thoughts: The Future of LLM Hallucination Detection

While the methods explored here offer promising avenues for detecting LLM hallucinations, there's still much room for exploration. Future research directions include:

• Improved uncertainty estimation for LLMs: Refining techniques for LLMs to better quantify their uncertainty about generated content.
• Novel methods for leveraging internal LLM states: Exploring techniques to analyze internal LLM representations to glean deeper insights into the generation process and identify potential hallucinations.
• Integration with factual knowledge bases: Developing frameworks that seamlessly integrate LLM outputs with external knowledge sources to verify factual consistency and enhance detection accuracy.
• Benchmarking and interpretability: Establishing standardized benchmarks for evaluating hallucination detection methods and fostering interpretable models that provide clear explanations for their decisions.

By addressing these challenges, we can move towards a future where LLMs are reliable partners in human endeavors, offering creative and informative outputs while minimizing the risk of misleading information. This will be crucial for fostering trust and wider adoption of LLM technology across various domains.

[1] Guo, Chuan, et al.
“On calibration of modern neural networks.” International Conference on Machine Learning. PMLR, 2017.

[2] Huang, Lei, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen et al. "A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions." arXiv preprint arXiv:2311.05232 (2023).

[3] Manakul, Potsawee, Adian Liusie, and Mark JF Gales. "SelfCheckGPT: Zero-resource black-box hallucination detection for generative large language models." arXiv preprint arXiv:2303.08896 (2023).

[4] Yadkori, Yasin Abbasi, Ilja Kuzborskij, András György, and Csaba Szepesvári. "To Believe or Not to Believe Your LLM." arXiv preprint arXiv:2406.02543 (2024).

[5] Kadavath, Saurav, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer et al. "Language models (mostly) know what they know." arXiv preprint arXiv:2207.05221 (2022).

[6] Azaria, Amos, and Tom Mitchell. "The internal state of an LLM knows when it's lying." arXiv preprint arXiv:2304.13734 (2023).

[7] Chen, Chao, Kai Liu, Ze Chen, Yi Gu, Yue Wu, Mingyuan Tao, Zhihang Fu, and Jieping Ye. "INSIDE: LLMs' Internal States Retain the Power of Hallucination Detection." In The Twelfth International Conference on Learning Representations, 2024.

[8] Ji, Ziwei, Delong Chen, Etsuko Ishii, Samuel Cahyawijaya, Yejin Bang, Bryan Wilie, and Pascale Fung. "LLM Internal States Reveal Hallucination Risk Faced With a Query." arXiv preprint arXiv:2407.03282 (2024).

[9] Chuang, Yung-Sung, Linlu Qiu, Cheng-Yu Hsieh, Ranjay Krishna, Yoon Kim, and James Glass. "Lookback Lens: Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps." arXiv preprint arXiv:2407.07071 (2024).

I hope you enjoy the article.

Disclaimer: While every effort is made to provide accurate and unbiased information, errors may occur. Let me know if you catch any error.
Modeling and Experimental Tools with Prof. Magnes

Project Plan: A Guide To Convolution In Action

Description of project

Convolution is the theory behind interpreting the data presented. When physicists use optical tools for their experiments, they need to understand whether the optics they are using will transmit the proper information. In practice one is limited by the material of the filters, as the wave fronts get distorted by the material they pass through. In order to work with what you have, physicists model their possible optics layout before purchasing the optics. To retrieve the desired information, physicists model the convoluted wave front and then prepare a good deconvolution mechanism that in practice should yield the necessary data. This project is an attempt to understand convolution and deconvolution of electromagnetic waves through optical filters. I will explore the theory behind convolution, demonstrate examples of convolution, attempt deconvolution of convoluted distributions, and then hopefully show a real-world example with actual optical filters.

Convolution is the mixture of functions that individually have known Fourier transforms. A Fourier transform maps a square-integrable function to its frequency-domain counterpart. Both the function and the Fourier transform describe the same system, though under different but related spaces (see: Plancherel's Theorem). The advantage is that if you have a machine that measures time but you are trying to measure the system as a function of frequency, then you can measure the system with that machine and simply take the Fourier transform of your data. Unfortunately, the spreads of the Fourier pairs are bounded by the uncertainty principle. A function with a narrow spread will yield a transformation with a wider spread, and vice versa.
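This spread trade-off is easy to check numerically. Here is a quick pure-Python sketch (the project itself will use MATLAB and Mathematica) comparing the frequency-domain widths of a narrow and a wide Gaussian pulse:

```python
import math
import cmath

def dft(signal):
    """Naive discrete Fourier transform -- fine for a small demonstration."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                for t in range(n))
            for f in range(n)]

def rms_spread(values, axis):
    """RMS width of |values|^2 treated as a density over the given axis."""
    weights = [abs(v) ** 2 for v in values]
    total = sum(weights)
    mean = sum(a * w for a, w in zip(axis, weights)) / total
    variance = sum((a - mean) ** 2 * w for a, w in zip(axis, weights)) / total
    return math.sqrt(variance)

n = 256
center = n // 2
# Bins above n/2 represent negative frequencies.
freq_axis = [f if f < n // 2 else f - n for f in range(n)]

def gaussian(sigma):
    return [math.exp(-((t - center) ** 2) / (2 * sigma ** 2)) for t in range(n)]

narrow, wide = gaussian(2.0), gaussian(16.0)
# The narrower pulse in time has the broader spectrum, and vice versa.
assert rms_spread(dft(narrow), freq_axis) > rms_spread(dft(wide), freq_axis)
```

The assertion encodes exactly the trade-off stated above: shrinking the time-domain width of a Gaussian stretches its spectrum.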
The convolution of two functions f and g is defined as (f ∗ g)(t) = ∫ f(τ) g(t − τ) dτ. The convolution theorem then states that the Fourier transform of a convolution is the product of the Fourier transforms, up to a constant factor that depends on the normalization convention. The implication of this theorem is that if any arbitrary curve can be expressed as a product of functions with given Fourier counterparts, then it can undergo deconvolution to yield the desired data as represented by known functions (3).

All work will either be done on my computer or at the sci-vis lab.

Week 1: I will become familiar with Professor Magnes's MATLAB script and reconstruct it in Mathematica. I will use the information provided for me in the texts to create new examples of various convoluted distributions. I will also provide an introduction to the theory that pertains to my project.

Week 2: By now I should be familiar enough with Mathematica to display some cool examples. I will attempt to provide an example of real-world convolution of two optical filters from Thorlabs. In practice, convolution of two actual filters is not as easy as adding two equations. I will need to figure out how to generally apply my ideal examples to sets of data.

Week 3: I will now attempt to demonstrate the deconvolution of a distribution. I will show this by using one of my previous examples. In theory I should be able to demonstrate proper deconvolution of two functions.

Week 4: Now that I have demonstrated deconvolution, I should be able to demonstrate deconvolution of the data I had previously showed in convolution form. This would be very tricky, but hopefully I should be able to succeed.

Week 5: I may be setting the bar too high, so possibly some of my work will take more time than expected. I should be finished with everything now. Here I will tweak my project and conclude my work.

Note: I may stray from my timeline, but that is only because I got caught up in an interesting phenomenon. Should I decide to alter my direction, I will update my timeline accordingly.
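Before any of the Mathematica work begins, the convolution theorem introduced earlier can be sanity-checked numerically. For a circular (periodic) convolution and the unnormalized DFT, the constant factor is exactly 1, so the transform of the convolution equals the product of the transforms:

```python
import math
import cmath

def dft(signal):
    """Naive discrete Fourier transform of a short sequence."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                for t in range(n))
            for f in range(n)]

def circular_convolve(f, g):
    """Periodic convolution, matching the DFT's implicit periodicity."""
    n = len(f)
    return [sum(f[m] * g[(t - m) % n] for m in range(n)) for t in range(n)]

n = 64
f = [math.exp(-((t - 20) ** 2) / 18.0) for t in range(n)]   # two test pulses
g = [math.exp(-((t - 40) ** 2) / 50.0) for t in range(n)]

lhs = dft(circular_convolve(f, g))              # transform of the convolution
rhs = [a * b for a, b in zip(dft(f), dft(g))]   # product of the transforms

assert max(abs(a - b) for a, b in zip(lhs, rhs)) < 1e-6
```

This is the identity that makes deconvolution possible: dividing the transform of the data by the transform of the filter, then inverting, recovers the other factor.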
Will be updated if necessary

(1) Griffiths, David J. Introduction to Electrodynamics. Upper Saddle River, NJ: Prentice Hall, 1999. Print.
(2) Hecht, Eugene. Optik. München: Oldenbourg, 2001. Print.
(3) James, J. F. A Student's Guide to Fourier Transforms: With Applications in Physics and Engineering. Cambridge: Cambridge UP, 2011. Print.
(4) Pedrotti, Frank L., Leno S. Pedrotti, and Leno Matthew Pedrotti. Introduction to Optics. Upper Saddle River, NJ: Pearson Prentice Hall, 2007. Print.
(5) Sadun, Lorenzo Adlai. Applied Linear Algebra: The Decoupling Principle. Providence, RI: American Mathematical Society, 2008. Print.

Question to the reader: Is the verb of deconvolution "deconvolve" or "deconvolute"?

1 thought on "Project Plan: A Guide To Convolution In Action"

1. Jenny Magnes
I like your explanation of convolution. It might be beneficial to mention that the "machine" (instrument) itself is performing a Fourier transform. For example, a grating Fourier-transforms the light into its frequency domain. Thinking of the process that way might make conceptual sense. It might be good for you to have someone else proofread your post to avoid typos.
ssregest

Estimate state-space model by reduction of regularized ARX model

Estimate State-Space Model

sys = ssregest(tt,nx) estimates a discrete-time state-space model by reduction of a regularized ARX model, using all the input and output signals in the timetable tt. You can use this syntax for SISO and MIMO systems. The function assumes that the last variable in the timetable is the single output signal. You can also use this syntax to estimate a time-series model if tt contains a single variable that represents the sole output. For MIMO systems and for timetables that contain more variables than you plan to use for estimation, you must also use name-value arguments to specify the names of the input and output channels you want. For more information, see tt. To estimate a continuous-time model, set 'Ts' to 0 using name-value syntax.

sys = ssregest(u,y,nx,'Ts',Ts) uses the time-domain input and output signals in the comma-separated matrices u,y and the model sample time Ts. The software assumes that the data sample time is also Ts seconds. You can use this syntax for SISO, MISO, and MIMO systems. Estimating continuous-time models from matrix-based data is not recommended.

sys = ssregest(data,nx) uses the time-domain or frequency-domain data in the data object data. Use this syntax especially when you want to estimate a state-space model using frequency-domain or frequency-response data, or when you want to take advantage of the additional information, such as data sample time or experiment labeling, that data objects provide.

Specify Additional Options

sys = ssregest(___,nx,Name,Value) incorporates additional model options specified by one or more name-value arguments. For example, specify input and output signal variable names that correspond with the variables to use for MIMO timetable data using sys = ssregest(data,nx,'InputName',["u1","u2"],'OutputName',["y1","y3"]).
Use the 'Form', 'Feedthrough', and 'DisturbanceModel' name-value arguments to modify the default behavior of the A, B, C, D, and K matrices.

sys = ssregest(___,opt) specifies estimation options that configure the estimation objective, ARX orders, and order reduction options. This syntax can include any of the input argument combinations in the previous syntaxes.

Return Estimated Initial States

[sys,x0] = ssregest(___) returns the value of initial states computed during estimation. This syntax can include any of the input argument combinations in the previous syntaxes.

Estimate State-Space Model by Reduction of Regularized ARX Model

Load the time-domain estimation data, which is contained in the timetable tt. Identify a third-order state-space model.

Estimate State-Space Model With Input Delay

Load estimation data, which is contained in the input/output matrix pair umat2 and ymat2.

load sdata2.mat umat2 ymat2

Estimate a third-order state-space model with input delay. Specify the sample time Ts as 0.1.

sys = ssregest(umat2,ymat2,3,'InputDelay',2,'Ts',0.1);

Configure the ARX Orders and Estimation Focus

Load estimation data. Specify the order of the regularized ARX model used by the software during estimation. Also, set the estimation focus to simulation.

opt = ssregestOptions('ARXOrder',[100 100 1],'Focus','simulation');

Identify a third-order state-space model.

sys = ssregest(z2,3,opt);

Return Initial State Values Computed During Estimation

Load estimation data. Obtain the initial state values when identifying a third-order state-space model.

[sys,x0] = ssregest(tt2,3);

x0 = 3×1

Compare Regularized State-Space Models Estimated Using Impulse Response and Reduction of ARX Models

Load data.

load regularizationExampleData eData;

Create a transfer function model used for generating the estimation data (true system).

trueSys = idtf([0.02008 0.04017 0.02008],[1 -1.561 0.6414],1);

Obtain regularized impulse response (FIR) model.
opt = impulseestOptions('RegularizationKernel','DC');
m0 = impulseest(eData,70,opt);

Convert the model into a state-space model and reduce the model order.

m1 = balred(idss(m0),15);

Obtain a second state-space model using regularized reduction of an ARX model. Compare the impulse responses of the true system and the estimated models.

Input Arguments

tt — Timetable-based estimation data
timetable | cell array of timetables

Estimation data, specified as a uniformly sampled timetable that contains variables representing input and output channels or, for multiexperiment data, a cell array of timetables.

Use Entire Timetable

If you want to use all the variables in tt as input or output channels, and the variables are organized so that the set of input channel variables is followed by the set of output channel variables:

• For SISO systems, specify tt as an N[s]-by-2 timetable, where N[s] is the number of samples and the two timetable variables represent the measured input channel and output channel respectively.
• For MIMO systems, specify tt as an N[s]-by-(N[u]+N[y]) timetable, where N[u] is the number of inputs and N[y] is the number of outputs. The first N[u] variables must contain the input channels and the remaining N[y] variables must contain the output channels. When you are estimating state space or transfer function models, you must also explicitly specify the input and output channels, as the following section describes.
• For multiexperiment data, specify data as an N[e]-by-1 cell array of timetables, where N[e] is the number of experiments. The sample times of all the experiments must match.
Use Selected Variables from Timetable

If you want to explicitly identify the input and output channels, such as when you want to use only a subset of the available channels, when the input and output channel variables are intermixed, or when you are estimating a MIMO state-space or transfer function model, use the 'InputName' and 'OutputName' name-value arguments to specify which variables to use as inputs and outputs.

For example, suppose that tt contains six channel variables: "u1", "u2", "u3", and "y1", "y2", "y3". For estimation, you want to use the variables "u1" and "u2" as the inputs and the variables "y1" and "y3" as the outputs. Use the following command to perform the estimation:

sys = ssregest(tt,__,'InputName',["u1" "u2"],'OutputName',["y1" "y3"])

Use Timetable to Estimate Time Series Models

If you want to estimate a time series model rather than an input/output model, use only output variables from tt. You can either specify tt to contain only the output variables that you want, or extract the output variables from tt if tt also contains input variables. The specification approach is similar to that for input/output model estimation.

• For a single-output system, specify tt as an N[s]-by-1 timetable.
• For a multivariate system, specify tt as an N[s]-by-N[y] timetable. Even if you plan to use all the variables in tt, you must specify all of them using the 'OutputName' name-value argument so that the software does not interpret them as input variables.

For a timetable tt that has variables beyond what you want to use, such as input variables or additional output variables, specify both the output variables you want to use and, in 'InputName', an empty array. For example, suppose that tt contains six variables: "u1", "u2", "u3", and "y1", "y2", "y3". For time series estimation, you want to use the output variables "y1" and "y3".
Use the following command to perform the estimation:

sys = ssregest(tt,__,'OutputName',["y1" "y3"],'InputName',[])

For more information about working with estimation data types, see Data Domains and Data Types in System Identification Toolbox.

u, y — Matrix-based estimation data
matrices | cell array of matrices

Estimation data, specified for SISO systems as a comma-separated pair of N[s]-by-1 real-valued matrices that contain uniformly sampled input and output time-domain signal values. Here, N[s] is the number of samples.

For MIMO systems, specify u,y as an input/output matrix pair with the following dimensions:

• u — N[s]-by-N[u], where N[u] is the number of inputs.
• y — N[s]-by-N[y], where N[y] is the number of outputs.

For multiexperiment data, specify u,y as a pair of 1-by-N[e] cell arrays, where N[e] is the number of experiments. The sample times of all the experiments must match. For time series data, which contains only outputs and no inputs, specify [],y.

• Matrix-based data does not support estimation from frequency-domain data. You must use a data object such as an iddata object or idfrd object (see data).
• Using matrices for estimation data is not recommended for continuous-time estimation because the data does not provide the sample time. The software assumes that the data is sampled at 1 Hz. For continuous-time estimation, it is recommended that you convert each matrix to a timetable. For example, to convert the matrices um and ym to a timetable tt with a sample time of 0.5 minutes, use the following command.

tt = timetable(um,ym,'rowtimes',minutes(0.5*(1:size(um,1))))

For a more detailed example of converting matrix-based SISO data to a timetable, see Convert SISO Matrix Data to Timetable. For an example of converting a MIMO matrix pair to a timetable, see Convert MIMO Matrix Data to Timetable for Continuous-Time Model Estimation.
For more information about working with estimation data types, see Data Domains and Data Types in System Identification Toolbox.

data — Estimation data object
iddata object | frd object | idfrd object

Estimation data object, specified as an iddata object, an frd object, or an idfrd object that contains uniformly sampled input and output values. For time series models, data can contain only output values. By default, the software sets the sample time of the model to the sample time of the estimation data. For multiexperiment data, the sample times and intersample behavior of all the experiments must match.

For time-domain estimation, data must be an iddata object containing the input and output signal values. For frequency-domain estimation, data can be one of the following:

• Recorded frequency response data (frd (Control System Toolbox) or idfrd)
• iddata object with properties specified as follows:
□ InputData — Fourier transform of the input signal
□ OutputData — Fourier transform of the output signal
□ Domain — 'Frequency'
□ Ts — Nonzero

nx — Order of estimated model
positive scalar | positive vector | 'best'

Order of the estimated model, specified as a positive scalar or vector. If nx is a vector, then ssregest creates a plot which you can use to choose a suitable model order. The plot shows the Hankel singular values for models of chosen values in the vector. States with relatively small Hankel singular values can be safely discarded. A default choice is suggested in the plot. You can also specify nx = 'best', as in ssregest(data,'best'), in which case the optimal order is chosen automatically in the 1:10 range.

opt — Options set for ssregest
ssregestOptions options set

Estimation options for ssregest, specified as an options set you create using ssregestOptions.

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value.
Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

Example: sys = ssregest(z2,3,'InputDelay',2) specifies a delay of 2 sampling periods.

Ts — Sample time
sample time of data (data.Ts) (default) | positive scalar | 0

Sample time of the model, specified as 0 or equal to the sample time of data. For continuous-time models, use Ts = 0. For discrete-time models, specify Ts as a positive scalar whose value is equal to the data sample time.

InputDelay — Input delays
0 (default) | scalar | vector

Input delay for each input channel, specified as a numeric vector. For continuous-time systems, specify input delays in the time unit stored in the TimeUnit property. For discrete-time systems, specify input delays in integer multiples of the sample time Ts. For example, InputDelay = 3 means a delay of three sampling periods.

For a system with Nu inputs, set InputDelay to an Nu-by-1 vector. Each entry of this vector is a numerical value that represents the input delay for the corresponding input channel. You can also set InputDelay to a scalar value to apply the same delay to all channels.

Form — Type of canonical form
'free' (default) | 'modal' | 'companion' | 'canonical'

Type of canonical form of sys, specified as one of the following values:

• 'modal' — Obtain sys in modal form.
• 'companion' — Obtain sys in companion form.
• 'free' — All entries of the A, B and C matrices are treated as free.
• 'canonical' — Obtain sys in the observability canonical form [1].

Use the Form, Feedthrough and DisturbanceModel name-value pair arguments to modify the default behavior of the A, B, C, D, and K matrices.

Output Arguments

sys — Estimated state-space model

Estimated state-space model of order nx, returned as an idss model object.
The model represents:

$$\begin{array}{l}\dot{x}(t)=Ax(t)+Bu(t)+Ke(t)\\ y(t)=Cx(t)+Du(t)+e(t)\end{array}$$

A, B, C, D, and K are state-space matrices. u(t) is the input, y(t) is the output, e(t) is the disturbance and x(t) is the vector of nx states.

All the entries of A, B, C, and K are free estimable parameters by default. D is fixed to zero by default, meaning that there is no feedthrough, except for static systems (nx = 0).

Information about the estimation results and options used is stored in the Report property of the model. Report has the following fields:

• Status — Summary of the model status, which indicates whether the model was created by construction or obtained by estimation.
• Method — Estimation command used.
• InitialState — Handling of initial states during estimation, returned as one of the following values:
  □ 'zero' — The initial state was set to zero.
  □ 'estimate' — The initial state was treated as an independent estimation parameter.
  This field is especially useful when the InitialState option in the estimation option set is 'auto'.
• ARXOrder — ARX model orders, returned as a matrix of nonnegative integers [na nb nk].
• Fit — Quantitative assessment of the estimation, returned as a structure with these fields (see Loss Function and Model Quality Metrics for more information on these quality metrics):
  □ FitPercent — Normalized root mean squared error (NRMSE) measure of how well the response of the model fits the estimation data, expressed as the percentage fitpercent = 100(1−NRMSE)
  □ LossFcn — Value of the loss function when the estimation completes
  □ MSE — Mean squared error (MSE) measure of how well the response of the model fits the estimation data
  □ FPE — Final prediction error for the model
  □ AIC — Raw Akaike Information Criteria (AIC) measure of model quality
  □ AICc — Small-sample-size corrected AIC
  □ nAIC — Normalized AIC
  □ BIC — Bayesian Information Criteria (BIC)
• Parameters — Estimated values of model parameters.
• OptionsUsed — Option set used for estimation. If no custom options were configured, this is a set of default options. See ssregestOptions for more information.
• RandState — State of the random number stream at the start of estimation. Empty, [], if randomization was not used during estimation. For more information, see rng.
• DataUsed — Attributes of the data used for estimation, returned as a structure with the following fields:
  □ Name — Name of the data set
  □ Type — Data type
  □ Length — Number of data samples
  □ Ts — Sample time
  □ InterSample — Input intersample behavior, returned as one of the following values:
    'zoh' — A zero-order hold maintains a piecewise-constant input signal between samples.
    'foh' — A first-order hold maintains a piecewise-linear input signal between samples.
    'bl' — Band-limited behavior specifies that the continuous-time input signal has zero power above the Nyquist frequency.
  □ InputOffset — Offset removed from time-domain input data during estimation. For nonlinear models, it is [].
  □ OutputOffset — Offset removed from time-domain output data during estimation. For nonlinear models, it is [].

For more information on using Report, see Estimation Report.

x0 — Initial states computed during estimation
scalar | matrix

Initial states computed during estimation, returned as a scalar.
If data contains multiple experiments, then x0 is a matrix with each column corresponding to an experiment. This value is also stored in the Parameters field of the model's Report property.

More About

Modal Form

In modal form, A is a block-diagonal matrix. The block size is typically 1-by-1 for real eigenvalues and 2-by-2 for complex eigenvalues. However, if there are repeated eigenvalues or clusters of nearby eigenvalues, the block size can be larger. For example, for a system with eigenvalues $(\lambda_1, \sigma \pm j\omega, \lambda_2)$, the modal A matrix is of the form

$$A_m = \begin{bmatrix} \lambda_1 & 0 & 0 & 0 \\ 0 & \sigma & \omega & 0 \\ 0 & -\omega & \sigma & 0 \\ 0 & 0 & 0 & \lambda_2 \end{bmatrix}.$$

Companion Form

In the companion realization, the characteristic polynomial of the system appears explicitly in the rightmost column of the A matrix. For a system with characteristic polynomial

$$P(s) = s^n + \alpha_{n-1} s^{n-1} + \alpha_{n-2} s^{n-2} + \dots + \alpha_1 s + \alpha_0,$$

the corresponding companion A matrix is

$$A_{ccom} = \begin{bmatrix} 0 & 0 & 0 & \dots & 0 & -\alpha_0 \\ 1 & 0 & 0 & \dots & 0 & -\alpha_1 \\ 0 & 1 & 0 & \dots & 0 & -\alpha_2 \\ 0 & 0 & 1 & \dots & 0 & -\alpha_3 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \dots & 1 & -\alpha_{n-1} \end{bmatrix}, \qquad B_{ccom} = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}.$$

The companion transformation requires that the system be controllable from the first input. The companion form is poorly conditioned for most state-space computations; avoid using it when possible.

• The ssregest function provides improved accuracy compared to n4sid for short, noisy data sets.
• For some problems, the quality of fit using n4sid is sensitive to options, such as N4Horizon, whose values can be difficult to determine. In comparison, the quality of fit with ssregest is less sensitive to its options, which makes ssregest simpler to use.

ssregest estimates a regularized ARX model and converts the ARX model to a state-space model. The software then uses balanced model reduction techniques to reduce the state-space model to the specified order.

[1] Ljung, L. System Identification: Theory for the User, Second Edition, Appendix 4A, pp. 132–134. Upper Saddle River, NJ: Prentice Hall, 1999.

Version History

Introduced in R2014a

R2022b: Time-domain estimation data is accepted in the form of timetables and matrices

Most estimation, validation, analysis, and utility functions now accept time-domain input/output data in the form of a single timetable that contains both input and output data or a pair of matrices that contain the input and output data separately. These functions continue to accept iddata objects as a data source as well, for both time-domain and frequency-domain data.
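The regularized-ARX first step of this algorithm can be sketched outside MATLAB. The following is an illustrative ridge-regression sketch only (the function name `arx_ridge` and the choice of regularizer are ours), not the toolbox implementation:

```python
import numpy as np

def arx_ridge(u, y, na, nb, lam=1e-6):
    """Ridge-regularized least-squares fit of an ARX model
    y[t] = a1*y[t-1] + ... + a_na*y[t-na] + b1*u[t-1] + ... + b_nb*u[t-nb].
    Illustrative sketch only, not the System Identification Toolbox algorithm."""
    d = max(na, nb)
    rows = [[y[t - i] for i in range(1, na + 1)] +
            [u[t - j] for j in range(1, nb + 1)] for t in range(d, len(y))]
    X = np.array(rows)
    target = np.array(y[d:])
    # Ridge solution of the normal equations: (X'X + lam*I) theta = X'y
    theta = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ target)
    return theta[:na], theta[na:]

# Noise-free data simulated from y[t] = 0.5*y[t-1] + 1.0*u[t-1]
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + 1.0 * u[t - 1]
a_hat, b_hat = arx_ridge(u, y, na=1, nb=1)
print(a_hat, b_hat)   # close to [0.5] and [1.0]
```

A real implementation would additionally tune the regularization and convert the recovered ARX polynomials to a state-space realization before model reduction.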
Real analysis

In mathematics, the branch of real analysis studies the behavior of real numbers, sequences and series of real numbers, and real functions.^[1] Some particular properties of real-valued sequences and functions that real analysis studies include convergence, limits, continuity, smoothness, differentiability and integrability. Real analysis is distinguished from complex analysis, which deals with the study of complex numbers and their functions.

Construction of the real numbers

The theorems of real analysis rely on the properties of the real number system, which must be established. The real number system consists of an uncountable set ([math]\displaystyle{ \mathbb{R} }[/math]), together with two binary operations denoted + and ⋅, and a total order denoted ≤. The operations make the real numbers a field, and, along with the order, an ordered field. The real number system is the unique complete ordered field, in the sense that any other complete ordered field is isomorphic to it. Intuitively, completeness means that there are no 'gaps' (or 'holes') in the real numbers. This property distinguishes the real numbers from other ordered fields (e.g., the rational numbers [math]\displaystyle{ \mathbb{Q} }[/math]) and is critical to the proof of several key properties of functions of the real numbers. The completeness of the reals is often conveniently expressed as the least upper bound property (see below).

Order properties of the real numbers

The real numbers have various lattice-theoretic properties that are absent in the complex numbers. Also, the real numbers form an ordered field, in which sums and products of positive numbers are also positive.
Moreover, the ordering of the real numbers is total, and the real numbers have the least upper bound property: Every nonempty subset of [math]\displaystyle{ \mathbb{R} }[/math] that has an upper bound has a least upper bound that is also a real number. These order-theoretic properties lead to a number of fundamental results in real analysis, such as the monotone convergence theorem, the intermediate value theorem and the mean value theorem. However, while the results in real analysis are stated for real numbers, many of these results can be generalized to other mathematical objects. In particular, many ideas in functional analysis and operator theory generalize properties of the real numbers – such generalizations include the theories of Riesz spaces and positive operators. Also, mathematicians consider real and imaginary parts of complex sequences, or the pointwise evaluation of operator sequences.

Topological properties of the real numbers

Many of the theorems of real analysis are consequences of the topological properties of the real number line. The order properties of the real numbers described above are closely related to these topological properties. As a topological space, the real numbers have a standard topology, which is the order topology induced by the order [math]\displaystyle{ \lt }[/math]. Alternatively, by defining the metric or distance function [math]\displaystyle{ d:\mathbb{R}\times\mathbb{R}\to\mathbb{R}_{\geq 0} }[/math] using the absolute value function as [math]\displaystyle{ d(x, y) = |x - y| }[/math], the real numbers become the prototypical example of a metric space. The topology induced by the metric [math]\displaystyle{ d }[/math] turns out to be identical to the standard topology induced by the order [math]\displaystyle{ \lt }[/math].
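The least upper bound property can be illustrated numerically. The sketch below (floating-point numbers only approximate the reals, so this is a demonstration, not a proof) uses bisection to trap the supremum of {x : x² < 2}, which exists in ℝ by completeness but is not rational:

```python
# Bisection traps sup{x : x^2 < 2} = sqrt(2).  The least upper bound exists
# in R by completeness, but not in Q -- an instance of the "gap" in the
# rationals that completeness rules out.
lo, hi = 1.0, 2.0            # 1^2 < 2 < 2^2, so the sup lies in [1, 2]
for _ in range(60):
    mid = (lo + hi) / 2
    if mid * mid < 2:
        lo = mid             # mid still belongs to the set
    else:
        hi = mid             # mid is an upper bound of the set
print(lo)                    # ~1.4142135623730951
```

Every iteration halves the interval [lo, hi] that brackets the supremum, so lo converges to √2 from below.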
Theorems like the intermediate value theorem that are essentially topological in nature can often be proved in the more general setting of metric or topological spaces rather than in [math]\displaystyle{ \mathbb{R} }[/math] only. Often, such proofs tend to be shorter or simpler compared to classical proofs that apply direct methods.

Sequences

A sequence is a function whose domain is a countable, totally ordered set.^[2] The domain is usually taken to be the natural numbers,^[3] although it is occasionally convenient to also consider bidirectional sequences indexed by the set of all integers, including negative indices. Of interest in real analysis, a real-valued sequence, here indexed by the natural numbers, is a map [math]\displaystyle{ a : \N \to \R : n \mapsto a_n }[/math]. Each [math]\displaystyle{ a(n) = a_n }[/math] is referred to as a term (or, less commonly, an element) of the sequence. A sequence is rarely denoted explicitly as a function; instead, by convention, it is almost always notated as if it were an ordered ∞-tuple, with individual terms or a general term enclosed in parentheses:^[4] [math]\displaystyle{ (a_n) = (a_n)_{n \in \N}=(a_1, a_2, a_3, \dots) . }[/math] A sequence that tends to a limit (i.e., [math]\displaystyle{ \lim_{n \to \infty} a_n }[/math] exists) is said to be convergent; otherwise it is divergent. (See the section on limits and convergence for details.) A real-valued sequence [math]\displaystyle{ (a_n) }[/math] is bounded if there exists [math]\displaystyle{ M\in\R }[/math] such that [math]\displaystyle{ |a_n|\lt M }[/math] for all [math]\displaystyle{ n\in\mathbb{N} }[/math]. A real-valued sequence [math]\displaystyle{ (a_n) }[/math] is monotonically increasing or decreasing if [math]\displaystyle{ a_1 \leq a_2 \leq a_3 \leq \cdots }[/math] or [math]\displaystyle{ a_1 \geq a_2 \geq a_3 \geq \cdots }[/math] holds, respectively. If either holds, the sequence is said to be monotonic.
The monotonicity is strict if the chained inequalities still hold with [math]\displaystyle{ \leq }[/math] or [math]\displaystyle{ \geq }[/math] replaced by < or >. Given a sequence [math]\displaystyle{ (a_n) }[/math], another sequence [math]\displaystyle{ (b_k) }[/math] is a subsequence of [math]\displaystyle{ (a_n) }[/math] if [math]\displaystyle{ b_k=a_{n_k} }[/math] for all positive integers [math]\displaystyle{ k }[/math] and [math]\displaystyle{ (n_k) }[/math] is a strictly increasing sequence of natural numbers.

Limits and convergence

Roughly speaking, a limit is the value that a function or a sequence "approaches" as the input or index approaches some value.^[5] (This value can include the symbols [math]\displaystyle{ \pm\infty }[/math] when addressing the behavior of a function or sequence as the variable increases or decreases without bound.) The idea of a limit is fundamental to calculus (and mathematical analysis in general) and its formal definition is used in turn to define notions like continuity, derivatives, and integrals. (In fact, the study of limiting behavior has been used as a characteristic that distinguishes calculus and mathematical analysis from other branches of mathematics.) The concept of limit was informally introduced for functions by Newton and Leibniz, at the end of the 17th century, for building infinitesimal calculus. For sequences, the concept was introduced by Cauchy, and made rigorous at the end of the 19th century by Bolzano and Weierstrass, who gave the modern ε-δ definition, which follows.

Definition. Let [math]\displaystyle{ f }[/math] be a real-valued function defined on [math]\displaystyle{ E\subset\mathbb{R} }[/math].
We say that [math]\displaystyle{ f(x) }[/math] tends to [math]\displaystyle{ L }[/math] as [math]\displaystyle{ x }[/math] approaches [math]\displaystyle{ x_0 }[/math], or that the limit of [math]\displaystyle{ f(x) }[/math] as [math]\displaystyle{ x }[/math] approaches [math]\displaystyle{ x_0 }[/math] is [math]\displaystyle{ L }[/math] if, for any [math]\displaystyle{ \varepsilon\gt 0 }[/math], there exists [math]\displaystyle{ \delta\gt 0 }[/math] such that for all [math]\displaystyle{ x\in E }[/math], [math]\displaystyle{ 0 \lt |x - x_0| \lt \delta }[/math] implies that [math]\displaystyle{ |f(x) - L| \lt \varepsilon }[/math]. We write this symbolically as [math]\displaystyle{ f(x)\to L\ \ \text{as}\ \ x\to x_0 , }[/math] or as [math]\displaystyle{ \lim_{x\to x_0} f(x) = L . }[/math]

Intuitively, this definition can be thought of in the following way: We say that [math]\displaystyle{ f(x)\to L }[/math] as [math]\displaystyle{ x\to x_0 }[/math] when, given any positive number [math]\displaystyle{ \varepsilon }[/math], no matter how small, we can always find a [math]\displaystyle{ \delta }[/math] such that we can guarantee that [math]\displaystyle{ f(x) }[/math] and [math]\displaystyle{ L }[/math] are less than [math]\displaystyle{ \varepsilon }[/math] apart, as long as [math]\displaystyle{ x }[/math] (in the domain of [math]\displaystyle{ f }[/math]) is a real number that is less than [math]\displaystyle{ \delta }[/math] away from [math]\displaystyle{ x_0 }[/math] but distinct from [math]\displaystyle{ x_0 }[/math]. The purpose of the last stipulation, which corresponds to the condition [math]\displaystyle{ 0\lt |x-x_0| }[/math] in the definition, is to ensure that [math]\displaystyle{ \lim_{x \to x_0} f(x)=L }[/math] does not imply anything about the value of [math]\displaystyle{ f(x_0) }[/math] itself.
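The ε-δ definition can be exercised numerically. In this sketch (a spot check on sampled points, not a proof) we take f(x) = x² at x₀ = 2, where the limit is L = 4, and verify that δ = ε/5 works for ε ≤ 1, since |x² − 4| = |x − 2|·|x + 2| < 5δ whenever |x − 2| < δ ≤ 1:

```python
import numpy as np

# Numerical spot check of the epsilon-delta definition for f(x) = x^2
# at x0 = 2 with limit L = 4, using delta = eps/5 (valid for eps <= 1).
def f(x):
    return x * x

ok = True
for eps in (1.0, 0.1, 0.001):
    delta = eps / 5.0
    xs = 2.0 + np.linspace(-delta, delta, 1001)[1:-1]  # points with |x - 2| < delta
    xs = xs[xs != 2.0]                                 # enforce 0 < |x - x0|
    ok = ok and bool(np.all(np.abs(f(xs) - 4.0) < eps))
print(ok)   # True
```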
Actually, [math]\displaystyle{ x_0 }[/math] does not even need to be in the domain of [math]\displaystyle{ f }[/math] in order for [math]\displaystyle{ \lim_{x \to x_0} f(x) }[/math] to exist.

In a slightly different but related context, the concept of a limit applies to the behavior of a sequence [math]\displaystyle{ (a_n) }[/math] when [math]\displaystyle{ n }[/math] becomes large.

Definition. Let [math]\displaystyle{ (a_n) }[/math] be a real-valued sequence. We say that [math]\displaystyle{ (a_n) }[/math] converges to [math]\displaystyle{ a }[/math] if, for any [math]\displaystyle{ \varepsilon \gt 0 }[/math], there exists a natural number [math]\displaystyle{ N }[/math] such that [math]\displaystyle{ n\geq N }[/math] implies that [math]\displaystyle{ |a-a_n| \lt \varepsilon }[/math]. We write this symbolically as [math]\displaystyle{ a_n \to a\ \ \text{as}\ \ n \to \infty , }[/math] or as [math]\displaystyle{ \lim_{n \to \infty} a_n = a ; }[/math] if [math]\displaystyle{ (a_n) }[/math] fails to converge, we say that [math]\displaystyle{ (a_n) }[/math] diverges.

Generalizing to a real-valued function of a real variable, a slight modification of this definition (replacement of sequence [math]\displaystyle{ (a_n) }[/math] and term [math]\displaystyle{ a_n }[/math] by function [math]\displaystyle{ f }[/math] and value [math]\displaystyle{ f(x) }[/math], and of natural numbers [math]\displaystyle{ N }[/math] and [math]\displaystyle{ n }[/math] by real numbers [math]\displaystyle{ M }[/math] and [math]\displaystyle{ x }[/math], respectively) yields the definition of the limit of [math]\displaystyle{ f(x) }[/math] as [math]\displaystyle{ x }[/math] increases without bound, notated [math]\displaystyle{ \lim_{x \to \infty} f(x) }[/math].
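The ε-N definition of sequence convergence can likewise be checked on sampled tails. This sketch verifies that aₙ = 1/n converges to 0, with the explicit witness N = ⌈1/ε⌉ + 1 (a finite spot check, not a proof):

```python
import math

# a_n = 1/n converges to 0: given eps > 0, take N = ceil(1/eps) + 1;
# then n >= N implies |a_n - 0| = 1/n <= 1/N < eps.
def a(n):
    return 1.0 / n

for eps in (0.5, 0.01, 1e-4):
    N = math.ceil(1 / eps) + 1
    assert all(abs(a(n)) < eps for n in range(N, N + 500))
print("epsilon-N condition verified on sampled tails")
```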
Reversing the inequality [math]\displaystyle{ x\geq M }[/math] to [math]\displaystyle{ x \leq M }[/math] gives the corresponding definition of the limit of [math]\displaystyle{ f(x) }[/math] as [math]\displaystyle{ x }[/math] decreases without bound, [math]\displaystyle{ \lim_{x \to -\infty} f(x) }[/math].

Sometimes, it is useful to conclude that a sequence converges, even though the value to which it converges is unknown or irrelevant. In these cases, the concept of a Cauchy sequence is useful.

Definition. Let [math]\displaystyle{ (a_n) }[/math] be a real-valued sequence. We say that [math]\displaystyle{ (a_n) }[/math] is a Cauchy sequence if, for any [math]\displaystyle{ \varepsilon \gt 0 }[/math], there exists a natural number [math]\displaystyle{ N }[/math] such that [math]\displaystyle{ m,n\geq N }[/math] implies that [math]\displaystyle{ |a_m-a_n| \lt \varepsilon }[/math].

It can be shown that a real-valued sequence is Cauchy if and only if it is convergent. This property of the real numbers is expressed by saying that the real numbers endowed with the standard metric, [math]\displaystyle{ (\R, |\cdot|) }[/math], is a complete metric space. In a general metric space, however, a Cauchy sequence need not converge. In addition, for real-valued sequences that are monotonic, it can be shown that the sequence is bounded if and only if it is convergent.

Uniform and pointwise convergence for sequences of functions

In addition to sequences of numbers, one may also speak of sequences of functions on [math]\displaystyle{ E\subset \mathbb{R} }[/math], that is, infinite, ordered families of functions [math]\displaystyle{ f_n:E\to\mathbb{R} }[/math], denoted [math]\displaystyle{ (f_n)_{n=1}^\infty }[/math], and their convergence properties. However, in the case of sequences of functions, there are two kinds of convergence, known as pointwise convergence and uniform convergence, that need to be distinguished.
Roughly speaking, pointwise convergence of functions [math]\displaystyle{ f_n }[/math] to a limiting function [math]\displaystyle{ f:E\to\mathbb{R} }[/math], denoted [math]\displaystyle{ f_n \rightarrow f }[/math], simply means that given any [math]\displaystyle{ x\in E }[/math], [math]\displaystyle{ f_n(x)\to f(x) }[/math] as [math]\displaystyle{ n\to\infty }[/math]. In contrast, uniform convergence is a stronger type of convergence, in the sense that a uniformly convergent sequence of functions also converges pointwise, but not conversely. Uniform convergence requires members of the family of functions, [math]\displaystyle{ f_n }[/math], to fall within some error [math]\displaystyle{ \varepsilon \gt 0 }[/math] of [math]\displaystyle{ f }[/math] for every value of [math]\displaystyle{ x\in E }[/math], whenever [math]\displaystyle{ n\geq N }[/math], for some integer [math]\displaystyle{ N }[/math]. For a family of functions to uniformly converge, sometimes denoted [math]\displaystyle{ f_n\rightrightarrows f }[/math], such a value of [math]\displaystyle{ N }[/math] must exist for any [math]\displaystyle{ \varepsilon\gt 0 }[/math] given, no matter how small. Intuitively, we can visualize this situation by imagining that, for a large enough [math]\displaystyle{ N }[/math], the functions [math]\displaystyle{ f_N, f_{N+1}, f_{N+2},\ldots }[/math] are all confined within a 'tube' of width [math]\displaystyle{ 2\varepsilon }[/math] about [math]\displaystyle{ f }[/math] (that is, between [math]\displaystyle{ f - \varepsilon }[/math] and [math]\displaystyle{ f+\varepsilon }[/math]) for every value in their domain [math]\displaystyle{ E }[/math].

The distinction between pointwise and uniform convergence is important when exchanging the order of two limiting operations (e.g., taking a limit, a derivative, or integral) is desired: in order for the exchange to be well-behaved, many theorems of real analysis call for uniform convergence.
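The classic separating example is fₙ(x) = xⁿ on [0, 1), which converges pointwise to 0 but not uniformly. The sketch below samples the supremum of the error on a grid (a numerical illustration; the true sup over [0, 1) is exactly 1 for every n):

```python
import numpy as np

# f_n(x) = x^n on [0, 1) converges pointwise to f = 0, but not uniformly:
# the sup of |f_n - f| over the domain stays near 1 for every n.
xs = np.linspace(0.0, 0.9999, 10001)   # grid inside [0, 1)
sup_err = {n: float(np.max(xs ** n)) for n in (10, 100, 1000)}
print(sup_err)                          # all close to 1: no single N works for eps < 1
# yet at each fixed x < 1, x^n -> 0, so the pointwise limit is 0:
assert 0.9 ** 200 < 1e-9 and 0.5 ** 100 < 1e-30
```

No matter how large n becomes, points x near 1 keep fₙ(x) near 1, so the "tube" condition in the uniform-convergence definition fails for any ε < 1.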
For example, a sequence of continuous functions (see below) is guaranteed to converge to a continuous limiting function if the convergence is uniform, while the limiting function may not be continuous if convergence is only pointwise. Karl Weierstrass is generally credited for clearly defining the concept of uniform convergence and fully investigating its implications.

Compactness

Compactness is a concept from general topology that plays an important role in many of the theorems of real analysis. The property of compactness is a generalization of the notion of a set being closed and bounded. (In the context of real analysis, these notions are equivalent: a set in Euclidean space is compact if and only if it is closed and bounded.) Briefly, a closed set contains all of its boundary points, while a set is bounded if there exists a real number such that the distance between any two points of the set is less than that number. In [math]\displaystyle{ \mathbb{R} }[/math], sets that are closed and bounded, and therefore compact, include the empty set, any finite number of points, closed intervals, and their finite unions. However, this list is not exhaustive; for instance, the set [math]\displaystyle{ \{1/n:n\in\mathbb{N}\}\cup \{0\} }[/math] is a compact set; the Cantor ternary set [math]\displaystyle{ \mathcal{C}\subset [0,1] }[/math] is another example of a compact set. On the other hand, the set [math]\displaystyle{ \{1/n:n\in\mathbb{N}\} }[/math] is not compact because it is bounded but not closed, as the boundary point 0 is not a member of the set. The set [math]\displaystyle{ [0,\infty) }[/math] is also not compact because it is closed but not bounded.

For subsets of the real numbers, there are several equivalent definitions of compactness.

Definition. A set [math]\displaystyle{ E\subset\mathbb{R} }[/math] is compact if it is closed and bounded.
This definition also holds for Euclidean space of any finite dimension, [math]\displaystyle{ \mathbb{R}^n }[/math], but it is not valid for metric spaces in general. The equivalence of the definition with the definition of compactness based on subcovers, given later in this section, is known as the Heine-Borel theorem. A more general definition that applies to all metric spaces uses the notion of a subsequence (see above). Definition. A set [math]\displaystyle{ E }[/math] in a metric space is compact if every sequence in [math]\displaystyle{ E }[/math] has a convergent subsequence. This particular property is known as subsequential compactness. In [math]\displaystyle{ \mathbb{R} }[/math], a set is subsequentially compact if and only if it is closed and bounded, making this definition equivalent to the one given above. Subsequential compactness is equivalent to the definition of compactness based on subcovers for metric spaces, but not for topological spaces in general. The most general definition of compactness relies on the notion of open covers and subcovers, which is applicable to topological spaces (and thus to metric spaces and [math]\displaystyle{ \mathbb{R} }[/math] as special cases). In brief, a collection of open sets [math]\displaystyle{ U_{\alpha} }[/math] is said to be an open cover of set [math]\displaystyle{ X }[/math] if the union of these sets is a superset of [math]\displaystyle{ X }[/math]. This open cover is said to have a finite subcover if a finite subcollection of the [math]\displaystyle{ U_{\alpha} }[/math] could be found that also covers [math]\displaystyle{ X }[/math]. Definition. A set [math]\displaystyle{ X }[/math] in a topological space is compact if every open cover of [math]\displaystyle{ X }[/math] has a finite subcover. Compact sets are well-behaved with respect to properties like convergence and continuity. For instance, any Cauchy sequence in a compact metric space is convergent. 
As another example, the image of a compact metric space under a continuous map is also compact.

Continuity

A function from the set of real numbers to the real numbers can be represented by a graph in the Cartesian plane; such a function is continuous if, roughly speaking, the graph is a single unbroken curve with no "holes" or "jumps". There are several ways to make this intuition mathematically rigorous. Several definitions of varying levels of generality can be given. In cases where two or more definitions are applicable, they are readily shown to be equivalent to one another, so the most convenient definition can be used to determine whether a given function is continuous or not. In the first definition given below, [math]\displaystyle{ f:I\to\R }[/math] is a function defined on a non-degenerate interval [math]\displaystyle{ I }[/math] of the set of real numbers as its domain. Some possibilities include [math]\displaystyle{ I=\R }[/math], the whole set of real numbers, an open interval [math]\displaystyle{ I = (a, b) = \{x \in \R \mid a \lt x \lt b \}, }[/math] or a closed interval [math]\displaystyle{ I = [a, b] = \{x \in \R \mid a \leq x \leq b\}. }[/math] Here, [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math] are distinct real numbers, and we exclude the case of [math]\displaystyle{ I }[/math] being empty or consisting of only one point, in particular.

Definition. If [math]\displaystyle{ I\subset \mathbb{R} }[/math] is a non-degenerate interval, we say that [math]\displaystyle{ f:I \to \R }[/math] is continuous at [math]\displaystyle{ p\in I }[/math] if [math]\displaystyle{ \lim_{x \to p} f(x) = f(p) }[/math]. We say that [math]\displaystyle{ f }[/math] is a continuous map if [math]\displaystyle{ f }[/math] is continuous at every [math]\displaystyle{ p\in I }[/math].
In contrast to the requirements for [math]\displaystyle{ f }[/math] to have a limit at a point [math]\displaystyle{ p }[/math], which do not constrain the behavior of [math]\displaystyle{ f }[/math] at [math]\displaystyle{ p }[/math] itself, the following two conditions, in addition to the existence of [math]\displaystyle{ \lim_{x\to p} f(x) }[/math], must also hold in order for [math]\displaystyle{ f }[/math] to be continuous at [math]\displaystyle{ p }[/math]: (i) [math]\displaystyle{ f }[/math] must be defined at [math]\displaystyle{ p }[/math], i.e., [math]\displaystyle{ p }[/math] is in the domain of [math]\displaystyle{ f }[/math]; and (ii) [math]\displaystyle{ f(x)\to f(p) }[/math] as [math]\displaystyle{ x\to p }[/math]. The definition above actually applies to any domain [math]\displaystyle{ E }[/math] that does not contain an isolated point, or equivalently, [math]\displaystyle{ E }[/math] where every [math]\displaystyle{ p\in E }[/math] is a limit point of [math]\displaystyle{ E }[/math]. A more general definition applying to [math]\displaystyle{ f:X\to\mathbb{R} }[/math] with a general domain [math]\displaystyle{ X\subset \mathbb{R} }[/math] is the following:

Definition. If [math]\displaystyle{ X }[/math] is an arbitrary subset of [math]\displaystyle{ \mathbb{R} }[/math], we say that [math]\displaystyle{ f:X\to\mathbb{R} }[/math] is continuous at [math]\displaystyle{ p\in X }[/math] if, for any [math]\displaystyle{ \varepsilon\gt 0 }[/math], there exists [math]\displaystyle{ \delta\gt 0 }[/math] such that for all [math]\displaystyle{ x\in X }[/math], [math]\displaystyle{ |x-p|\lt \delta }[/math] implies that [math]\displaystyle{ |f(x)-f(p)| \lt \varepsilon }[/math]. We say that [math]\displaystyle{ f }[/math] is a continuous map if [math]\displaystyle{ f }[/math] is continuous at every [math]\displaystyle{ p\in X }[/math].
A consequence of this definition is that [math]\displaystyle{ f }[/math] is trivially continuous at any isolated point [math]\displaystyle{ p\in X }[/math]. This somewhat unintuitive treatment of isolated points is necessary to ensure that our definition of continuity for functions on the real line is consistent with the most general definition of continuity for maps between topological spaces (which includes metric spaces and [math]\displaystyle{ \mathbb{R} }[/math] in particular as special cases). This definition, which extends beyond the scope of our discussion of real analysis, is given below for completeness.

Definition. If [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] are topological spaces, we say that [math]\displaystyle{ f:X\to Y }[/math] is continuous at [math]\displaystyle{ p\in X }[/math] if [math]\displaystyle{ f^{-1} (V) }[/math] is a neighborhood of [math]\displaystyle{ p }[/math] in [math]\displaystyle{ X }[/math] for every neighborhood [math]\displaystyle{ V }[/math] of [math]\displaystyle{ f(p) }[/math] in [math]\displaystyle{ Y }[/math]. We say that [math]\displaystyle{ f }[/math] is a continuous map if [math]\displaystyle{ f^{-1}(U) }[/math] is open in [math]\displaystyle{ X }[/math] for every [math]\displaystyle{ U }[/math] open in [math]\displaystyle{ Y }[/math]. (Here, [math]\displaystyle{ f^{-1}(S) }[/math] refers to the preimage of [math]\displaystyle{ S\subset Y }[/math] under [math]\displaystyle{ f }[/math].)

Uniform continuity

Definition.
If [math]\displaystyle{ X }[/math] is a subset of the real numbers, we say a function [math]\displaystyle{ f:X\to\mathbb{R} }[/math] is uniformly continuous on [math]\displaystyle{ X }[/math] if, for any [math]\displaystyle{ \varepsilon \gt 0 }[/math], there exists a [math]\displaystyle{ \delta\gt 0 }[/math] such that for all [math]\displaystyle{ x,y\in X }[/math], [math]\displaystyle{ |x-y|\lt \delta }[/math] implies that [math]\displaystyle{ |f(x)-f(y)| \lt \varepsilon }[/math]. Explicitly, when a function is uniformly continuous on [math]\displaystyle{ X }[/math], the choice of [math]\displaystyle{ \delta }[/math] needed to fulfill the definition must work for all of [math]\displaystyle{ X }[/math] for a given [math]\displaystyle{ \varepsilon }[/math]. In contrast, when a function is continuous at every point [math]\displaystyle{ p\in X }[/math] (or said to be continuous on [math]\displaystyle{ X }[/math]), the choice of [math]\displaystyle{ \delta }[/math] may depend on both [math]\displaystyle{ \varepsilon }[/math] and [math]\displaystyle{ p }[/math]. In contrast to simple continuity, uniform continuity is a property of a function that only makes sense with a specified domain; to speak of uniform continuity at a single point [math]\displaystyle{ p }[/math] is meaningless. On a compact set, it is easily shown that all continuous functions are uniformly continuous. If [math]\displaystyle{ E }[/math] is a bounded noncompact subset of [math]\displaystyle{ \mathbb{R} }[/math], then there exists [math]\displaystyle{ f:E\to\mathbb{R} }[/math] that is continuous but not uniformly continuous. As a simple example, consider [math]\displaystyle{ f:(0,1)\to\mathbb{R} }[/math] defined by [math]\displaystyle{ f(x)=1/x }[/math]. By choosing points close to 0, we can always make [math]\displaystyle{ |f(x)-f(y)| \gt \varepsilon }[/math] for any single choice of [math]\displaystyle{ \delta\gt 0 }[/math], for a given [math]\displaystyle{ \varepsilon \gt 0 }[/math].
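The 1/x example can be made concrete. A hedged numeric illustration (not from the text): for ε = 1 and any proposed δ, the pair x = d, y = d/2 with d = min(δ, 1/2) lies in (0, 1), satisfies |x − y| < δ, yet gives |f(x) − f(y)| = 1/d, which exceeds ε once d < 1.

```python
# f(x) = 1/x is continuous on (0, 1) but not uniformly continuous:
# for every delta there is a violating pair of points near 0.

def f(x):
    return 1.0 / x

def violates_uniformity(delta, eps=1.0):
    d = min(delta, 0.5)
    x, y = d, d / 2.0          # |x - y| = d/2 < delta, both in (0, 1)
    return abs(f(x) - f(y)) > eps

# No matter how small delta is, a violating pair exists.
assert all(violates_uniformity(10.0 ** (-k)) for k in range(1, 8))
```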
Absolute continuity

Definition. Let [math]\displaystyle{ I\subset\mathbb{R} }[/math] be an interval on the real line. A function [math]\displaystyle{ f:I \to \mathbb{R} }[/math] is said to be absolutely continuous on [math]\displaystyle{ I }[/math] if for every positive number [math]\displaystyle{ \varepsilon }[/math], there is a positive number [math]\displaystyle{ \delta }[/math] such that whenever a finite sequence of pairwise disjoint sub-intervals [math]\displaystyle{ (x_1, y_1), (x_2,y_2),\ldots, (x_n,y_n) }[/math] of [math]\displaystyle{ I }[/math] satisfies^[6] [math]\displaystyle{ \sum_{k=1}^{n} (y_k - x_k) \lt \delta }[/math] then [math]\displaystyle{ \sum_{k=1}^{n} | f(y_k) - f(x_k) | \lt \varepsilon. }[/math] Absolutely continuous functions are continuous: consider the case n = 1 in this definition. The collection of all absolutely continuous functions on I is denoted AC(I). Absolute continuity is a fundamental concept in the Lebesgue theory of integration, allowing the formulation of a generalized version of the fundamental theorem of calculus that applies to the Lebesgue integral.

Differentiation

The notion of the derivative of a function or differentiability originates from the concept of approximating a function near a given point using the "best" linear approximation. This approximation, if it exists, is unique and is given by the line that is tangent to the function at the given point [math]\displaystyle{ a }[/math], and the slope of the line is the derivative of the function at [math]\displaystyle{ a }[/math]. A function [math]\displaystyle{ f:\mathbb{R}\to\mathbb{R} }[/math] is differentiable at [math]\displaystyle{ a }[/math] if the limit [math]\displaystyle{ f'(a)=\lim_{h\to 0}\frac{f(a+h)-f(a)}{h} }[/math] exists.
This limit is known as the derivative of [math]\displaystyle{ f }[/math] at [math]\displaystyle{ a }[/math], and the function [math]\displaystyle{ f' }[/math], possibly defined on only a subset of [math]\displaystyle{ \mathbb{R} }[/math], is the derivative (or derivative function) of [math]\displaystyle{ f }[/math]. If the derivative exists everywhere, the function is said to be differentiable. As a simple consequence of the definition, [math]\displaystyle{ f }[/math] is continuous at [math]\displaystyle{ a }[/math] if it is differentiable there. Differentiability is therefore a stronger regularity condition (a condition describing the "smoothness" of a function) than continuity, and it is possible for a function to be continuous on the entire real line but not differentiable anywhere (see Weierstrass's nowhere differentiable continuous function). It is possible to discuss the existence of higher-order derivatives as well, by finding the derivative of a derivative function, and so on. One can classify functions by their differentiability class. The class [math]\displaystyle{ C^0 }[/math] (sometimes [math]\displaystyle{ C^0([a,b]) }[/math] to indicate the interval of applicability) consists of all continuous functions. The class [math]\displaystyle{ C^1 }[/math] consists of all differentiable functions whose derivative is continuous; such functions are called continuously differentiable. Thus, a [math]\displaystyle{ C^1 }[/math] function is exactly a function whose derivative exists and is of class [math]\displaystyle{ C^0 }[/math]. In general, the classes [math]\displaystyle{ C^k }[/math] can be defined recursively by declaring [math]\displaystyle{ C^0 }[/math] to be the set of all continuous functions and declaring [math]\displaystyle{ C^k }[/math] for any positive integer [math]\displaystyle{ k }[/math] to be the set of all differentiable functions whose derivative is in [math]\displaystyle{ C^{k-1} }[/math].
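The limit definition of the derivative above can be approximated with shrinking h. A hedged sketch (my own illustration) for f(x) = x³ at a = 2, where the exact derivative is f′(2) = 12:

```python
# Forward difference quotient (f(a+h) - f(a)) / h as an approximation
# to f'(a); the error shrinks roughly linearly in h for smooth f.

def diff_quotient(f, a, h):
    return (f(a + h) - f(a)) / h

def f(x):
    return x ** 3

approx = [diff_quotient(f, 2.0, 10.0 ** (-k)) for k in range(1, 7)]

assert abs(approx[-1] - 12.0) < 1e-4            # small h: close to f'(2)
assert abs(approx[0] - 12.0) > abs(approx[-1] - 12.0)  # error shrinks with h
```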
In particular, [math]\displaystyle{ C^k }[/math] is contained in [math]\displaystyle{ C^{k-1} }[/math] for every [math]\displaystyle{ k }[/math], and there are examples to show that this containment is strict. Class [math]\displaystyle{ C^\infty }[/math] is the intersection of the sets [math]\displaystyle{ C^k }[/math] as [math]\displaystyle{ k }[/math] varies over the non-negative integers, and the members of this class are known as the smooth functions. Class [math]\displaystyle{ C^\omega }[/math] consists of all analytic functions, and is strictly contained in [math]\displaystyle{ C^\infty }[/math] (see bump function for a smooth function that is not analytic).

Series

A series formalizes the imprecise notion of taking the sum of an endless sequence of numbers. The idea that taking the sum of an "infinite" number of terms can lead to a finite result was counterintuitive to the ancient Greeks and led to the formulation of a number of paradoxes by Zeno and other philosophers. The modern notion of assigning a value to a series avoids dealing with the ill-defined notion of adding an "infinite" number of terms. Instead, the finite sum of the first [math]\displaystyle{ n }[/math] terms of the sequence, known as a partial sum, is considered, and the concept of a limit is applied to the sequence of partial sums as [math]\displaystyle{ n }[/math] grows without bound. The series is assigned the value of this limit, if it exists. Given an (infinite) sequence [math]\displaystyle{ (a_n) }[/math], we can define an associated series as the formal mathematical object [math]\displaystyle{ a_1 + a_2 + a_3 + \cdots = \sum_{n=1}^{\infty} a_n }[/math], sometimes simply written as [math]\displaystyle{ \sum a_n }[/math]. The partial sums of a series [math]\displaystyle{ \sum a_n }[/math] are the numbers [math]\displaystyle{ s_n=\sum_{j=1}^n a_j }[/math].
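The partial sums just defined can be computed directly. A hedged numeric sketch (my own illustration) of partial-sum behavior for three standard series: the geometric series of powers of 1/2, the harmonic series, and the alternating harmonic series.

```python
# Partial sums s_n = a_1 + ... + a_n for three classic series:
# geometric converges fast to 1, harmonic grows without bound
# (roughly like ln n), alternating harmonic converges to ln 2.
import math

def partial_sum(terms, n):
    return sum(terms(k) for k in range(1, n + 1))

geom = partial_sum(lambda k: 0.5 ** k, 50)
harmonic = partial_sum(lambda k: 1.0 / k, 100000)
alt = partial_sum(lambda k: (-1.0) ** (k - 1) / k, 100000)

assert abs(geom - 1.0) < 1e-12          # geometric: converges rapidly
assert harmonic > 10.0                  # harmonic: unbounded growth
assert abs(alt - math.log(2)) < 1e-4    # alternating: converges to ln 2
```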
A series [math]\displaystyle{ \sum a_n }[/math] is said to be convergent if the sequence consisting of its partial sums, [math]\displaystyle{ (s_n) }[/math], is convergent; otherwise it is divergent. The sum of a convergent series is defined as the number [math]\displaystyle{ s = \lim_{n \to \infty} s_n }[/math]. The word "sum" is used here in a metaphorical sense as a shorthand for taking the limit of a sequence of partial sums and should not be interpreted as simply "adding" an infinite number of terms. For instance, in contrast to the behavior of finite sums, rearranging the terms of an infinite series may result in convergence to a different number (see the article on the Riemann rearrangement theorem for further discussion). An example of a convergent series is a geometric series which forms the basis of one of Zeno's famous paradoxes: [math]\displaystyle{ \sum_{n=1}^\infty \frac{1}{2^n} = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1 . }[/math] In contrast, the harmonic series has been known since the Middle Ages to be a divergent series: [math]\displaystyle{ \sum_{n=1}^\infty \frac{1}{n} = 1 + \frac{1}{2} + \frac{1}{3} + \cdots = \infty . }[/math] (Here, "[math]\displaystyle{ =\infty }[/math]" is merely a notational convention to indicate that the partial sums of the series grow without bound.) A series [math]\displaystyle{ \sum a_n }[/math] is said to converge absolutely if [math]\displaystyle{ \sum |a_n| }[/math] is convergent. A convergent series [math]\displaystyle{ \sum a_n }[/math] for which [math]\displaystyle{ \sum |a_n| }[/math] diverges is said to converge non-absolutely.^[7] It is easily shown that absolute convergence of a series implies its convergence. On the other hand, an example of a series that converges non-absolutely is [math]\displaystyle{ \sum_{n=1}^\infty \frac{(-1)^{n-1}}{n} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots = \ln 2 . }[/math]

Taylor series

The Taylor series of a real or complex-valued function ƒ(x) that is infinitely differentiable at a real or complex number a is the power series [math]\displaystyle{ f(a) + \frac{f'(a)}{1!} (x-a) + \frac{f''(a)}{2!} (x-a)^2 + \frac{f^{(3)}(a)}{3!} (x-a)^3 + \cdots }[/math] which can be written in the more compact sigma notation as [math]\displaystyle{ \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!} \, (x-a)^{n} }[/math] where n! denotes the factorial of n and ƒ^(n)(a) denotes the nth derivative of ƒ evaluated at the point a. The derivative of order zero of ƒ is defined to be ƒ itself, and (x − a)^0 and 0! are both defined to be 1. In the case that a = 0, the series is also called a Maclaurin series. A Taylor series of f about point a may diverge, converge at only the point a, converge for all x such that [math]\displaystyle{ |x-a|\lt R }[/math] (the largest such R for which convergence is guaranteed is called the radius of convergence), or converge on the entire real line. Even a converging Taylor series may converge to a value different from the value of the function at that point. If the Taylor series at a point has a nonzero radius of convergence, and sums to the function in the disc of convergence, then the function is analytic. The analytic functions have many fundamental properties. In particular, an analytic function of a real variable extends naturally to a function of a complex variable. It is in this way that the exponential function, the logarithm, the trigonometric functions and their inverses are extended to functions of a complex variable.

Fourier series

Fourier series decomposes periodic functions or periodic signals into the sum of a (possibly infinite) set of simple oscillating functions, namely sines and cosines (or complex exponentials). The study of Fourier series typically occurs within the branch of mathematical analysis known as Fourier analysis.
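The Maclaurin series above can be checked numerically. A hedged sketch (my own illustration) for exp(x), whose Maclaurin series ∑ xⁿ/n! has infinite radius of convergence; the term count of 30 is an arbitrary choice that is ample for the test points used.

```python
# Partial sums of the Maclaurin series of exp(x), built incrementally:
# the term x^(n+1)/(n+1)! is obtained from x^n/n! by multiplying
# by x/(n+1), avoiding explicit factorials.
import math

def exp_taylor(x, terms=30):
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)
    return total

for x in (-2.0, 0.0, 1.0, 3.5):
    assert abs(exp_taylor(x) - math.exp(x)) < 1e-9
```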
Integration

Integration is a formalization of the problem of finding the area bound by a curve and the related problems of determining the length of a curve or volume enclosed by a surface. The basic strategy to solving problems of this type was known to the ancient Greeks and Chinese, and was known as the method of exhaustion. Generally speaking, the desired area is bounded from above and below, respectively, by increasingly accurate circumscribing and inscribing polygonal approximations whose exact areas can be computed. By considering approximations consisting of a larger and larger ("infinite") number of smaller and smaller ("infinitesimal") pieces, the area bound by the curve can be deduced, as the upper and lower bounds defined by the approximations converge around a common value. The spirit of this basic strategy can easily be seen in the definition of the Riemann integral, in which the integral is said to exist if upper and lower Riemann (or Darboux) sums converge to a common value as thinner and thinner rectangular slices ("refinements") are considered. Though the machinery used to define it is much more elaborate compared to the Riemann integral, the Lebesgue integral was defined with similar basic ideas in mind. Compared to the Riemann integral, the more sophisticated Lebesgue integral allows area (or length, volume, etc.; termed a "measure" in general) to be defined and computed for much more complicated and irregular subsets of Euclidean space, although there still exist "non-measurable" subsets for which an area cannot be assigned.

Riemann integration

The Riemann integral is defined in terms of Riemann sums of functions with respect to tagged partitions of an interval. Let [math]\displaystyle{ [a,b] }[/math] be a closed interval of the real line; then a tagged partition [math]\displaystyle{ \cal{P} }[/math] of [math]\displaystyle{ [a,b] }[/math] is a finite sequence [math]\displaystyle{ a = x_0 \le t_1 \le x_1 \le t_2 \le x_2 \le \cdots \le x_{n-1} \le t_n \le x_n = b .
\,\! }[/math] This partitions the interval [math]\displaystyle{ [a,b] }[/math] into [math]\displaystyle{ n }[/math] sub-intervals [math]\displaystyle{ [x_{i-1},x_i] }[/math] indexed by [math]\displaystyle{ i=1,\ldots, n }[/math], each of which is "tagged" with a distinguished point [math]\displaystyle{ t_i\in[x_{i-1},x_i] }[/math]. For a function [math]\displaystyle{ f }[/math] bounded on [math]\displaystyle{ [a,b] }[/math], we define the Riemann sum of [math]\displaystyle{ f }[/math] with respect to tagged partition [math]\displaystyle{ \cal{P} }[/math] as [math]\displaystyle{ \sum_{i=1}^{n} f(t_i) \Delta_i, }[/math] where [math]\displaystyle{ \Delta_i=x_i-x_{i-1} }[/math] is the width of sub-interval [math]\displaystyle{ i }[/math]. Thus, each term of the sum is the area of a rectangle with height equal to the function value at the distinguished point of the given sub-interval, and width the same as the sub-interval width. The mesh of such a tagged partition is the width of the largest sub-interval formed by the partition, [math]\displaystyle{ \|\Delta_i\| = \max_{i=1,\ldots, n}\Delta_i }[/math]. We say that the Riemann integral of [math]\displaystyle{ f }[/math] on [math]\displaystyle{ [a,b] }[/math] is [math]\displaystyle{ S }[/math] if for any [math]\displaystyle{ \varepsilon\gt 0 }[/math] there exists [math]\displaystyle{ \delta\gt 0 }[/math] such that, for any tagged partition [math]\displaystyle{ \cal{P} }[/math] with mesh [math]\displaystyle{ \| \Delta_i \| \lt \delta }[/math], we have [math]\displaystyle{ \left| S - \sum_{i=1}^{n} f(t_i)\Delta_i \right| \lt \varepsilon. }[/math] This is sometimes denoted [math]\displaystyle{ \mathcal{R}\int_{a}^b f=S }[/math]. When the chosen tags give the maximum (respectively, minimum) value of each interval, the Riemann sum is known as the upper (respectively, lower) Darboux sum.
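The tagged-partition definition can be sketched numerically. A hedged illustration (my own, not from the text) for f(x) = x² on [0, 1], whose exact integral is 1/3: since f is increasing, left tags give the lower Darboux sum and right tags the upper one, and the two squeeze together as the mesh shrinks.

```python
# Riemann sum over a uniform tagged partition of [a, b]; the tag
# function picks t_i inside each sub-interval [x_{i-1}, x_i].

def riemann_sum(f, a, b, n, tag):
    width = (b - a) / n
    total = 0.0
    for i in range(n):
        x0 = a + i * width
        x1 = x0 + width
        total += f(tag(x0, x1)) * width
    return total

def f(x):
    return x * x

lower = riemann_sum(f, 0.0, 1.0, 10000, lambda x0, x1: x0)  # lower Darboux
upper = riemann_sum(f, 0.0, 1.0, 10000, lambda x0, x1: x1)  # upper Darboux

assert lower <= 1.0 / 3.0 <= upper
assert upper - lower < 1e-3       # sums squeeze together as mesh -> 0
```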
A function is Darboux integrable if the upper and lower Darboux sums can be made to be arbitrarily close to each other for a sufficiently small mesh. Although this definition gives the Darboux integral the appearance of being a special case of the Riemann integral, they are, in fact, equivalent, in the sense that a function is Darboux integrable if and only if it is Riemann integrable, and the values of the integrals are equal. In fact, calculus and real analysis textbooks often conflate the two, introducing the definition of the Darboux integral as that of the Riemann integral, due to the slightly easier-to-apply definition of the former. The fundamental theorem of calculus asserts that integration and differentiation are inverse operations in a certain sense.

Lebesgue integration and measure

Lebesgue integration is a mathematical construction that extends the integral to a larger class of functions; it also extends the domains on which these functions can be defined. The concept of a measure, an abstraction of length, area, or volume, is central to Lebesgue integral probability theory.

Distributions

Distributions (or generalized functions) are objects that generalize functions. Distributions make it possible to differentiate functions whose derivatives do not exist in the classical sense. In particular, any locally integrable function has a distributional derivative.

Relation to complex analysis

Real analysis is an area of analysis that studies concepts such as sequences and their limits, continuity, differentiation, integration and sequences of functions. By definition, real analysis focuses on the real numbers, often including positive and negative infinity to form the extended real line. Real analysis is closely related to complex analysis, which studies broadly the same properties of complex numbers.
In complex analysis, it is natural to define differentiation via holomorphic functions, which have a number of useful properties, such as repeated differentiability, expressibility as power series, and satisfying the Cauchy integral formula. In real analysis, it is usually more natural to consider differentiable, smooth, or harmonic functions, which are more widely applicable, but may lack some more powerful properties of holomorphic functions. However, results such as the fundamental theorem of algebra are simpler when expressed in terms of complex numbers. Techniques from the theory of analytic functions of a complex variable are often used in real analysis – such as evaluation of real integrals by residue calculus.

Important results

Important results include the Bolzano–Weierstrass and Heine–Borel theorems, the intermediate value theorem and mean value theorem, Taylor's theorem, the fundamental theorem of calculus, the Arzelà–Ascoli theorem, the Stone–Weierstrass theorem, Fatou's lemma, and the monotone convergence and dominated convergence theorems.

Various ideas from real analysis can be generalized from the real line to broader or more abstract contexts. These generalizations link real analysis to other disciplines and subdisciplines. For instance, generalization of ideas like continuous functions and compactness from real analysis to metric spaces and topological spaces connects real analysis to the field of general topology, while generalization of finite-dimensional Euclidean spaces to infinite-dimensional analogs led to the concepts of Banach spaces and Hilbert spaces and, more generally, to functional analysis. Georg Cantor's investigation of sets and sequences of real numbers, mappings between them, and the foundational issues of real analysis gave birth to naive set theory. The study of issues of convergence for sequences of functions eventually gave rise to Fourier analysis as a subdiscipline of mathematical analysis.
Investigation of the consequences of generalizing differentiability from functions of a real variable to ones of a complex variable gave rise to the concept of holomorphic functions and the inception of complex analysis as another distinct subdiscipline of analysis. On the other hand, the generalization of integration from the Riemann sense to that of Lebesgue led to the formulation of the concept of abstract measure spaces, a fundamental concept in measure theory. Finally, the generalization of integration from the real line to curves and surfaces in higher dimensional space brought about the study of vector calculus, whose further generalization and formalization played an important role in the evolution of the concepts of differential forms and smooth (differentiable) manifolds in differential geometry and other closely related areas of geometry and topology.

See also

• Time-scale calculus – a unification of real analysis with calculus of finite differences
• Real multivariable function

1. ↑ Tao, Terence (2003). "Lecture notes for MATH 131AH". https://www.math.ucla.edu/~tao/resource/general/131ah.1.03w/week1.pdf.
2. ↑ "Sequences intro". https://www.khanacademy.org/math/algebra/x2f8bb11595b61c86:sequences/x2f8bb11595b61c86:introduction-to-arithmetic-sequences/v/explicit-and-recursive-definitions-of-sequences.
3. ↑ Gaughan, Edward (2009). "1.1 Sequences and Convergence". Introduction to Analysis. AMS. ISBN 978-0-8218-4787-9.
4. ↑ Some authors (e.g., Rudin 1976) use braces instead and write [math]\displaystyle{ \{a_n\} }[/math]. However, this notation conflicts with the usual notation for a set, which, in contrast to a sequence, disregards the order and the multiplicity of its elements.
5. ↑ Stewart, James (2008). Calculus: Early Transcendentals (6th ed.). Brooks/Cole. ISBN 978-0-495-01166-8. https://archive.org/details/calculusearlytra00stew_1.
6. ↑ Royden 1988, Sect.
5.4, page 108; Nielsen 1997, Definition 15.6 on page 251; Athreya & Lahiri 2006, Definitions 4.4.1, 4.4.2 on pages 128–129. The interval I is assumed to be bounded and closed in the former two books but not the latter book.
7. ↑ The term unconditional convergence refers to series whose sum does not depend on the order of the terms (i.e., any rearrangement gives the same sum). Convergence is termed conditional otherwise. For series in [math]\displaystyle{ \R^n }[/math], it can be shown that absolute convergence and unconditional convergence are equivalent. Hence, the term "conditional convergence" is often used to mean non-absolute convergence. However, in the general setting of Banach spaces, the terms do not coincide, and there are unconditionally convergent series that do not converge absolutely.

• Athreya, Krishna B.; Lahiri, Soumendra N. (2006), Measure theory and probability theory, Springer, ISBN 0-387-32903-X
• Nielsen, Ole A. (1997), An introduction to integration and measure theory, Wiley-Interscience, ISBN 0-471-59518-7
• Royden, H.L. (1988), Real Analysis (third ed.), Collier Macmillan, ISBN 0-02-404151-3

Original source: https://en.wikipedia.org/wiki/Real_analysis.
Calculus 2 Topics - Exploring the Core Concepts and Applications

Calculus 2 is the branch of mathematics that deals with integrating functions and understanding their applications. Following the foundational concepts of limits, derivatives, and basic integrals from Calculus 1, I find that this second course in the sequence dives deeper into integration techniques, such as integration by parts, trigonometric substitution, and partial fraction decomposition. Applications of these techniques are widespread in science, engineering, and physics, demonstrating the power of Calculus 2 in solving complex problems involving areas, volumes, and growth models. As a continuation in university-level calculus, I explore concepts including series and sequences, convergence tests, and the representation of functions as power series. Differential equations, both ordinary and partial, are also a significant focus within this field, providing essential tools for modeling dynamic systems across various disciplines. So stay tuned; there's an intriguing exploration of how these mathematical concepts translate to real-world applications that we experience every day.

Fundamental Concepts and Topics in Calculus 2

In my journey through the realms of mathematics, I have discovered that Calculus 2 is a fascinating expansion of the foundational knowledge obtained in Calculus 1. This course primarily explores integral calculus and series in great depth. As I delve into the topics, I find myself navigating through a variety of integration techniques essential for solving complex integrals. Integration techniques include:

• Integration by parts: Given by the formula $\int u dv = uv – \int v du$, which is derived from the product rule of derivatives.
• Trigonometric substitution: Useful when encountering integrals containing $\sqrt{a^2 – x^2}$, $\sqrt{a^2 + x^2}$, or $\sqrt{x^2 – a^2}$.
• Partial Fractions: Decomposing a rational function into simpler fractions that can be integrated easily.
• Improper Integrals: Integrals with infinite limits of integration or integrands with infinite discontinuities.

The Fundamental Theorem of Calculus links differentiation and integration, presenting them as inverse processes. This theorem consists of two parts that assure us we can evaluate definite integrals ($\int_{a}^{b} f(x) dx$) by finding an antiderivative. Calculus 2 also emphasizes applications of integration, which allow me to calculate:

• Areas between curves
• Volumes of solids (via methods like discs and shells)
• Arc lengths and surface areas
• Center of mass and hydrostatic force

Sequences and infinite series are profoundly interesting concepts I encounter in Calculus 2. They include:

| Type of Series | Test Used | Convergence Criterion |
| --- | --- | --- |
| Geometric Series | Formula Based | $-1 < r < 1$ |
| P-Series | P-Series Test | Converges if $p > 1$ for $\sum \frac{1}{n^p}$ |
| Alternating Series | Alternating Series Test | Decreasing sequence and $\lim_{n\to\infty} a_n = 0$ |
| Ratio and Root Tests | Comparison to a Geometric Series | Ratio < 1 or Root < 1 for convergence |
| Integral Test | Comparing to an integral | Series behaves like an integral, which converges or diverges |

In exploring the realm of series, I also learn about several special types, like Taylor and Maclaurin series, which provide polynomial approximations to functions in the form $\sum \frac{f^{(n)}(a)}{n!}(x-a)^n$, where the center of the series is 'a' for Taylor series and 0 for Maclaurin. Ultimately, I find that understanding these central concepts opens a door to solving many real-world problems with precision and elegance.

Advanced Topics in Calculus 2

When I explore the advanced topics of Calculus 2, it feels like delving deeper into the mathematical universe. Infinite series stand out as a fundamental concept. They allow me to express functions as the sum of their infinite components, which is crucial in many branches of mathematics and physics.
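The ratio test mentioned above can be sketched minimally. A hedged illustration (my own, with the caveat that estimating the limit L from a single late-index ratio at n = 50 is a heuristic, not a proof):

```python
# Ratio test heuristic: if |a_{n+1}/a_n| -> L, the series converges
# when L < 1 and diverges when L > 1; the test is inconclusive at L = 1.

def ratio_estimate(a, n=50):
    # Late-index ratio |a_{n+1} / a_n| as a stand-in for the limit L.
    return abs(a(n + 1) / a(n))

assert ratio_estimate(lambda n: 0.5 ** n) < 1.0             # converges (L = 1/2)
assert ratio_estimate(lambda n: n * 2.0 ** n) > 1.0         # diverges (L = 2)
assert abs(ratio_estimate(lambda n: 1.0 / n) - 1.0) < 0.05  # inconclusive, L = 1
```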
Working with infinite series also leads to an understanding of convergence and divergence, often through ratio and root tests, where I examine the behavior of sequences.

Transitioning into polar coordinates, the focus shifts to analyzing curves and areas in a plane using the radius and angle as coordinates. These are represented by $(r, \theta)$, where $r$ is the distance from the origin and $\theta$ is the angle from the polar axis. The relationship with Cartesian coordinates is given by $x = r\cos(\theta)$ and $y = r\sin(\theta)$.

Another fascinating area is that of parametric equations, which describe a curve in the plane using an independent parameter, usually denoted as $t$. For any point on the curve, I express the coordinates as functions of $t$: $x(t)$ and $y(t)$. This leads me to vector functions, which are pivotal in modeling physical phenomena. A vector function may be represented as $\vec{v}(t) = \langle f(t), g(t), h(t) \rangle$, allowing me to describe motion in three dimensions.

Lastly, differential equations offer a significant leap in understanding dynamic systems. An equation like $\frac{dy}{dx} = ky$ models exponential growth or decay. In Calculus 2, I often practice solving these equations through the separation of variables and integrating factors.

| Topic | Description | Key Formula |
| --- | --- | --- |
| Infinite Series | Summation of infinite sequence terms to represent functions | $\sum_{n=1}^{\infty} a_n$ |
| Polar Coordinates | Represent points in the plane with radius and angle from the origin | $(r, \theta)$ |
| Parametric Equations | Describe curves using an independent parameter | $x(t), y(t)$ |
| Vector Functions | Model physical phenomena in multiple dimensions | $\vec{v}(t) = \langle f(t), g(t), h(t) \rangle$ |
| Differential Equations | Equations involving derivatives of functions and their solutions | $\frac{dy}{dx} = ky$ |

For anyone looking for Calculus 2 help, these subjects are typically where more practice is needed, as they build toward higher-level mathematics.
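The growth model dy/dx = ky mentioned above has the exact solution y = y₀·e^(kx). A hedged numeric sketch (my own illustration): a forward Euler integration approaches the exact solution as the step size shrinks; the parameter values and step count are arbitrary choices.

```python
# Forward Euler for y' = k*y: repeatedly apply y <- y + h*k*y.
# Compare against the exact solution y0 * exp(k * x_end).
import math

def euler(k, y0, x_end, steps):
    h = x_end / steps
    y = y0
    for _ in range(steps):
        y += h * k * y
    return y

k, y0, x_end = 0.5, 2.0, 1.0
approx = euler(k, y0, x_end, 100000)
exact = y0 * math.exp(k * x_end)

assert abs(approx - exact) < 1e-4
```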
In learning Calculus 2, I've engaged with a rich array of mathematical concepts that serve as the building blocks for advanced study in many fields. I've tackled integration techniques such as Integration by Parts, which is elegantly represented by the formula $\int u dv = uv – \int v du$, and explored methods for handling diverse functions using Trig Substitutions and Partial Fractions. Exploring series and sequences deepened my understanding of convergence with tools like the Integral Test and Comparison Test; mathematical expressions like the $n$-th term of a sequence, $a_n$, became familiar friends.

The practical applications have been particularly satisfying to learn: I've used integrals to calculate the arc length of a curve, expressed as $L = \int_a^b \sqrt{1 + \left(\frac{dy}{dx}\right)^2} dx$, and to find surface areas and volumes of solids of revolution. Differential equations also opened a new world where I learned to describe growth and decay models, along with the behavior of systems over time, delineated by first- and second-order differential equations.

As a gateway into the world of higher mathematics and a variety of applications in science and engineering, Calculus 2 has provided me with invaluable tools and techniques. My journey through these topics has not just been about acquiring knowledge but also about enhancing my problem-solving skills and my capacity for analytical thinking.
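The integration-by-parts formula quoted above can be verified numerically for a concrete case. A hedged sketch (my own illustration): with u = x and dv = eˣ dx on [0, 1], both sides of ∫u dv = uv − ∫v du should equal ∫₀¹ x·eˣ dx = e − (e − 1) = 1; the trapezoid rule and its step count are arbitrary choices.

```python
# Check int_0^1 x*e^x dx against the integration-by-parts identity:
# lhs = int u dv,  rhs = [u*v]_0^1 - int v du, with u = x, v = e^x.
import math

def trapezoid(g, a, b, n=100000):
    h = (b - a) / n
    total = 0.5 * (g(a) + g(b))
    for i in range(1, n):
        total += g(a + i * h)
    return total * h

lhs = trapezoid(lambda x: x * math.exp(x), 0.0, 1.0)
boundary = 1.0 * math.exp(1.0) - 0.0 * math.exp(0.0)   # [u*v] from 0 to 1
rhs = boundary - trapezoid(lambda x: math.exp(x), 0.0, 1.0)

assert abs(lhs - 1.0) < 1e-6    # exact value of the integral is 1
assert abs(lhs - rhs) < 1e-6    # both sides of the identity agree
```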
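The arc-length formula quoted above can likewise be checked on a curve with a closed-form answer. A hedged sketch (my own illustration): for y = x^(3/2) on [0, 1], the substitution u = 1 + 9x/4 gives the exact length (13√13 − 8)/27; the trapezoid step count is an arbitrary choice.

```python
# Numeric arc length L = int_a^b sqrt(1 + (dy/dx)^2) dx for
# y = x^(3/2), where dy/dx = 1.5 * sqrt(x).
import math

def arc_length(dydx, a, b, n=200000):
    h = (b - a) / n
    g = lambda x: math.sqrt(1.0 + dydx(x) ** 2)
    total = 0.5 * (g(a) + g(b))
    for i in range(1, n):
        total += g(a + i * h)
    return total * h

L = arc_length(lambda x: 1.5 * math.sqrt(x), 0.0, 1.0)
exact = (13.0 * math.sqrt(13.0) - 8.0) / 27.0

assert abs(L - exact) < 1e-6
```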
Weak charge form factor and radius of 208Pb through parity violation in electron scattering

We use distorted wave electron scattering calculations to extract the weak charge form factor F[W](q̄), the weak charge radius R[W], and the point neutron radius R[n] of 208Pb from the Lead Radius Experiment (PREX) parity-violating asymmetry measurement. The form factor is the Fourier transform of the weak charge density at the average momentum transfer q̄=0.475 fm^-1. We find F[W](q̄)=0.204±0.028(exp)±0.001(model). We use the Helm model to infer the weak radius from F[W](q̄). We find R[W]=5.826±0.181(exp)±0.027(model) fm. Here the experimental error includes PREX statistical and systematic errors, while the model error describes the uncertainty in R[W] from uncertainties in the surface thickness σ of the weak charge density. The weak radius is larger than the charge radius, implying a "weak charge skin" where the surface region is relatively enriched in weak charges compared to (electromagnetic) charges. We extract the point neutron radius R[n]=5.751±0.175(exp)±0.026(model)±0.005(strange) fm from R[W]. Here there is only a very small error (strange) from possible strange quark contributions. We find R[n] to be slightly smaller than R[W] because of the nucleon's size. Finally, we find a neutron skin thickness of R[n]-R[p]=0.302±0.175(exp)±0.026(model)±0.005(strange) fm, where R[p] is the point proton radius.
In my last blog post, I wrote about learning Rust and implementing the RaptorQ (RFC6330) fountain code. I only optimized the library for handling small message sizes, since it was mainly a project to help me learn Rust. However, since releasing it, a number of people have started using the raptorq crate, so I've been working on making it more polished. Shortly after open sourcing my initial implementation, someone pointed out that my library was slow for large numbers of symbols. RaptorQ encodes messages into a configurable number of symbols, and encoding more than ~200 symbols requires special data structures for doing sparse matrix math, which I hadn't bothered to implement. Never one to pass up a performance optimization challenge, I thought it would be fun to implement those, and along the way I discovered a bunch of optimizations that aren't documented in the RFC.

First off, I established a benchmark using 1280 byte symbols and 61MB messages split into 50000 symbols. 1280 bytes is a standard size used by other libraries, because it fits well into an IP packet. The proprietary Codornices library developed by ICSI achieves 935Mbit/s on this benchmark, using similar hardware (they used a Ryzen 5 2600 @ 3.4GHz, and I used a Core i5-6600K @ 3.5GHz).

v0.10.0: 206Mbit/s

After implementing sparse matrix routines to match the shape of the matrices used in the RaptorQ algorithm, and some straightforward optimizations – like a fast path for rows with only a single non-zero element – I was at about 20% of the fastest proprietary implementations. Not bad for a few days of work!

v1.0.0: 220Mbit/s

For v1.0, I only made one additional optimization to the sparse FMA code path. Several people had reached out to me about using the raptorq crate in production, so I mostly worked on a few stability features for a 1.0 release, like a complete encode & decode test for every possible symbol count to ensure correctness.
v1.1.0: 493Mbit/s

With the 1.0 release out, I went back to performance optimization. I spent a lot of time profiling my code, and at first couldn't find much more to optimize. The functions taking the most time were the core math routines, and those were already highly optimized with SIMD intrinsics. I figured the proprietary implementations must be using a bunch of special heuristics, or maybe a completely different matrix inverse algorithm. I decided to read through the RFC again, as well as the original Raptor paper, and see if I could figure them out myself. After many hours puzzling over the algorithm, I hit upon several optimizations which aren't noted in the RFC, and I documented them as a reference. Mostly these insights are sections of the matrix which are guaranteed to be binary valued instead of values in the full GF(256) range. This allows the use of bit packing to further accelerate the multiply & add functions, and it sped up performance to 360Mbit/s.

The second major insight was that cache had become the limiting factor.
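To sketch why binary-valued sections help (my illustration, not code from the raptorq crate): addition in GF(256) is XOR, so when a row's entries are all 0 or 1 the row can be packed one bit per element, and a fused multiply-add of that row collapses to word-wide XORs with no GF(256) table lookups at all.

```rust
/// XOR a bit-packed binary row into another, processing 64 matrix
/// elements per u64 word. In GF(256) addition is XOR, and a binary
/// coefficient (0 or 1) makes the "multiply" step disappear entirely.
fn xor_row(dst: &mut [u64], src: &[u64]) {
    for (d, s) in dst.iter_mut().zip(src.iter()) {
        *d ^= *s;
    }
}
```

Compared to a byte-per-element GF(256) row, this is both 8x denser in memory and free of per-element multiplication.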
Almost all the data structures I used were O(1), but performance steadily degrades as symbol count increases:

symbol count = 10, encoded 127 MB in 0.555secs, throughput: 1844.9Mbit/s
symbol count = 100, encoded 127 MB in 0.606secs, throughput: 1688.8Mbit/s
symbol count = 250, encoded 127 MB in 0.926secs, throughput: 1104.7Mbit/s
symbol count = 500, encoded 127 MB in 0.892secs, throughput: 1144.1Mbit/s
symbol count = 1000, encoded 126 MB in 1.013secs, throughput: 1002.6Mbit/s
symbol count = 2000, encoded 126 MB in 1.174secs, throughput: 865.1Mbit/s
symbol count = 5000, encoded 122 MB in 1.353secs, throughput: 721.8Mbit/s
symbol count = 10000, encoded 122 MB in 1.768secs, throughput: 552.4Mbit/s
symbol count = 20000, encoded 122 MB in 2.754secs, throughput: 354.6Mbit/s
symbol count = 50000, encoded 122 MB in 4.342secs, throughput: 224.9Mbit/s

"perf stat -e instructions,cycles,cache-misses,cache-references" makes it obvious what's going on:

symbol count = 100, encoded 127 MB in 0.594secs, throughput: 1723.0Mbit/s

Performance counter stats for 'cargo bench ...':
5,254,632,204 instructions # 2.23 insn per cycle
2,353,214,171 cycles
1,606,739 cache-misses # 1.835 % of all cache refs
87,562,092 cache-references

With 100 symbols (above) the CPU is executing 2.23 instructions per cycle and getting > 98% cache hits, which is great! However, with 50000 symbols (below) the story is completely different. IPC is only 1.22, and cache misses are at 43%.

symbol count = 50000, encoded 122 MB in 4.280secs, throughput: 228.2Mbit/s

Performance counter stats for 'cargo bench ...':
20,323,685,241 instructions # 1.22 insn per cycle
16,717,417,896 cycles
478,019,226 cache-misses # 43.244 % of all cache refs
1,105,393,476 cache-references

To further improve performance, the key was reducing the size of the working set. Encoding 50000 symbols required 93MB of memory to store the matrix and associated data structures, which is far too large to fit in L2 cache.
After a lot of optimization I was able to reduce this to 11MB, which still doesn’t entirely fit in L2, but was enough to increase IPC to > 1.7. The two main optimizations I did were both further bit packing. For example, I had previously stored sparse elements as (index, value) tuples with the type (usize, u8). Due to alignment, this type actually takes 16 bytes on a 64-bit platform. Only 3 bytes are required though, because the index has a maximum value of 56403, and in fact, after some further analysis I was able to compact it down to 2 bytes because the values in that section of the matrix are binary, so I could store only the index and make zeros implicit. Altogether this improved performance to 493Mbit/s, which is within a factor of 2 of the fastest proprietary implementation I know of! If you’re an expert in finite field math and know of other ways to optimize this matrix inversion, hit me up! I’d love to collaborate on making this the fastest RaptorQ implementation in the world.
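The alignment point is easy to check concretely (my sketch, not the crate's actual types): a (usize, u8) tuple really does occupy 16 bytes on a typical 64-bit target, while a bare u16 index, with the binary value left implicit, needs only 2.

```rust
use std::mem::size_of;

/// Naive sparse element: a full-width index plus an explicit value.
/// On a 64-bit target, the 8-byte alignment of usize pads this to 16 bytes.
type NaiveElement = (usize, u8);

/// Packed element for the binary-valued region of the matrix: just the
/// index (the maximum, 56403, fits in a u16); a stored index means "value = 1".
type PackedElement = u16;

fn naive_size() -> usize {
    size_of::<NaiveElement>()
}

fn packed_size() -> usize {
    size_of::<PackedElement>()
}
```

That is an 8x reduction per element before even counting the elements whose zeros become implicit.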
Understanding Mathematical Functions: What Color Is Your Function

Introduction to Mathematical Functions and Their Colorful Analogies

Understanding mathematical functions is crucial for anyone studying mathematics or working in fields where mathematics is applied. Functions are essential tools for representing relationships between different quantities, and they have widespread applications in various scientific and engineering disciplines. In this blog post, we will explore the concept of "coloring" functions as a visual aid to better understand their behavior and properties.

A. Importance of understanding functions in mathematics and their applications

Functions play a fundamental role in mathematics, serving as the building blocks for various mathematical concepts and theories. They are used to describe dynamic processes, model physical phenomena, analyze data, and solve real-world problems. For example, in calculus, functions are used to represent rates of change and quantities that depend on other quantities. In algebra, functions are essential for solving equations and manipulating expressions.

In real-world applications, functions are used in diverse fields such as physics, biology, economics, and engineering. They are employed to model the behavior of systems, optimize processes, predict outcomes, and make informed decisions. Understanding functions and their properties is crucial for professionals working in these fields to interpret data, develop models, and solve practical problems.

B. Overview of the concept of "coloring" functions to aid visualization and understanding

To aid in visualizing and understanding functions, mathematicians and educators use the concept of "coloring" functions. This metaphorical approach assigns colors to different parts of a function based on their behavior, allowing for easier interpretation and analysis.
Coloring functions is a powerful tool for enhancing comprehension and intuition in mathematics, particularly for students and learners who benefit from visual representations. By assigning colors to functions, we can highlight different aspects such as increasing or decreasing behavior, concavity, points of inflection, symmetry, and other key properties. This visual representation enables us to grasp the behavior and characteristics of a function more intuitively, making it easier to identify patterns and make connections between different functions.

C. Brief explanation of what to expect in the blog post

In this blog post, we will delve into the concept of "coloring" functions in mathematics. We will explore the rationale behind using visual aids to represent functions and discuss how different colors can signify various properties of functions. Additionally, we will provide examples and illustrations to demonstrate how coloring functions can aid in understanding their behavior and applications in real-world problems. By the end of this post, you will have a clearer understanding of how to interpret functions visually and leverage color coding as a powerful tool in mathematics.

Key Takeaways

• Functions can be represented by different colors
• Color can help visualize and understand functions
• Color can represent different aspects of a function
• Understanding color in functions can aid in problem-solving
• Color can make complex functions more accessible

The Basics of Mathematical Functions

Mathematical functions are a fundamental concept in mathematics and are used to describe the relationship between two sets of numbers. They are essential in various fields such as physics, engineering, economics, and computer science. Understanding the basics of mathematical functions is crucial for anyone studying or working in these fields.
A. Definition of a function and its components (domain, range, and relation)

A function is a rule that assigns to each input value exactly one output value. The input values of a function are called the domain, and the output values are called the range. The relation between the domain and range is what defines the function. In other words, for every input, there is exactly one output.

B. Different types of functions commonly studied (linear, quadratic, polynomial, trigonometric, etc.)

There are various types of functions commonly studied in mathematics. Some of the most common types include linear functions, quadratic functions, polynomial functions, and trigonometric functions. Each type of function has its own unique characteristics and properties, making them useful for different applications.

C. How functions are represented graphically and the significance of their shapes

Functions can be represented graphically using graphs or plots. The graph of a function is a visual representation of the relationship between the input and output values. The shape of the graph provides important information about the behavior of the function. For example, the slope of a linear function can be determined by the steepness of its graph, while the concavity of a quadratic function can be determined by the shape of its graph.

'Coloring' Your Function: Visualization Techniques

When it comes to understanding mathematical functions, visualization can be a powerful tool. One way to enhance the visual representation of a function is through the use of color. Color can be used to distinguish between different parts of a function, highlight critical features, and ultimately enhance understanding and memory retention.

A. How color can be used to distinguish between different parts of a function

Color can be used to differentiate between various components of a function, such as different branches, curves, or segments.
By assigning different colors to different parts of the function, it becomes easier to identify and track each component separately. This can be particularly useful in complex functions with multiple elements, as it allows for a clearer visual representation.

B. The use of color to highlight critical features of a function

Color can also be used to highlight critical features of a function, such as roots, intercepts, and asymptotes. By using color to draw attention to these key points, it becomes easier for the viewer to identify and understand the behavior of the function. For example, using a distinct color for the x-intercepts or asymptotes can make them stand out more prominently on the graph.

C. Practical examples of function graphs with color-coding, enhancing understanding and memory retention

Practical examples of function graphs with color-coding can significantly enhance understanding and memory retention. By using color to differentiate between different parts of the function and highlight critical features, students can develop a clearer mental image of the function's behavior. This can lead to improved comprehension and retention of the function's properties and behavior.

For instance, in a quadratic function, using different colors for the vertex, axis of symmetry, and roots can help students visualize the relationship between these elements more effectively. Similarly, in trigonometric functions, color-coding can be used to distinguish between different periods, amplitudes, and phase shifts, making it easier for students to grasp the function's behavior. Overall, the use of color in function graphs can significantly enhance the visualization of mathematical functions, making them more accessible and comprehensible to students.

The Role of Color in Understanding Function Behavior

Understanding mathematical functions can be a challenging task for many students. However, the use of color can greatly enhance the comprehension of function behavior.
By associating specific colors with different aspects of a function, students can visually grasp the concepts of slope, increase, decrease, and curvature changes.

A. How colors can represent the slope or gradient of a function

Color can be used to represent the slope or gradient of a function. For example, a function with a positive slope can be associated with warm colors such as red or orange, while a function with a negative slope can be represented by cool colors like blue or green. This visual representation helps students understand the direction and steepness of the function at different points.

B. Color shades indicating where a function increases, decreases, or undergoes curvature changes

Color shades can also be used to indicate where a function increases, decreases, or undergoes curvature changes. For instance, a function that is increasing can be depicted in lighter shades of a color, while a decreasing function can be shown in darker shades. Additionally, changes in curvature can be highlighted using contrasting colors, making it easier for students to identify these critical points in the function.

C. Examples of how teachers and educational software use color to demonstrate function behavior

Teachers and educational software often utilize color to demonstrate function behavior in a visual and engaging manner. Graphs and diagrams are commonly color-coded to illustrate different aspects of functions, allowing students to easily identify key features such as maxima, minima, and inflection points. By incorporating color into their teaching materials, educators can effectively convey complex mathematical concepts to their students.

Interpreting the 'Colors' of Complex Functions

Understanding complex mathematical functions can be a daunting task, especially when dealing with multi-dimensional data. Visualizing these functions in a way that is easy to comprehend is a significant challenge for mathematicians, scientists, and engineers.
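The warm/cool slope mapping described earlier can be sketched as a tiny helper (hypothetical names; a minimal sketch of the idea, not code from any particular graphing tool):

```rust
/// Map the local slope of f at x to a color name using a central
/// finite difference: warm (red) where the function is increasing,
/// cool (blue) where it is decreasing, gray where it is locally flat.
fn slope_color(f: impl Fn(f64) -> f64, x: f64) -> &'static str {
    let h = 1e-6;
    let slope = (f(x + h) - f(x - h)) / (2.0 * h);
    if slope > 1e-9 {
        "red" // warm: increasing
    } else if slope < -1e-9 {
        "blue" // cool: decreasing
    } else {
        "gray" // flat
    }
}
```

Evaluating this rule across a grid of x values and painting each sample with the returned color is exactly the kind of slope-coded plot the section describes.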
One approach that has proven to be effective is the use of color coding to represent complex functions.

A. The challenge of visualizing and understanding multi-dimensional and complex functions

Multi-dimensional and complex functions often involve a large amount of data that is difficult to represent visually. Traditional methods of plotting such functions on a 2D plane can be limiting and fail to capture the full complexity of the function. This makes it challenging for individuals to interpret and understand the behavior of these functions.

B. Strategies for applying color coding to complex functions

One effective strategy for visualizing complex functions is the use of color coding, particularly in the form of heatmaps for surface plots. By assigning different colors to different function values, a heatmap can provide a clear and intuitive representation of the function's behavior in multiple dimensions. This allows for a more comprehensive understanding of the function's behavior.

Another approach is to use color gradients to represent the magnitude or direction of a function at each point in a plot. This can be particularly useful for functions that involve vector fields or gradients, as it provides a visual indication of the function's behavior at different points.

C. Real-world situations where color-coded functions facilitate comprehension

Color-coded functions have proven to be invaluable in real-world applications, particularly in fields such as engineering and physics. In engineering, complex functions that describe the behavior of materials or systems can be visualized using color coding to identify areas of stress, strain, or other critical parameters. This allows engineers to make informed decisions based on the visual representation of the function.

In physics, color-coded functions are used to represent complex phenomena such as electromagnetic fields, wave functions, and quantum mechanical systems.
By using color coding, physicists can gain insights into the behavior of these systems and make predictions based on the visual representation of the functions. Overall, the use of color coding to represent complex functions has proven to be a powerful tool for facilitating comprehension and interpretation in a wide range of disciplines.

Troubleshooting Common Issues with Function Visualization

When visualizing mathematical functions, selecting the right color scheme is crucial for effectively conveying information. However, there are common challenges encountered when choosing colors for function representation, as well as misinterpretations caused by poor color choices or confusing visualizations. Additionally, it's important to consider accessibility for those with color vision deficiencies. Let's explore these issues and discuss potential solutions.

A. Challenges encountered when selecting color schemes for function representation

• Color contrast: One common challenge is ensuring sufficient contrast between different function curves or data points. Inadequate contrast can make it difficult for viewers to distinguish between multiple functions or identify specific features within a single function.
• Color symbolism: Another challenge is the potential misinterpretation of color symbolism. For example, using red and green to represent positive and negative values may be confusing for individuals with color vision deficiencies.
• Color consistency: Maintaining consistency in color usage across different visualizations or platforms can be challenging. Colors may appear differently on various devices or when printed, leading to discrepancies in interpretation.

B. Solutions to common misinterpretations caused by poor color choices or confusing visualizations

• Use of distinct hues: To address the challenge of color contrast, it's important to select distinct hues for different functions or data sets.
This ensures that each element stands out clearly against the background and other elements.
• Color-blind friendly palettes: To mitigate the risk of misinterpretation due to color vision deficiencies, using color-blind friendly palettes can help ensure that individuals with such conditions can still differentiate between different elements in the visualization.
• Accessible color choices: Choosing colors that are universally recognizable and avoiding reliance on color alone for conveying information can help address issues related to color symbolism and consistency.

C. Tips for choosing color palettes that are accessible to those with color vision deficiencies

• Utilize color-blind simulation tools: Before finalizing a color palette, it's beneficial to use color-blind simulation tools to ensure that the visualization remains clear and distinguishable for individuals with color vision deficiencies.
• Emphasize brightness and saturation: In addition to distinct hues, emphasizing differences in brightness and saturation can aid in making visualizations more accessible to individuals with color vision deficiencies.
• Provide alternative visual cues: Incorporating alternative visual cues, such as patterns or textures, alongside color can enhance the accessibility of the visualization for all viewers.

Conclusion & Best Practices for Color Coding Functions

A. Recap of the importance of color in visualizing and understanding mathematical functions

Color plays a crucial role in visualizing and understanding mathematical functions. It helps in differentiating between multiple functions, highlighting specific features, and making complex concepts more accessible. By assigning different colors to various functions, it becomes easier to identify and track their behavior, especially when dealing with graphs and plots.

B. Best practices for effectively using color in mathematical function representations

• Choose a color scheme that provides clear contrast and is visually appealing.
Avoid using colors that are too similar, as they may cause confusion.
• Use color to emphasize key points or specific parts of a function, such as critical points, asymptotes, or intersections.
• Consider color-blindness when selecting colors to ensure that your visual representations are accessible to all individuals.
• Utilize color coding consistently across different representations of the same function to maintain coherence and aid in comprehension.

C. Encouragement to experiment with color and visualization techniques to enhance learning and communication in mathematics

Don't be afraid to experiment with color and visualization techniques when studying or teaching mathematical functions. Whether you are a student trying to grasp complex concepts or an educator looking for innovative ways to communicate ideas, incorporating color can significantly enhance the learning experience. Try using different color combinations, shading, and patterns to create visually engaging representations of functions. By doing so, you can improve comprehension and retention of mathematical concepts.
Rep Max Calculator

Enter the weight you lifted and the number of reps performed to calculate your 1RM (one-rep max) and estimated reps for different formulas.

Understanding Rep Max Calculations

The one-rep max (1RM) is the maximum amount of weight that a person can lift for one repetition of a given exercise. The formulas used in this calculator provide estimates based on the number of repetitions and the weight lifted.

Formulas Used in This Calculator

Each formula estimates your 1RM based on different models and assumptions. These formulas are commonly used in strength training and are referenced as follows:

How It Works

The calculator first calculates your estimated 1RM using each of the listed formulas. It then provides rep maxes for 2RM, 3RM, and up to 10RM based on these estimates. These calculations provide a rough estimate and should be used as guidelines rather than exact predictions. Always consider your actual lifting ability and consult a professional coach when planning your training program.

Suf is a senior advisor in data science with deep expertise in Natural Language Processing, Complex Networks, and Anomaly Detection. Formerly a postdoctoral research fellow, he applied advanced physics techniques to tackle real-world, data-heavy industry challenges. Before that, he was a particle physicist at the ATLAS Experiment of the Large Hadron Collider. Now, he's focused on bringing more fun and curiosity to the world of science and research online.
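The page's own formula list did not survive extraction, so as an illustration here are two widely used 1RM estimators, Epley and Brzycki. That a calculator like this uses these particular formulas is my assumption; they are shown only as representative examples of the genre.

```rust
/// Epley estimate: 1RM = w * (1 + r / 30).
fn epley_1rm(weight: f64, reps: u32) -> f64 {
    weight * (1.0 + reps as f64 / 30.0)
}

/// Brzycki estimate: 1RM = w * 36 / (37 - r).
fn brzycki_1rm(weight: f64, reps: u32) -> f64 {
    weight * 36.0 / (37.0 - reps as f64)
}
```

For 100 kg lifted for 10 reps, both formulas happen to agree at roughly 133.3 kg, though they diverge at higher rep counts, which is why multi-formula calculators report a range rather than a single number.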
Mathematics 790, Spring 2017
Minicourse: A beginner's guide to A-infinity categories and Fukaya categories

Time: Tuesdays and Thursdays, 10:05-11:20am
Dates: February 14 to March 21
Location: Physics 205
Instructor: Lenny Ng

Fukaya categories play a crucial role in modern symplectic topology and provide one side of Kontsevich's famous "Homological Mirror Symmetry" conjecture (the other side is given by coherent sheaves). In symplectic topology, Fukaya categories arose from the pioneering work of Floer on Lagrangian intersection Floer theory, which also underlies many recent developments in low-dimensional topology (e.g., Heegaard Floer theory). In this minicourse, I will give a (rather nontechnical) introduction to Fukaya categories. This includes a geometry-flavored overview of Lagrangian intersection Floer theory as well as an algebra-flavored treatment of the algebraic structures necessary to set up Fukaya categories, including A-infinity algebras and A-infinity categories.

My intention is to make this minicourse as accessible as possible. Familiarity with algebraic topology and differential geometry at the beginning graduate level (Math 611 and 621) will be helpful; familiarity with symplectic geometry will not be assumed.

Here is an outline of topics I plan to address, to varying extents, in the minicourse:
• Morse homology (the basic setup)
• background in symplectic geometry: symplectic forms, Lagrangian submanifolds, Hamiltonian diffeomorphisms
• Lagrangian intersection Floer homology
• A-infinity algebras and A-infinity categories
• Fukaya categories

I will loosely follow Denis Auroux's "A beginner's introduction to Fukaya categories" but will supplement this with other sources. Course notes (will update as we go); for the early material on Morse homology, there are also notes from my previous minicourse in spring 2015.
{"url":"https://services.math.duke.edu/~ng/math790s17/","timestamp":"2024-11-09T03:19:35Z","content_type":"text/html","content_length":"2745","record_id":"<urn:uuid:697b62ed-e22a-4050-b560-2ca412ee444c>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00154.warc.gz"}
The Breadth of the Game Testing Specialty

I've posted quite a bit on game testing here, from being exploratory with Star Wars: The Old Republic, to bumping the lamp with Grand Theft Auto V, to ludonarrative in Elden Ring. I've also shown how comparative performance testing is done with games like Horizon Zero Dawn. These articles offered a bit of depth. What I want to do here is show the breadth of game testing and some of the dynamics involved since it's quite a specialized sub-discipline within testing. What I'm going to do here is present a lot of information but there is going to be a throughline to each element. By the end of this post, I hope you can see how this breadth of testing in a game context is a very discernible spectrum. So let's dig in. First, let's set up some context.

Game Quality Can Be (Initially) Problematic

If you're a gamer, you certainly know the story of Cyberpunk 2077. It released in December 2020 and had numerous bugs and was virtually unplayable on last-gen consoles. It was touted as being playable on them, however, which was a major sore point with the gaming community since the claim was obviously and demonstrably false. There were a lot of stories about the quality assurance on this game and the politics thereof. A lot of that was more noise than signal but, for testers, probably an interesting episode to watch. More recently we just had The Callisto Protocol (December 2022), which was not quite as unplayable but had massive and regular stuttering issues on the PC, although apparently not on consoles. Many of these have been fixed at the time of writing.

The common thread here is that games release and, on day one, often have very obvious bugs that people wonder: how could this possibly have not been seen in testing? The simple answer is: the problems probably were seen in testing, at least to some extent. In the case of Cyberpunk 2077, we know that they were seen.
Often there is hope that a so-called "Day 1 Patch" will resolve many things, which does, when looked at from the outside, seem to argue for just pushing the release one more day. Other times it's that the pure combination of hardware and software makes it not as feasible to just "check for quality." Games have to run under a wide variety of conditions and are extremely demanding of computer resources, both CPU and GPU, which introduces a series of, quite literally, varying variables. Consider the release of Assassin's Creed: Valhalla, where numerous analyses were done to show how comparing the PlayStation 5 graphics settings against the PC graphics settings was able to showcase a variety of possible bugs in the implementation of the latter. Probably one of the better analyses I've seen of this is this video from Digital Foundry. Yet I'm actually not here to talk about these points. How games release in the industry is often timeline based, and that's often based on game development studios working with commitments that are made to their publishers.

Game Testing Can Be Difficult

What I do want to re-emphasize a bit here, particularly in relation to my above-mentioned previous posts, is how difficult game testing can be. Now I talked about this specific point before in a few articles: "Testing Games is Hard," "Testing Like a Gamer," and "Gaming Like a Tester." Saying "testing games is hard" isn't meant to make excuses for shoddy performance of games, particularly as those game publishers increase prices and try to entice with pre-order bonuses of some sort. Rather, it's simply to show that being a game tester — particularly one that works closely in the design and development process — can be a very challenging, but extremely rewarding, aspect of specializing in testing. That specializing part is critical and that's why this post is titled with "Testing Specialty." I feel like I can best show this by the breadth of game testing.
In what follows I'm going to use a lot of terminology from the Unreal Engine, simply because it's been popularized so well and is easily referenceable if you want to look up terms. Just know that any terms I mention here have parallels in quite literally all other major gaming engines, whether that's CryENGINE, Frostbite, R.A.G.E, Dunia, Havok, Infernal Engine, and so on.

Tester Insights Into Game Fidelity

Here I lump a lot of things under "game fidelity." What I mean by that are the possible experiences — visual, auditory, tactile, interactive — that a gamer can have and the underlying mechanisms that amplify or degrade the quality around those experiences. I find way too many game testers are not engaged at these multiple levels of abstraction.

Fidelity Around Positioning

One obvious challenge is the positioning of characters. This is a bit harder than it seems. Consider this:

Clearly that's not looking too good, right? What we want is this:

That seems so simple and yet what developers do to make that work is the use of what are called control rigs that help implement inverse kinematics, in this case applying to the character's legs. This is done so that, in this particular case, the character's feet can look natural on sloped surfaces. But this applies to any situation where the character's body has to naturally conform to aspects of the environment.

Now here's a possibly interesting thing: I'll often have game testers tell me: "I don't need to really know all that, though. I'm just doing play-testing." But, as I'm arguing, testing in a game context is not — or at least should not — be just about "play-testing." It should be about making determinations around the quality of experience. That matches what testing should be about in any context, not just for games.
Testers always need to be active participants in engaging with developers about how game engines work, how those engines enable game experiences, and how those experiences can have varying levels of quality. Those varying levels of quality are starting to cost game developers, and their publishers, in terms of reputation as well as sales. So there is a real-world, economic incentive here as well if you need a way to make the argument.

Assuming you’ll give me the benefit of the doubt here, let’s say a tester needs to know about this. What do they need to know? Well, in this context, a game tester should know that a control rig is basically a construct controlling properties to drive animation. This gives developers the ability to animate what are called skeletal meshes. Think of this as literally like a skeleton made up of a bunch of polygons. This construct has a hierarchical set of interconnected “bones” (joints, really) which can be used to animate the vertices of those polygons.

Inverse kinematics, put simply, is the use of kinematic equations to determine the motion needed to go from one position to another. In the context of game engines, you might have what’s called an “inverse kinematic rig,” which handles not just movement but also joint rotation. If you play any of the rebooted Lara Croft games, starting with 2013’s Tomb Raider, you’ll see this play out in action as Lara moves through tight spaces and puts her hand up against walls. Notice here how Lara’s one (upper) hand is clipping through the rocks but her other is not.

Okay, so maybe with that example, now, as a tester, you see how the underlying mechanics manifest. But … so what, right? After all, how do you test for all such conditions like this? Or do you? The answer is: you don’t. You really can’t. At least not in some systematic, repeatable way. But you do look for the general verisimilitude of those mechanics in action.
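To make the inverse-kinematics idea a little more concrete, here is a minimal two-bone IK solver in Python. This is not engine code — the function names are my own and the problem is reduced to 2D — but it is the same law-of-cosines math a control rig applies to, say, a thigh-and-shin chain so the foot lands on a target point.

```python
import math

def two_bone_ik(l1, l2, tx, ty):
    """Solve a planar two-bone IK chain (e.g. thigh + shin).

    Returns (hip_angle, knee_angle) in radians so the chain's end
    effector (the foot) lands on target (tx, ty). Root is at origin.
    """
    d = math.hypot(tx, ty)
    # Clamp to the reachable annulus so an out-of-range target fully
    # extends (or folds) the leg instead of producing a math error.
    d = max(abs(l1 - l2), min(l1 + l2, d))
    # Law of cosines gives the interior angle at the knee.
    cos_knee = (l1**2 + l2**2 - d**2) / (2 * l1 * l2)
    knee = math.pi - math.acos(max(-1.0, min(1.0, cos_knee)))
    # Hip angle = direction to target, corrected by the triangle's
    # angle at the root joint.
    cos_root = (l1**2 + d**2 - l2**2) / (2 * l1 * d)
    root = math.atan2(ty, tx) - math.acos(max(-1.0, min(1.0, cos_root)))
    return root, knee

def forward(l1, l2, hip, knee):
    """Forward kinematics: where does the foot actually end up?"""
    kx, ky = l1 * math.cos(hip), l1 * math.sin(hip)
    return kx + l2 * math.cos(hip + knee), ky + l2 * math.sin(hip + knee)
```

A tester doesn’t need to write this, but knowing that the foot position is *solved*, not animated, explains why sloped ground, clamped targets, and out-of-reach surfaces are exactly where the artifacts show up.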
Here’s one of many, many examples.

Watching that snapshot play out in action, you see how Lara moves and how her movements are constrained by the environment around her. This isn’t simple! This takes a lot of dedication to trying out Lara’s movements in various ways in the environment and noting broad classes of problems where her joints don’t rotate in quite the correct way or her body clearly has to contort in a way that no human body could.

From a testing standpoint, we can look broadly for clipping issues or odd bone kinematics and then have developers examine that scene or level to investigate the priority lines and control bands that dictate where the character can place their body and how their body movement conforms to the surrounding environment, such as ground and walls. We can try things like: “Hey, I’m going to sit in the chair and see how this looks.” And if that looks terrible, developers may start looking at something like this:

Now, let’s consider an example here that I’m going to ask you to think quite a bit about. Consider the game Jedi: Fallen Order (November 2019), where your character, Cal Kestis, spends a great deal of his time running around, particularly doing wall-running. Not only is this visually important, in terms of the character actually looking like they’re running against a wall, but mismatches in the geometry and how your character’s body responds to that geometry can make environment puzzles like this very frustrating for the player. For example, it can lead to cases where the player is visually on the wall, but control structures in the geometry don’t recognize that, and so the player falls because the game thinks the player has undershot or overshot the wall.

I want to provide an extended example, so I managed to go through one sequence that shows a lot of this in action. Consider the following from the game:

Consider all of the possible issues in geometry that are being dealt with there!
I want you to think about what it means to test this sequence, given all the timing-related aspects that are inherent in what’s going on and the environment physics that are being modeled, such as ice and air updrafts and rope physics.

Here’s where this becomes tractable. Many of the issues that would or could occur are part of what we might call equivalence classes. It’s like if we had a bunch of styled buttons on a web page. We can either individually style them or we can use a CSS class to style them as a group. If we can style those buttons as a group by applying a class attribute, we have a better chance of all the buttons reflecting the styling we want (including updates to the styling) and thus we, ideally, have an equivalence class for testing. If one button is styled incorrectly, they probably all are.

The same applies with gaming engines. We can apply physics and control bands and priority lines across the range of “runnable walls” or “ice slides” or “hanging ropes” and that dramatically simplifies the testing experience and what we have to look for.

Also, something I want to note from a personal standpoint: some parts of both Tomb Raider and Jedi: Fallen Order were very frustrating for me. But, that said, it was not frustration born of not trusting the game or how its environment was modeled. Rather, it was just the timing that I knew I had to get right. By which I mean I trusted that the game was getting the details right, so any failings were likely mine as I learned how to time my movements.

And that’s critical! Gamers being able to trust the experience is something that testers can apply as a guiding heuristic. Actually, that’s the case whether the application you’re testing is a game or not. But it’s crucial in games where players will invest many hours — sometimes hundreds of hours — in the game.

Let’s consider that character movement for a bit.
Key to all of that realistic-looking movement is that developers can use specific skeletons around the various characters. Here you can see the blur in the image, which shows the rig is in motion and how that rig — acting as a skeleton for the actual character — moves. So developers can put different skeletons onto characters or alter the existing skeleton. But then testing has to make sure that animations still line up correctly, that weapons or other items appear in the character’s hands correctly, and so on. A simple change can lead to something like this:

And keep in mind, this isn’t just a case of positioning the weapon in the hand in one context; it’s a case of making sure the weapon appears in the character’s hand through all possible motions of the underlying skeleton. So when the player decides to perform an attack, the animation must show the weapon reliably in their hand.

Fidelity Around Mechanics

Developers often use hit interfaces, and this is so game characters and game objects can implement hit functionality. So a given weapon or item has a certain boundary around it which dictates when a “hit” occurs between that item and some other item. This, by itself, is pretty simple. A weapon, or item, has a hit interface that allows the developer to establish a boundary by which collisions with other objects will make sense.

That can go badly awry, though, when the hit areas are not modeled well. It can be the case of an enemy who is clearly way out of weapon range still being able to inflict damage on the player. Consider this from Dark Souls (September 2011):

Here I’m dodging some enemies who are both swinging at me. Their swords have defined hit interfaces around them. In this case my dodge, combined with their distance and the type of weapon, means there should be no hit registered against me. Let’s check out what this looks like in action with Dark Souls 3 (March 2016), with a little test instrumentation added.
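Before looking at that footage, it’s worth seeing how small the core of a hit interface really is. At its simplest, a “hit” is an overlap test between two bounding volumes. Here is a hedged sketch in Python — the class and function names are my own, not any engine’s API — using axis-aligned bounding boxes:

```python
class AABB:
    """Axis-aligned bounding box: a crude 'hit interface' boundary."""
    def __init__(self, min_x, min_y, min_z, max_x, max_y, max_z):
        self.min = (min_x, min_y, min_z)
        self.max = (max_x, max_y, max_z)

    def overlaps(self, other):
        # Two boxes intersect only if they overlap on every axis.
        return all(self.min[i] <= other.max[i] and other.min[i] <= self.max[i]
                   for i in range(3))

def hit_registered(weapon_box, target_box):
    """A 'hit' occurs when the weapon's boundary intersects the target's."""
    return weapon_box.overlaps(target_box)
```

A dodge that moves the target’s box out of the weapon box’s swept path should make `hit_registered` return False — which is exactly the property the Dark Souls clips are probing. Real engines use tighter volumes (capsules, per-bone hitboxes), but the testing question is the same.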
Here you can see the game in action but with various skeletal mesh instrumentation as well as what are called “hitboxes” shown. If you’re seeing potentially odd things there, then that’s a good thing. There are a lot of issues with the hitboxes in the game, but they’re not so much with the hitboxes themselves as with frame timing. What you see, when testing the game, is that the hitbox for the player is generally good. The weapon hitboxes are also generally good, although they could probably stand to be flattened a bit. The problem in the game is the timing around when those weapon hitboxes start, from the standpoint of frames. Two-handing the great sword, which is part of what you’re seeing in the video, can hit half an animation earlier if characters are too close to each other. There’s a ton we could talk about and investigate here, but let’s keep moving along.

From a testing standpoint, a lot of what I just described gets tied into what’s called chaos destruction. This is terminology used within the context of physics engines. Some games are striving for fully destructible environments. That scene is one from Battlefield 2042 (October 2021) that shows some of the destructibility of environment meshes.

How all this works is that developers use chaos destruction to fracture meshes. As a note, I’ve said “mesh” a lot in this post. Just know that a mesh, in this context, is a piece of geometry that consists of a set of polygons. When you fracture meshes, you essentially model the dimensions along which something disintegrates or demolishes to some extent. Physics fields are used to apply that destruction to specific objects. You can use various levels of fracturing. Here’s an example of a character in action, shattering some pots:

So that’s one thing for testers to be aware of: destructible environments and making sure that destruction physics seems to work in a way that makes sense.
“Makes sense” here means given how you are hitting the objects and what’s around the objects that may cause the fragmented parts to bounce off walls or fall off cliffs.

This idea of hit interfaces and even destructible meshes also applies in combat, where we have what are called directional hit reactions. Here you can see that in action a bit as a character engages with an enemy. Visual elements there are showing how the game engine is helping us understand how the directional hit mechanics work. This leads to a lot of other equivalence classes that allow you to reduce the overall testing effort but still apply it to specific situations. For example, you might try applying this against human-sized opponents, as in the visual, but also against much smaller opponents and much larger opponents.

Behind the scenes, developers use vector operations to calculate the hit direction, and these vectors are actually modeled into the interactions. Developers use root motion animations so that enemies, when hit, stumble in the right direction. Root motion refers to the motion of a character that is based off animation from what’s called the root bone of the skeleton. The “root bone” is a reference to a defined origin point (0,0,0) with no rotation applied. So what ends up happening is we look at our root motion:

There’s a lot to be said here, but what’s crucial to understand is how all of the above really swirls around the mechanics of how characters move or animate and the dynamics that this allows in the game, such as interaction with the environment. In the above visual, you have armature bones that are used to move the body, which has its root bone in the hip, while the inverse kinematic controllers around the arms and legs perform the movement.
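The vector operations behind directional hit reactions come down to two products: a dot product with the victim’s forward vector decides front versus back, and a 2D cross product decides left versus right. A minimal sketch in Python (my own function names, reduced to 2D for brevity — a sketch of the math, not any engine’s implementation):

```python
import math

def hit_direction(victim_pos, victim_forward, attacker_pos):
    """Classify an incoming hit as 'front', 'back', 'left' or 'right'.

    victim_forward is a unit (x, y) vector; positions are (x, y) pairs.
    The result picks which directional hit-reaction animation to play.
    """
    tx = attacker_pos[0] - victim_pos[0]
    ty = attacker_pos[1] - victim_pos[1]
    n = math.hypot(tx, ty)
    tx, ty = tx / n, ty / n
    dot = victim_forward[0] * tx + victim_forward[1] * ty    # front vs. back
    cross = victim_forward[0] * ty - victim_forward[1] * tx  # left vs. right
    if dot >= math.cos(math.radians(45)):
        return "front"
    if dot <= -math.cos(math.radians(45)):
        return "back"
    return "left" if cross > 0 else "right"
```

Once you know the classification is just a pair of thresholds on two products, the equivalence classes for testing become obvious: one attack from each quadrant, plus attacks near the 45-degree boundaries where the wrong stumble animation is most likely to fire.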
All of what I just described was certainly modellable within the Dark Souls context I showed earlier, but this also brings up a good point: turning on all possible instrumentation to view what’s happening can actually compromise testing quite a bit, because it can be harder to reason about what’s going on. As with the use of any such tooling to support testing, you do have to be careful in how much of it you use.

A bit off-topic and yet certainly related, let’s consider client and server discrepancies. Consider this:

What that visual is showing you is a case where the client (left part of image) shows that the player picked up some object, in this case a sword, but that’s not being reflected on the server (right part of image), where the sword is shown as clearly still on the ground. So this means the server is not recognizing damage correctly because, as far as it’s concerned, the player has not picked up the weapon that will cause damage. This can obviously be very important in any multiplayer context, such as PvP. In the Dark Souls context, situations like this would be even more rage-inducing than the game itself for many players.

Fidelity Around Dynamics

The very same type of math and logic talked about above can also handle another specific aspect of game dynamics. Many games give enemies the ability to see the character if the player moves into their line of sight, which is usually framed as a “light cone” that represents vision. This kind of thing is really important in stealth-based games that rely on you navigating your way around patrolling enemies. Consider games like Dishonored (October 2012), Shadwen (May 2016), or Styx: Master of Shadows (October 2014), where being stealthy is a core mechanic. At its simplest level, this means enemies may get hostile and chase you but then lose interest if you run far enough away or break their line of sight.
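Before looking at that in a specific game, note that the vision cone itself is, once again, mostly a dot product: the player is “seen” when they are within range and the angle to them is inside the cone’s half-angle. A hedged Python sketch (my own names; real engines would add a raycast for walls and other occluders):

```python
import math

def in_vision_cone(guard_pos, guard_facing, half_angle_deg, view_dist, player_pos):
    """True if the player falls inside the guard's 'light cone' of vision.

    guard_facing is a unit (x, y) vector. Occlusion (walls) would need a
    separate line-of-sight raycast; this tests only the cone itself.
    """
    dx = player_pos[0] - guard_pos[0]
    dy = player_pos[1] - guard_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return True           # standing on top of the guard
    if dist > view_dist:
        return False          # out of range entirely
    # Angle test: compare the normalized dot product to cos(half-angle).
    dot = (dx * guard_facing[0] + dy * guard_facing[1]) / dist
    return dot >= math.cos(math.radians(half_angle_deg))
```

The testing equivalence classes fall straight out of the three checks: just inside and outside the range, just inside and outside the angle, and positions where occlusion should — but might not — break detection.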
What you’re seeing in that visual is that the character has moved into the cone of vision of an enemy (represented on the ground by the red sphere in between them) and now the two characters are linked by a line of sight, represented by red spheres around their midsections. This can now be used as an input to other game mechanisms, such as a pathfinding mechanism that allows the enemy to pursue the character.

Now I’ll ask you to consider a game series like Hitman. Specifically, consider the following sequence:

This is a mission in Marrakesh where the idea is you want to get the camera man’s outfit as a disguise. There are various characters in the game, some of whom will entirely ignore you and others who will actively challenge you if they spot you. Sometimes they challenge you only if you are in an area that you are not supposed to be in, given your choice of disguise.

Consider the sequence you see in that video. Don’t look at it as a game player. Look at it as a game tester. Think about what it means to test just this particular sequence in a game that is filled with many such sequences. And as you look at that example, realize that it — like the Jedi: Fallen Order example shown earlier — shows all of what I talked about above. So in these snippets, you are seeing the breadth of testing, or at least what testing can be, in this context.

Fidelity Around Appearance

Let’s look at one more area. A lot of what I mentioned above shows a lot about graphics and geometry and how the mathematics works in line with that. One of the core killer quality issues for games is frame rate. Frame rate is how the game effectively appears to the player, in terms of how smooth it plays or how jaggy the experience is. Consider that Stranger of Paradise: Final Fantasy Origin (March 2022) had a whole lot of frame rate — not to mention resolution and visual — issues. This, like Cyberpunk 2077, seemed to manifest more on consoles than it did on PC.
Testing actually shows that this is because the models the developers were using are extremely unoptimized. One example of this was thirty-megabyte geometries for some of the most common enemies. Consider this:

By the way, some of these model images can be opened in a new tab so you can see a bit of a larger view if you want to check the numbers I’m about to tell you about. That’s the game’s bat model. If you saw this as a tester, what might you think? Perhaps nothing — if you don’t know what to look for. A key thing would be those “total verts” and “total tris”. Here these are 350,768 and 335,070, respectively. Is that bad or good? As a tester, even if you’re not responsible for answering that, you should want to be able to answer it. That bat model serves as a specification for that particular enemy. And that specification is showing something way out of whack.

But let’s break this down a little bit. “Verts” is short for vertices. A tri (triangle) is the simplest form of polygon. Put very simplistically, when triangles share edges in a strip, 1 tri = 3 verts, 2 tris = 4 verts, and 3 tris = 5 verts. A vertex, at its most basic, is simply a point in space; it’s an X, Y, and Z coordinate that gives a position. A triangle is the flat shape you get when you join up three vertex positions. This is what forms the shell of a three-dimensional model that players will see and interact with in game.

In any given game, each vertex traditionally has a lot more information attributed to it, such as color, normal (the direction that it points in, for lighting), tangents and sometimes binormals (for normal mapping), and various UV coordinates (for texture mapping). What this means is that vertex count is one of the most precise ways to convey mesh complexity. For game optimization, the smaller the better for either triangle or vertex count. That said, the less you have, the lower the quality of the visual. So you want to optimize while maintaining an acceptable visual fidelity.
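That arithmetic is easy to sketch. In a triangle strip each added triangle reuses two earlier vertices, so verts = tris + 2; and each vertex carries a fixed attribute payload, so vertex count translates almost directly into mesh memory. A rough Python sketch — the byte sizes below are one common attribute layout I’m assuming for illustration, not this game’s actual one:

```python
def strip_verts(tris):
    """Triangle-strip vertex count: 1 tri = 3 verts, 2 tris = 4, 3 tris = 5..."""
    return tris + 2

def vertex_bytes(position=12, normal=12, tangent=12, uv=8, color=4):
    """Bytes per vertex for a typical layout: float3 position, float3 normal,
    float3 tangent, float2 UV, RGBA8 color."""
    return position + normal + tangent + uv + color

def mesh_megabytes(verts, per_vertex=None):
    """Rough vertex-buffer size in MB (indices and textures not included)."""
    per_vertex = vertex_bytes() if per_vertex is None else per_vertex
    return verts * per_vertex / (1024 * 1024)
```

Run against the bat’s 350,768 verts, this assumed 48-byte layout alone comes to roughly 16 MB of vertex data before indices or textures — exactly the kind of back-of-the-envelope number that lets a tester flag a model for a closer look.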
Let’s consider another model from the game:

Here the verts are 224,901 and the tris are 314,264, which, even for a seemingly more complex overall model, is simpler than our bat. How about this one: verts are 273,623 and tris are 144,537. One more, which is a more complete model: here our verts are 234,263 and the tris are 393,105.

Now, given all you just looked at above with these numbers, what stands out to you? What stood out to testers is that the bat — a very common enemy, and thus there are many of them in the game — has way more verts and tris than any of the other ostensibly more complex models. This shows a very poorly optimized model. And since that model is serving as a specification, and since testers should be good at finding gaps in specifications, this is something testers should be able to help developers spot. The bat isn’t even the worst offender. There’s a boss model in the game that’s ninety megabytes — and that’s the geometry alone, with no textures or anything applied to it — and it has 1.8 million vertices.

From a pure resource standpoint, the number of vertices will affect the memory size of your mesh in RAM but will be rather negligible in terms of GPU speed. Whereas the number of triangles affects GPU utilization more than it affects memory size. From an experience side of things, this can give us clues as to how the user will experience the game on certain graphics cards or at certain resolutions.

Now, how does this actually manifest? Well, a character called The Minister in the game drops an NVIDIA GeForce RTX 3090 — no slouch of a graphics card — to twenty-five frames per second, and this is largely because of the character’s fur coat. You can see that bit of testing by the following:

Consider another scene from the game where the graphics settings were specifically put on minimum with no shadows. Shadows are one of the known performance degraders.
Here is how that scene plays out:

The frames per second drop terribly, going to 8 at one point, which, needless to say, is pretty awful. If you watch the video, you can likely discern exactly what’s causing the performance hit there.

The Breadth is Large

What I hope you saw here is that just the breadth of game testing, much less the depth, makes this an exciting field. How exciting it is, however, depends in my experience on whether you, as a tester, are fostering testing as a specialty within the environment, such that you are engaging in design and development, or are just doing so-called “play-testing.”

Games, just like any applications, have many amplifiers and degraders of quality that testers have to be looking for. Likewise, just as in any testing, testers want to be looking for how much testability is a part of the process. Testability, at minimum, means controllability and observability. So consider that in relation to some of what I showed you above. Developers may look at all of that in the context of their game engine, but that game engine is still an abstracted interface from what the player will actually be experiencing. Let’s even approach this breadth from yet one more angle.

Player Emotion and Trust

Game mechanics interact to generate events, which in turn provoke emotions in players. To cause emotion, an event must signal a meaningful change in the world. In fact, to provoke emotion, an event must change some human value. A human value, for our context, is anything that is important to people that can shift through multiple states. So consider how this might play out in a given game. Here I’ll point you to a review of Dishonored 2. A key part from that review is:

I’m surprised when a guard is immolated after I shoot him with a sleep dart. But it happens for a reason. In Dishonored 2, certain bottles of alcohol burst into flame when smashed—a trick useful for burning down the nests of Karnaca’s parasitic bloodflies.
This is a universal rule that exists outside of the player’s direct involvement—a rule that can trigger when, for instance, a recently tranquilized guard drops their drinking glass onto a bottle. It’s not about realism—this is a game in which one of the main characters has a parkour tentacle—but it works, and feels immersive, because everything has its own defining laws within the fiction. The biggest joy of Dishonored 2 is in discovering these systems, and manipulating them to your own ends. That wouldn’t work if you couldn’t trust in its simulation of the world.

This is another area of trust. In this case, the player is trusting in the simulation of the world and not just the geometry that underlies that world. See, once again, how the breadth works? The geometry of the world morphs into the simulation of the world! This is a case where testers can act as great user advocates. You try things out and see what experiences you have and you can encourage the design and development towards that “bumping the lamp” experience I mentioned in my above-referenced post. As another bit from that review:

The first time I play the opening mission, I kill my target. Later, a guard announces to his men that their leader is dead. The second time through, I again kill my target, but hide his body in a secret room, locking the door behind me. This time, the guard announces that their leader is missing. With no way to access the room containing his corpse, his fate remains a mystery. It’s a tiny thing—a single voice line—but it builds that trust. It would make sense for the game to treat dead or alive as a binary state, but Dishonored 2 knows that these details are important. It respects your ingenuity, acknowledging when you’ve done something clever.

This is a game that was very well tested. You can see an article — “Dishonored 2 QA testers are doing things devs didn’t even think of” — that amplifies that very point.
This, my friends, is the fun and the challenge of testing in a game context. But, crucially, we need more test specialists in this context, not just “play-testers.”

One thought on “The Breadth of the Game Testing Specialty”

1. Cards on the table: I don’t do games. But I listen to people who do, and I know people who started out in games testing. I also talk about testing to non-testers, and there is a bit of a sense amongst non-testers that games testing must be great, “because you get paid to play games all day”. And we both know that to be untrue — though when I was job-hunting, I came across some games firms who promoted that view, if only to get away with paying the smallest amount possible for testers. Which may explain some of the test failures you’ve seen. The other thing I notice is that some of the character placement and motion issues you’ve identified are things that my other half, who lectures in digital media and art, tries to address in her teaching. So it makes me wonder: what qualifications do the developers of these games have? Did they come to games development via an IT-oriented route, or do they have qualifications and/or experience in digital animation for film and TV?
Seminars: F. N. Pakhomov, Dilators, Reflection, and Forcing: A Proof-Theoretic Analysis of $\Pi^1_1-\mathsf{CA}_0$ Abstract: In this talk I will sketch a new approach to ordinal analysis of $\Pi^1_1-\mathsf{CA}_0$ that uses the interplay of the methods of functorial proof theory and iterated reflection. An interesting feature of the approach is that it completely sidesteps predicative cut elimination. Language: English
Excel 2007: Using What-If Analysis

Lesson 15: Using What-If Analysis

The real power in Excel lies in its ability to perform multiple mathematical calculations for you. One of the tools in Excel that you can use to perform these calculations is a Data tool called what-if analysis. What-if analysis allows you to see the effect different values have in formulas. Have you ever thought, "What interest rate do I need to qualify for to have a car payment of $400 on the car I want?" This question can be answered using what-if analysis. In this lesson, you will learn how to use a what-if analysis tool called Goal Seek.

What-if analysis

Download the example to work along with the video.

You need a loan to buy a new car. You know how much money you want to borrow, how long a period of time you want to take to pay off the loan (the term), and what payment you can afford to make each month. But what you need to know is what interest rate you need to qualify for to make the payment $400 per month. In the image below, you can see that if you didn't have interest and just divided this $20,000 into 60 monthly payments, you would pay $333.33 per month. The what-if analysis tool will allow you to easily calculate the interest rate.

Where did the formula come from?

The formula that appears in cell B5 in the example image is a function. It isn't part of the what-if analysis tool, so you will need to understand functions thoroughly before you use what-if analysis. For the example scenario described above, you need a formula that will calculate the monthly payment. Instead of writing the formula yourself, you can insert a function to do the calculation for you.

To insert a payment function:
• Select the Formulas tab.
• Click the Insert Function command. A dialog box appears.
• Select PMT.
• Click OK. A dialog box appears.
• Insert your cursor in the first field. A description of the needed information appears at the bottom of the dialog box.
• Select the cell in the spreadsheet with the needed information.
• Insert your cursor in the next field. A description of the needed information appears at the bottom of the dialog box.
• Select the cell in the spreadsheet with the needed information.
• Repeat the last two steps until all of the necessary information is entered in the dialog box.
• Click OK.

What-If Analysis tools

There are three What-If Analysis tools that you can use. To access these, select the Data tab and locate the What-If Analysis command. If you click this command, a menu with three options appears. Goal Seek is useful if you know the needed result but need to find the input value that will give you the desired result. In this example, we know the desired result (a $400 monthly payment) and are seeking the input value (the interest rate).

Goal Seek

To use Goal Seek to determine an interest rate:
• Select the Data tab.
• Locate the Data Tools group.
• Click the What-If Analysis command. A list of three options appears.
• Select Goal Seek. A small dialog box appears.
• Select the cell that you want to set to a specific value. In this example, we want to set B5, the Payment cell.
• Insert the cursor in the next field.
• Enter a value in the value field. In this example, type -$400. Since we're making a payment that will be subtracted from our loan amount, we have to enter the payment as a negative number.
• Insert the cursor in the next field.
• Select the cell that you want to change. This will be the cell that tries various input values. In this example, select cell B4, which is the interest rate.
• Click OK.
• Then, click OK again. The interest rate appears in the cell. This indicates that a 7% interest rate will give us a $400-a-month payment on a $20,000 loan that is paid off over five years, or 60 monthly payments.

Use the Car Loan workbook to complete this challenge.
• Use Goal Seek to solve the problem described in the text and video lessons.
• Create a problem of your own that you might use Goal Seek to solve. Try it.
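Outside Excel, the same Goal Seek calculation can be sketched in a few lines of Python: compute the PMT formula directly, then bisect on the interest rate until the payment hits the target. This mirrors what Goal Seek does iteratively; note that the precise rate it converges to may differ slightly from the rounded 7% figure shown in the lesson, depending on compounding assumptions. The function names are my own.

```python
def pmt(annual_rate, months, principal):
    """Monthly payment on a loan (Excel's PMT, with the sign flipped positive)."""
    r = annual_rate / 12.0
    if r == 0:
        return principal / months
    return principal * r / (1 - (1 + r) ** -months)

def goal_seek_rate(target_payment, months, principal, lo=0.0, hi=1.0):
    """Bisect on the annual rate until pmt() matches target_payment."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if pmt(mid, months, principal) < target_payment:
            lo = mid   # payment too low -> rate must be higher
        else:
            hi = mid   # payment too high -> rate must be lower
    return (lo + hi) / 2
```

For the lesson's numbers, `goal_seek_rate(400, 60, 20000)` lands in the 7-8% annual range, and plugging the solved rate back into `pmt` reproduces the $400 payment — exactly the round trip Goal Seek performs on cells B4 and B5.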
1248 - Frog's insects

A frog is going to starve to death when God gives it some insects. The insects are put inside a line of N plates, one insect per plate. At the beginning, the frog can jump onto any plate. Then the frog can only jump forward onto larger plates. The frog will eat up all the food in a plate when it jumps onto it. Please help the greedy frog eat as many insects as possible. Note that the frog can only jump forward.

There are multiple cases, ended with EOF. Every case has two lines. The first number of each case is N (3 <= N <= 100000). The following N positive integers si (1 <= si <= 1000000000) are the sizes of the N plates.

For each test, output one line containing a single integer: the maximum number of insects the frog can eat.

sample input sample output
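Since each eaten plate is one insect and the frog must keep moving forward onto strictly larger plates, the answer is the length of the longest strictly increasing subsequence of the plate sizes. An O(N log N) sketch in Python (one reasonable approach, comfortably within the N <= 100000 bound):

```python
import bisect

def max_insects(sizes):
    """Length of the longest strictly increasing subsequence of sizes."""
    # tails[k] = smallest possible last element of an increasing
    # subsequence of length k + 1 seen so far.
    tails = []
    for s in sizes:
        # bisect_left enforces STRICT increase: equal sizes replace,
        # they do not extend.
        i = bisect.bisect_left(tails, s)
        if i == len(tails):
            tails.append(s)
        else:
            tails[i] = s
    return len(tails)
```

For example, with plates [1, 3, 2, 4] the frog can eat 3 insects (plates 1, 2/3, then 4), while with all-equal plates it eats only 1, since it may only jump to strictly larger plates.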
Quantum computers can work.

Laboratory Equipment points the way for the next big breakthrough in thinking machines:

Many quantum algorithms require that particles’ spins be “entangled,” meaning that they’re all dependent on each other. The more entanglement a physical system offers, the greater its computational power. Until now, theoreticians have demonstrated the possibility of high entanglement only in a very complex spin chain, which would be difficult to realize experimentally. In simpler systems, the degree of entanglement appeared to be capped: beyond a certain point, adding more particles to the chain didn’t seem to increase the entanglement.

This month, however, in the journal Physical Review Letters, a group of researchers at MIT, IBM, Masaryk Univ. in the Czech Republic, the Slovak Academy of Sciences and Northeastern Univ. proved that even in simple spin chains, the degree of entanglement scales with the length of the chain. The research thus offers strong evidence that relatively simple quantum systems could offer considerable computational resources.

In other words, it’s possible to get sophisticated results from a simple starting point… something beyond the yes/no of conventional circuit switches.

But in the new paper, MIT professor of mathematics Peter Shor, his former student Ramis Movassagh, who is now an instructor at Northeastern, and their colleagues showed that unbounded entanglement is possible in chains of particles with only three spin states — up, down and none. Systems of such particles should, in principle, be much easier to build than those whose particles have more spin states. If two particles are entangled, then performing a measurement on one tells you something about the other. For instance, if you measure the spin of an electron orbiting a helium atom, and its spin is up, the spin of the other electron in the same orbit must be down, and vice versa.
For a chain of particles to be useful for quantum computing, all of their spins need to be entangled. If, at some point, adding more particles to the chain ceases to increase entanglement, then it also ceases to increase computational capacity. To show that entanglement increases without bound in chains of three-spin particles, the researchers proved that any such chain with a net energy of zero could be converted into any other through a small number of energy-preserving substitutions. The proof is kind of like one of those puzzles where you have to convert one word into another of the same length, changing only one letter at a
{"url":"https://guildofscientifictroubadours.com/2012/11/29/quantum-computers-can-work/","timestamp":"2024-11-03T04:16:22Z","content_type":"text/html","content_length":"54491","record_id":"<urn:uuid:c7257ce5-50dc-4aa4-bd1f-49133bee3b92>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00332.warc.gz"}
Binary Coded Ternary and Its Inverse Part of the Ternary Manifesto by Douglas W. Jones THE UNIVERSITY OF IOWA Department of Computer Science Work started Jan. 6, 2016. If binary computers and ternary computers are to communicate, we will need to represent binary data on ternary machines, and vice versa. Binary-coded ternary (BCT) is a representation allowing binary computers to manipulate ternary data, while ternary-coded binary (TCB) allows ternary computers to manipulate binary data. If ternary computers are to survive in a binary world, they will almost certainly include interfaces to binary peripheral devices. Ternary software that manipulates such interfaces will almost certainly use some form of ternary-coded binary for both input and output. During the development of ternary computers, binary computers will almost certainly need to simulate ternary operations. For this, and for an eventual future when binary computers may actually need to communicate with ternary devices, we will need binary-coded ternary. In binary, 2 bits encode 4 values, while in ternary, one trit encodes 3 values. As such, if we use 2 bits per binary-encoded ternary trit, there are 4 × 3 × 2 = 24 distinct binary encodings of ternary (an ordered choice of 3 of the 4 bit pairs, one per trit value). Several of these have been explored in the past. Frieder and Luk, 1975 used an ingenious encoding where each balanced ternary number was represented by a pair of binary numbers [a,b], where the one bits in a represented the positive trits of the ternary number, while the ones in b represented the negative trits. Both balanced and unsigned ternary addition using that representation require 20 arithmetic and logic operations. This representation was used in the TERNAC computer, a ternary architecture implemented by microcode on a conventional binary microengine. In TERNAC, each ternary word was composed of a pair of binary words, one holding all of the positive half-trits and one holding all of the negative half-trits.
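As an illustration of the Frieder and Luk pair-of-words encoding just described, here is a sketch in Python (the function names are made up for this note, and trit lists stand in for hardware words):

```python
def frieder_luk_pack(trits):
    """Pack balanced trits (MSB first, each in {-1, 0, +1}) into the
    pair of binary words [a, b]: one bits in a mark the positive
    trits, one bits in b mark the negative trits."""
    a = b = 0
    for t in trits:
        a = (a << 1) | (t == 1)
        b = (b << 1) | (t == -1)
    return a, b

def frieder_luk_value(a, b):
    """Recover the balanced ternary value: bit position i contributes
    +3**i if set in a and -3**i if set in b."""
    value, weight = 0, 1
    while a or b:
        value += weight * ((a & 1) - (b & 1))
        a, b, weight = a >> 1, b >> 1, weight * 3
    return value
```

For example, the trits [+1, -1, 0] (balanced ternary for 9 - 3 = 6) pack into a = 100 and b = 010.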
If we undo this packing and place the two bits side by side, this encoding can be described as:

  Balanced   BCT
     –1      10
      0      00
     +1      01

Parhami and McKeown, 2013, recognized the utility of multiple encodings. The goal of their work was direct implementation of binary-coded ternary operations using binary logic gates, so different encodings could be used at different points in a circuit. Their standard encoding was a superset of the encoding used by Frieder and Luk, where both 00 and 11 were used to represent 0. There are several binary-coded decimal (BCD) representations. By far the most common uses the ten binary values from 0000 to 1001 to represent the digits from 0 to 9. The second best known, excess-3 BCD, uses the consecutive values 0011 to 1100 in binary order to represent the digits from 0 to 9. Here, we suggest that the natural binary encoding of ternary should use the binary values from 00 to 10, in order, to represent the digits from 0 to 2:

  Unsigned   Balanced   BCT
      0         –1      00
      1          0      01
      2         +1      10

Ternary-coded binary (TCB) has been mentioned in passing by several authors, but specific definitions are scarce. There are at least three natural ternary encodings of binary values, one relating primarily to logical values and two to numeric values. In each of these mappings, each bit of the binary value is encoded as one trit of the ternary value. For ternary logical values, we map true to true and false to false, without using the intermediate unknown value. For numeric values, we use the ternary digits 0 and 1, noting that the encodings of 0 and 1 in balanced ternary are different from the encodings in unsigned and 3's complement ternary:

       Ternary              Binary
  Unsigned  Balanced   Logical  Unsigned  Balanced
      0        –1         0        0         —
      1         0         —        1         0
      2        +1         1        —         1

Given a ternary-coded binary value using the unsigned encoding, we can convert it to the logical form by doubling it or adding it to itself using unsigned addition.
Similarly, we can convert the value to balanced form by adding one to each trit, using either an unsigned or balanced add. If we start with a balanced ternary-coded binary number, conversion to unsigned form is easy, just subtract one from each trit. Conversion to logical form is more awkward using familiar operators. Similarly, if we start with logical form, conversions to the other forms are awkward if we confine ourselves to familiar operators. The above considerations suggest that ternary-coded binary data should, by default, be introduced in unsigned form and then converted as needed to other forms. Given an unsigned integer encoded in binary-coded ternary, conversion to binary is straightforward, but the optimal code for this conversion is quite obscure. The following iterative code assumes a global constant TRITS giving the number of trits in the ternary word to be converted, and it assumes that the default integer size is sufficient to hold both the binary coded ternary value being converted and the result of the conversion.

unsigned int bct_to_bin( unsigned int bct )
{
    unsigned int acc = 0;
    int trit;
    const int shift = 2 * (TRITS - 1);    /* position of the most significant trit */
    const unsigned int mask = 0x3 << shift;

    for ( trit = 0; trit != TRITS; trit++ ) {
        acc = (acc * 3) + ((bct & mask) >> shift);
        bct = bct << 2;
    }
    return acc;
}

This code is relatively slow because it processes all of the trits sequentially. If conversions are common, this can become a bottleneck, so it is interesting to look for faster alternatives. The key to faster code is to take advantage of the fact that we have a wide data path in our arithmetic unit that can handle multiple binary-coded ternary trits in parallel. The following code demonstrates this, assuming that unsigned integers hold 32 bits or 16 BCT trits.
unsigned int bct_to_bin( unsigned int bct )
{
    unsigned int high, low, acc;

    high = (bct >> 2) & 0x33333333;   /* high trits */
    low  = bct & 0x33333333;          /* low trits */
    acc  = low + (    3 * high);      /* binary-coded base 9 */
    high = (acc >> 4) & 0x0F0F0F0F;   /* high nibbles */
    low  = acc & 0x0F0F0F0F;          /* low nibbles */
    acc  = low + (    9 * high);      /* binary-coded base 81 */
    high = (acc >> 8) & 0x00FF00FF;   /* high bytes */
    low  = acc & 0x00FF00FF;          /* low bytes */
    acc  = low + (   81 * high);      /* binary-coded base 6561 */
    high = (acc >> 16) & 0x0000FFFF;  /* high halfword */
    low  = acc & 0x0000FFFF;          /* low halfword */
    acc  = low + ( 6561 * high);      /* now in binary */
    return acc;
}

The pattern illustrated above can be continued to higher precisions, with the total number of operations growing as O(log n) for converting n digits, so long as the underlying arithmetic unit can handle operands of that size. The multiplier in each step is the square of the multiplier in the previous step. The above code is not at the limit of optimization. Massalin, 1987 used an exhaustive search algorithm to find the optimal code sequence to convert binary-coded decimal to binary, and the algorithm "discovered" by his superoptimizer generalizes. Here is the resulting code, assuming we are converting a 16 trit unsigned BCT number to a 32 bit unsigned integer:

unsigned int bct_to_bin( unsigned int bct )
{
    unsigned int acc = bct;
    acc = acc - (((acc >>  2) & 0x33333333) * (    4 -    3));
    acc = acc - (((acc >>  4) & 0x0F0F0F0F) * (   16 -    9));
    acc = acc - (((acc >>  8) & 0x00FF00FF) * (  256 -   81));
    acc = acc - (((acc >> 16) & 0x0000FFFF) * (65536 - 6561));
    return acc;
}

We assume, of course, that the constant expressions used to compute the multipliers are evaluated by the compiler and that the multiplications are optimized using sequences of shift and add operations where such sequences are faster than native multiply instructions. The magic constants on each successive line above are the squares of the constants on the previous line.
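Both converters are easy to cross-check outside C. The following Python sketch (arbitrary-precision integers, so no overflow concerns; the reference encoder is ours, not from the original article) mirrors the naïve loop and the superoptimized version for 16-trit words:

```python
TRITS = 16

def bin_to_bct(n):
    """Reference encoder: two bits per ternary digit of n."""
    bct = 0
    for i in range(TRITS):
        bct |= (n % 3) << (2 * i)
        n //= 3
    return bct

def bct_to_bin_naive(bct):
    """Peel off the most significant trit on each iteration."""
    acc = 0
    shift = 2 * (TRITS - 1)
    for _ in range(TRITS):
        acc = acc * 3 + ((bct >> shift) & 0x3)
        bct = (bct << 2) & ((1 << (2 * TRITS)) - 1)  # stay in the word
    return acc

def bct_to_bin_fast(bct):
    """Superoptimized form: each line merges adjacent digit groups,
    and the multipliers square from one line to the next."""
    acc = bct
    acc -= ((acc >>  2) & 0x33333333) * (    4 -    3)
    acc -= ((acc >>  4) & 0x0F0F0F0F) * (   16 -    9)
    acc -= ((acc >>  8) & 0x00FF00FF) * (  256 -   81)
    acc -= ((acc >> 16) & 0x0000FFFF) * (65536 - 6561)
    return acc
```

Both versions agree with the reference encoder on a round trip; for instance, 5 (ternary 12) becomes the BCT word 0110 and converts back to 5.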
Performance: Some simple benchmarks illustrate the performance of this code. These tests were done on an Intel Core i3 3.06 GHz computer under the GCC 4.2 compiler with the default compilation options, using the 27-trit binary-coded ternary implementation from libter27.h. Generating and converting all integer values from 1 to 4,000,000 from binary-coded ternary to binary using the naïve for-loop algorithm given first above is 3.94 times slower than the optimized version. The overhead of incrementing the integer value over the tested range is the same in both cases; subtracting an approximation of this from both run times suggests that the actual speed ratio is closer to 7.4. Given an unsigned ternary-coded binary integer, and writing programs in an environment where all operations are naturally carried out in ternary, the algorithms for converting binary to ternary mirror the algorithms discussed above. Here, we assume that the data type unsigned ternary forces that data to be naturally represented in ternary, and that arithmetic operations on values of this type are the usual unsigned ternary operations. In addition, we assume that the & and | operators, and and or for binary numbers, are used for the min and max operators when applied to ternary operands. The subscript 3 on shift operators such as <<[3] indicates shifts applied to the trits of a ternary operand, and constants used for masking ternary values are given here in heptavintimal, indicated by a leading 0t (a horrible notation, but since we are using a C-like syntax here, it will suffice).

unsigned ternary tcb_to_ter( unsigned ternary tcb )
{
    unsigned ternary acc = 0;
    int bit;
    const int shift = BITS - 1;          /* position of the most significant bit */
    const unsigned ternary mask = 0t1 <<[3] shift;

    for ( bit = 0; bit != BITS; bit++ ) {
        acc = (acc * 2) + ((tcb & mask) >>[3] shift);
        tcb = tcb <<[3] 1;
    }
    return acc;
}

As with the BCT to binary conversion code, this code processes the digits sequentially, presenting a possible bottleneck.
Again, we can construct faster code by using the full parallel data path through the arithmetic unit. In the following, we assume that unsigned ternary operands are represented in 27 trits:

unsigned ternary tcb_to_ter( unsigned ternary tcb )
{
    unsigned ternary high, low, acc;

    high = (tcb >>[3] 1) & 0tN6N6N6N6N;   /* high bits */
    low  = tcb & 0tN6N6N6N6N;             /* low bits */
    acc  = low + (     2 * high);         /* ternary-coded base 4 */
    high = (acc >>[3] 2) & 0t82KV82KV8;   /* high 2-bits */
    low  = acc & 0t82KV82KV8;             /* low 2-bits */
    acc  = low + (     4 * high);         /* ternary-coded base 16 */
    high = (acc >>[3] 4) & 0tZ08V0ZK2Z;   /* high nibbles */
    low  = acc & 0tZ08V0ZK2Z;             /* low nibbles */
    acc  = low + (    16 * high);         /* ternary-coded base 256 */
    high = (acc >>[3] 8) & 0t0ZZV008ZZ;   /* high bytes */
    low  = acc & 0t0ZZV008ZZ;             /* low bytes */
    acc  = low + (   256 * high);         /* ternary-coded base 65536 */
    high = (acc >>[3] 16) & 0t0002ZZZZZ;  /* high halfword */
    low  = acc & 0t0002ZZZZZ;             /* low halfword */
    acc  = low + ( 65536 * high);         /* now in ternary */
    return acc;
}

This code can be superoptimized as follows:

unsigned ternary tcb_to_ter( unsigned ternary tcb )
{
    unsigned ternary acc = tcb;
    acc = acc - (((acc >>[3]  1) & 0tN6N6N6N6N) * (       3 -     2));
    acc = acc - (((acc >>[3]  2) & 0t82KV82KV8) * (       9 -     4));
    acc = acc - (((acc >>[3]  4) & 0tZ08V0ZK2Z) * (      81 -    16));
    acc = acc - (((acc >>[3]  8) & 0t0ZZV008ZZ) * (    6561 -   256));
    acc = acc - (((acc >>[3] 16) & 0t0002ZZZZZ) * (43046721 - 65536));
    return acc;
}
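The doubling pattern behind all four fast converters is base-independent. As a language-neutral illustration (a sketch in Python on digit lists, not the hypothetical ternary C dialect above):

```python
def fold_digits(digits, base):
    """Convert a digit list (most significant first) to an integer by
    merging adjacent digit pairs; the working base squares on each
    round, which is why the magic multipliers square at every step."""
    while len(digits) > 1:
        if len(digits) % 2:                 # pad to an even length
            digits = [0] + digits
        digits = [hi * base + lo for hi, lo in zip(digits[0::2], digits[1::2])]
        base *= base
    return digits[0] if digits else 0
```

For n digits this takes O(log n) rounds, matching the growth noted above: fold_digits([1, 2], 3) gives 5 and fold_digits([1, 0, 1, 1], 2) gives 11.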
{"url":"http://homepage.divms.uiowa.edu/~jones/ternary/bct.shtml","timestamp":"2024-11-01T20:42:06Z","content_type":"text/html","content_length":"18191","record_id":"<urn:uuid:70e5d5bc-7c7f-40ec-90ff-a797514c8c68>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00645.warc.gz"}
herr strathmann Together with Louis Ellam, Iain Murray, and Mark Girolami, we just published / arXived a new article on dealing with large Gaussian models. This is slightly related to the open problem around the GMRF model in our Russian Roulette paper from a while back. We propose a determinant-free approach for simulation-based Bayesian inference in high-dimensional Gaussian models. We introduce auxiliary variables with covariance equal to the inverse covariance of the model. The joint probability of the auxiliary model can be computed without evaluating determinants, which are often hard to compute in high dimensions. We develop a Markov chain Monte Carlo sampling scheme for the auxiliary model that requires no more than the application of inverse-matrix-square-roots and the solution of linear systems. These operations can be performed at large scales with rational approximations. We provide an empirical study on both synthetic and real-world data for sparse Gaussian processes and for large-scale Gaussian Markov random fields. Article is here. Unfortunately, the journal is not open-access, but the arXiv version is. Adaptive Kernel Sequential Monte Carlo And when your name is Dino Sejdinovic, you can actually integrate out all steps in feature space….. Ingmar Schuster wrote a nice blog post about our recently arxived paper draft. It is about using the Kameleon Kernel Adaptive Metropolis-Hastings proposal as an MCMC rejuvenation step in a Sequential Monte Carlo context. Kamiltonian Monte Carlo Together with Dino, Sam, Zoltan, and Arthur, I published an article on a project that combines two topics — the combination of which I find rather exciting: kernel methods and Hamiltonian Monte Carlo. Here is the abstract. We propose Kernel Hamiltonian Monte Carlo (KMC), a gradient-free adaptive MCMC algorithm based on Hamiltonian Monte Carlo (HMC).
On target densities where classical HMC is not an option due to intractable gradients, KMC adaptively learns the target’s gradient structure by fitting an exponential family model in a Reproducing Kernel Hilbert Space. Computational costs are reduced by two novel efficient approximations to this gradient. While being asymptotically exact, KMC mimics HMC in terms of sampling efficiency, and offers substantial mixing improvements over state-of-the-art gradient-free samplers. We support our claims with experimental studies on both toy and real-world applications, including Approximate Bayesian Computation and exact-approximate MCMC. Motivation: HMC with intractable gradients. Many recent applications of MCMC focus on models where the target density function is intractable. A very simple example is the context of Pseudo-Marginal MCMC (PM-MCMC), for example in Maurizio’s paper on Bayesian Gaussian Process classification. In such (simple) models the marginal likelihood $p(\mathbf{y}|\theta)$ is unavailable in closed form, but can only be estimated. Performing Metropolis-Hastings style MCMC on $\hat{p}(\theta|\mathbf{y})$ results in a Markov chain that (remarkably) converges to the true posterior. So far so good. But no gradients. Sometimes, people argue that for simple objects such as latent Gaussian models, it is possible to side-step the intractable gradients by running an MCMC chain on the joint space $(\mathbf{f},\theta)$ of latent variables and hyper-parameters, which makes gradients available (and also comes with a set of other problems such as high correlations between $\mathbf{f}$ and $\theta$, etc). While I don’t want to get into this here (we doubt existence of a one-fits-all solution), there is yet another case where gradients are unavailable. Approximate Bayesian Computation is based on the context where the likelihood itself is a black-box, i.e. it can only be simulated from.
Imagine a physicist coming to you with three decades of intuition in the form of some Fortran code, which contains bits implemented by people who are not alive anymore — and he wants to do Bayesian inference on the code-parameters via MCMC… Here we have to give up on getting the exact answer, but rather simulate from a biased posterior. And of course, no gradients, no joint distribution. State-of-the-art methods on such targets are based on adaptive random walks, as no gradient information is available per-se. The Kameleon (KAMH) improves over other adaptive MCMC methods by constructing locally aligned proposal covariances. Wouldn’t it be cooler to harness the power of HMC? Kamiltonian Monte Carlo starts as a Random Walk Metropolis (RWM) and then smoothly transitions into HMC. It explores the target at least as fast as RWM (we prove that), but improves the mixing in areas where it has been before. We do this by learning the target gradient structure from the MCMC trajectory in an adaptive MCMC framework — using kernels. Every MCMC iteration, we update our gradient estimator with a decaying probability $a_t$ that ensures that we never stop updating, but update less and less, i.e. $$\sum_{t=1}^\infty a_t=\infty\qquad\text{and}\qquad\lim_{t\to\infty}a_t=0.$$ Christian Robert challenged our approach: using non-parametric density estimates for MCMC proposals directly is a bad idea: if certain parts of the space are not explored before adaptation effectively stopped, the sampler will almost never move there. For KMC (and for KAMH too) however, this is not a problem: rather than using density estimators as proposals directly, we use them for proposal construction. This way, these algorithms inherit ergodicity properties from random walks. I coded an example-demo here. Kernel exponential families as gradient surrogates. The density (gradient) estimator itself is an infinite dimensional exponential family model.
More precisely, we model the un-normalised target log-density $\pi$ as an RKHS function $f\in\mathcal{H}$, i.e. $$\text{const}\times\pi(x)\approx\exp\left(\langle f,k(x,\cdot)\rangle_{{\cal H}}-A(f)\right),$$ which in particular implies $\nabla f\approx\nabla\log\pi.$ Surprisingly, and despite a complicated normaliser $A(f)$, such a model can be consistently fitted by directly minimising the expected $L^2$ distance of model and true gradient, $$J(f)=\frac{1}{2}\int\pi(x)\left\Vert \nabla f(x)-\nabla\log\pi(x)\right\Vert_{2}^{2}dx.$$ The magic word here is score-matching. You could ask: “Why not use kernel density estimation?” The answer: “Because it breaks in more than a few dimensions.” In contrast, we are actually able to make the above gradient estimator work in ~100 dimensions on laptops. Two approximations. The über-cool infinite exponential family estimator, like all kernel methods, doesn’t scale nicely in the number $n$ of data used for estimation — and here neither in the input space dimension $d$. Matrix inversion with costs $\mathcal{O}(t^3d^3)$ becomes a blocker, in particular as $t$ here is the growing number of MCMC iterations. KMC comes in two variants, which correspond to different approximations to the original kernel exponential family model. 1. KMC lite expresses the log-density in a smaller dimensional (yet growing) sub-space, via collapsing all $d$ input space dimensions into one. It takes the dual form $$f(x)=\sum_{i=1}^{n}\alpha_{i} k(x_{i},x),$$ where the $x_i$ are $n$ random sub-samples (just like KAMH) from the Markov chain history. Downside: KMC lite has to be re-estimated whenever the $x_i$ change. Advantage: The estimator’s tails vanish outside the $x_i$, i.e. $\lim_{\|x\|_{2}\to\infty}\|\nabla f(x)\|_{2}=0$, which translates into a geometric ergodicity result as we will see later. 2.
KMC finite approximates the model as a finite dimensional exponential family in primal form, $$f(x)=\langle\theta,\phi_{x}\rangle_{{\cal H}_{m}}=\theta^{\top}\phi_{x},$$ where $x\in\mathbb{R}^{d}$ is embedded into a finite dimensional feature space $\phi_{x}\in{\cal H}_{m}=\mathbb{R}^{m}.$ While other choices are possible, we use the Random Kitchen Sinks framework: an $m$-dimensional data independent random Fourier basis. Advantage: KMC finite is an efficient on-line estimator that can be updated at constant costs in $t$ — we can fit it on all of the MCMC trajectory. Disadvantage: Its tails do not decay, so our proof for geometric ergodicity of KMC lite does not apply. Increasing dimensions. So far, we did not work out how the approximation errors propagate through the kernel exponential family estimator, but we plan to do that at some point. The paper contains an empirical study which shows that the gradients are good enough for HMC up to ~100 dimensions — under a “Gaussian like smoothness” assumption. The below plots show the acceptance probability along KMC trajectories and quantify how “close” KMC proposals are to HMC proposals: • as a function of the number of points $n$ and dimension $d$; • for a fixed number of points $n=2000$, as a function of dimension $d$; • for a fixed dimension $d=16$, as a function of the number of points $n$; • how many points are needed to reach an acceptance rate of $0.1,\dots,0.7$, as a function of dimension $d$. From RWM to HMC. Using the well known and (ab)used Banana density as a target, we feed a non-adaptive version of KMC and friends with an increasing number of so-called “oracle” samples (iid from the target), and then quantify how well they mix. While this scenario is totally straw-man, it allows to compare the mixing behaviour of the algorithms after a long burn-in. The below plots show KMC transitioning from a random walk into something close to HMC as the number of “oracle” samples (x-axis) increases.
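As a sketch of what the KMC finite surrogate looks like in code (illustrative only: a plain random Fourier feature map for a Gaussian kernel and the gradient of $f(x)=\theta^{\top}\phi_{x}$, not the paper's actual estimator or its fitting procedure):

```python
import numpy as np

def rff(x, omega, bias):
    """Random Fourier feature map phi_x in R^m; omega has shape (d, m),
    bias shape (m,), drawn once and kept fixed (random kitchen sinks)."""
    m = omega.shape[1]
    return np.sqrt(2.0 / m) * np.cos(x @ omega + bias)

def grad_f(x, theta, omega, bias):
    """Gradient w.r.t. x of the surrogate log-density f(x) = theta^T phi_x,
    which is what an HMC leapfrog integrator would consume."""
    m = omega.shape[1]
    return -np.sqrt(2.0 / m) * omega @ (theta * np.sin(x @ omega + bias))
```

A central finite-difference check confirms the analytic gradient; updating $\theta$ online, rather than recomputing a kernel matrix, is what gives the constant per-iteration cost mentioned above.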
ABC — Reduced simulations and no additional bias. While there is another example in the paper, I want to show the ABC one here, which I find most interesting. Recall in ABC, the likelihood is not available. People often use synthetic likelihoods that are for example Gaussian, which induces an additional bias (in addition to ABC itself) but might improve statistical efficiency. In an algorithm called Hamiltonian ABC, such a synthetic likelihood is combined with stochastic gradient HMC (SG-HMC) via randomized finite differences, called simultaneous perturbation stochastic approximation (SPSA), which works as follows. To evaluate the gradient at a position $\theta$ in sampling space: 1. Generate a random SPSA mask $\Delta$ (set of binary directions) and compute the perturbed points $\theta+\Delta$ and $\theta-\Delta$. 2. Interpolate linearly between the perturbations. That is, simulate from the ABC likelihood at both points, construct the synthetic likelihood, and use their difference. 3. The gradient is a (step-size dependent!) approximation to the unknown gradient of the (biased) synthetic likelihood model. 4. Perform a single stochastic HMC leapfrog step (adding friction as described in the SG-HMC paper). 5. Iterate for $L$ leapfrog iterations. What I find slightly irritating is that this algorithm needs to simulate from the ABC likelihood in every leapfrog iteration — and then discards the gained information after a leapfrog step has been taken. How many leapfrog steps are common in HMC? Radford Neal suggests to start with $L\in[100,1000]$. Quite a few ABC simulations come with this! But there are more issues: 1. SG-HMC mixing. I found that stochastic gradient HMC mixes very poorly when the gradient noise is large. Reducing noise costs even more ABC simulations. Wrongly estimated noise (non-stationary!?) induces bias due to the “always accept” mentality. Decreasing the step-size to account for that further hurts mixing. 2. Bias.
The synthetic likelihood can fail spectacularly, if the true likelihood is skewed for example. KMC does not suffer from either of those problems: It keeps on improving its mixing as it sees more samples, while only requiring a single ABC simulation at each MCMC iteration — rather than HABC’s $2L$ (times noise-reduction-repetitions). It targets the true (modulo ABC bias) posterior, while accumulating information of the target (and its gradients) in the form of the MCMC trajectory. But how do you choose the kernel parameter? Often, this question is a threat for any kernel-based algorithm. For example, for the KAMH algorithm, it is completely unclear (to us!) how to select these parameters in a principled way. For KMC, however, we can simply use cross-validation on the score matching objective function above. In practice, we use a black box optimisation procedure (for example CMA or Bayesian optimisation) to on-line update the kernel hyper-parameters every few MCMC iterations. See an example where our Python code does this here. Just like the updates to the gradient estimator itself, this can be done with a decaying probability to ensure asymptotic correctness. Big Bayes without sub-sampling bias: Paths of Partial Posteriors Together with Dino Sejdinovic and Mark Girolami, I recently arxived our second draft of a paper on the popular topic of how to scale up Bayesian inference when facing large datasets. This project is yet another method to scale up Bayesian inference via sub-sampling. But unlike most other work in this field, we do not use sub-sampling within an MCMC algorithm to simulate from the posterior distribution of interest. Rather, we use sub-sampling to create a series of easier inference problems, whose results we then combine to get the answer to the question we were originally asking — in an estimation sense, without full posterior simulation. I’ll describe the high-level ideas now.
Updates on the project: • Jun: I was invited to present the work at the Big Bayes workshop in Oxford. • Mar: Change of title by popular demand 😉 • Mar: I presented the work at the CSML lunch at UCL. • Feb: Our reply to Christian’s critique was also featured. Thanks! • Feb: We got featured in Xi’an’s Og, including a nice discussion in the comments. • Feb: I presented the work at the Oxford Machine Learning lunch. Good discussion with the crowd. We will probably change the name to Big Bayes without sub-sampling bias. • Jan: I gave a 15 minute talk at the Theory of Big Data workshop at UCL. Let’s assume the following desiderata for large-scale Bayesian estimators: • Average computational costs sub-linear in the data size $N$. • Bounded variance (even for growing $N$) that can be controlled. • No sub-sampling bias is introduced. The method inherits the usual finite time bias from MCMC though. • Not limited to certain models or inference schemes. • Trade off redundancy in the data against computational savings. • Perform competitively in practice. Bayesian inference usually involves integrating a functional $\varphi:\Theta\rightarrow\mathbb{R}$ over a posterior distribution $$\phi=\int d\theta\varphi(\theta)\underbrace{p(\theta|{\cal D})}_{\text{posterior}},$$ where $$p(\theta|{\cal D})\propto\underbrace{p({\cal D}|\theta)}_{\text{data likelihood}}\underbrace{p(\theta)}_{\text{prior}},$$ for example, to compute the predictive posterior mean for a linear regression model.
This is often done by constructing an MCMC algorithm to simulate $m$ approximately iid samples $\theta^{(j)}$ from approximately $p(\theta|{\cal D})$ (the second “approximately” here refers to the fact that MCMC is biased for any finite length of the chain, and the same is true for our method) and then performing Monte Carlo integration $$\phi\approx\frac{1}{m}\sum_{j=1}^{m}\varphi(\theta^{(j)}).$$ Constructing such Markov chains can quickly become infeasible, as usually all data needs to be touched in every iteration. Take for example the standard Metropolis-Hastings transition kernel to simulate from $\pi(\theta)\propto p(\theta|{\cal D})$, where at a current point $\theta^{(j)}$, a proposed new point $\theta'\sim q(\theta|\theta^{(j)})$ is accepted with probability $$\min\left(\frac{\pi(\theta')}{\pi(\theta^{(j)})}\times\frac{q(\theta^{(j)}|\theta')}{q(\theta'|\theta^{(j)})},1\right).$$ Evaluating $\pi$ requires iterating through the whole dataset. A natural question therefore is: is it possible to only use subsets of $\mathcal{D}$? Existing work. There has been a number of papers that tried to use sub-sampling within the transition kernel of MCMC. All these methods have in common that they either introduce bias (in addition to the bias caused by MCMC), have no convergence guarantees, or mix badly. This comes from the fact that it is extremely difficult to modify the Markov transition kernel and maintain its asymptotic correctness. In our paper, we therefore take a different perspective. If the goal is to solve an estimation problem as the one above, we should not make our life hard trying to simulate. We construct a method that directly estimates posterior expectations — without simulating from the full posterior, and without introducing sub-sampling bias. The idea is very simple: 1.
Construct a series of partial posterior distributions $\tilde{\pi}_{l}:=p(\theta|{\cal D}_{l})\propto p({\cal D}_{l}|\theta)p(\theta)$ from subsets of sizes $$\vert\mathcal{D}_1\vert=a,\vert\mathcal{D}_2\vert=2^{1}a,\vert\mathcal{D}_3\vert=2^{2}a,\dots,\vert\mathcal{D}_L\vert=2^{L-1}a$$ of the data. This gives $$p(\theta)=\tilde{\pi}_{0}\rightarrow\tilde{\pi}_{1}\rightarrow\tilde{\pi}_{2}\rightarrow\dots\rightarrow\tilde{\pi}_{L}=\pi_{N}=p(\theta|{\cal D}).$$ 2. Compute posterior expectations $\phi_{t}:=\hat{\mathbb{E}}_{\tilde{\pi}_{t}}\{\varphi(\theta)\}$ for each of the partial posteriors. This can be done for example via MCMC, or other inference methods. This gives a sequence of real valued partial posterior expectations — a sequence that converges to the true posterior expectation. 3. Use the debiasing framework by Glynn & Rhee. This is a way to unbiasedly estimate the limit of a sequence without evaluating all elements. Instead, one randomly truncates the sequence at term $T$ (which should be “early” in some sense; $T$ is a random variable), and then computes $$\phi^*=\sum_{t=1}^T \frac{\phi_{t}-\phi_{t-1}}{\mathbb{P}\left[T\geq t\right]}.$$ This is now an unbiased estimator for the full posterior expectation. The intuition is that one can rewrite any sequence limit as a telescopic sum $$\lim_{t\to\infty} \phi_t=\sum_{t=1}^\infty (\phi_{t}-\phi_{t-1})$$ and then randomly truncate the infinite sum and divide through the truncation probabilities to leave the expectation untouched, i.e. $$\mathbb{E}[\phi^*]=\phi,$$ where $\phi$ is the sequence limit. 4. Repeat $R$ times for reducing the estimator variance as $1/R$. In the paper, we choose the random truncation time as $$\mathbb{P}(T=t)\propto 2^{-\alpha t}.$$ From here, given a certain choice of $\alpha$, it is straight-forward to show the average computational cost of this estimator is $\mathcal{O}(N^{1-\alpha})$, which is sub-linear in $N$. We also show that variance is finite and can be bounded even when $N\to\infty$.
This statement is modulo certain conditions on the $\phi_t$. In addition, the choice of the $\alpha$-parameter of the random truncation time $T$ is crucial for performance. See the paper for details on how to set it. (Figures: concentration; Gaussian example; debiasing illustration.) Experimental results are very encouraging. On certain models, we are able to estimate model parameters accurately before plain MCMC even has finished a single iteration. Take for example a log-normal model $$p(\mu,\sigma|{\cal D})\propto\prod_{i=1}^{N}\log{\cal N}(x_{i}|\mu,\sigma^{2})\times\text{flat prior}$$ and functional $$\varphi((\mu,\sigma))=\sigma.$$ We chose the true parameter $\sigma=\sqrt{2}$ and ran our method on $N=2^{26}$ data points. The pictures below contain some more details. (Figures: data used in debiasing; convergence of the statistic.) Comparison to stochastic variational inference is something we only thought of quite late in the project. As we state in the paper, as mentioned in my talks, and as pointed out in feedback from the community, we cannot do uniformly better than MCMC. Therefore, comparing to full MCMC is tricky — even though there exist trade-offs of data redundancy and model complexity that we can exploit, as shown in the above example. Comparing to other “limited data” schemes really showcases the strength (and generality) of our method. We chose to compare against “Gaussian Processes for Big Data”, which puts together a decade of engineering of variational lower bounds, the inducing features framework for GPs, and stochastic variational inference to solve Gaussian Process regression on roughly a million data points. In contrast, our scheme is very general and not specifically designed for GPs. However, we are able to perform similarly to this state of the art method. On predicting airtime delays, we reach a slightly lower RMSE at comparable computing effort. The paper contains details on our GP, which is slightly different (but comparable) to the one used in the GP for Big Data paper.
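Stepping back to the debiasing construction: it is easy to check numerically on a toy sequence. A sketch (made-up numbers; the prior term $\phi_0$ is added explicitly so the telescoping sum matches the limit):

```python
def debiased_sample(phis, t_probs, T):
    """One Glynn-Rhee sample: phis[t] is the t-th partial posterior
    expectation (phis[0] from the prior), t_probs[t-1] = P(T = t);
    the telescoping sum is truncated at the random time T."""
    tail = lambda t: sum(t_probs[t - 1:])  # P[T >= t]
    return phis[0] + sum((phis[t] - phis[t - 1]) / tail(t)
                         for t in range(1, T + 1))

# toy sequence converging to 1.0, truncation time supported on {1,...,4}
phis = [0.0, 0.5, 0.75, 0.875, 1.0]
t_probs = [0.4, 0.3, 0.2, 0.1]
expectation = sum(p * debiased_sample(phis, t_probs, T)
                  for T, p in enumerate(t_probs, start=1))
```

Averaging over the truncation time recovers the sequence limit (expectation = 1.0 up to rounding), even though most draws of T stop after one or two terms.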
Another neat detail of the GP example is that the likelihood does not factorise, in contrast to most other Bayesian sub-sampling schemes.

[Figures: convergence of RMSE; comparison to GP for Big Data]

In summary, we arrive at:
• Computational costs sub-linear in $N$.
• No introduced sub-sampling bias (beyond that of MCMC, if used).
• Finite, controllable variance.
• A general scheme that is limited neither to MCMC nor to factorising likelihoods.
• A scheme that trades off redundancy for computational savings.
• A practical scheme that performs competitively with state-of-the-art methods and has no problems with transition-kernel design.
• See the paper for a list of weaknesses and open problems.

ICML paper: Kernel Adaptive Metropolis Hastings

$\DeclareMathOperator*{\argmin}{arg\,min}$ $\def\Mz{M_{\mathbf{z},y}}$

At this year's ICML in Beijing, Arthur Gretton presented our paper on Kernel Adaptive Metropolis Hastings [link, poster]. His talk slides are based on Dino Sejdinovic's [link]. Pity that I did not have travel funding. The Kameleon is furious!

The idea of the paper is quite neat: it is basically doing kernel PCA to adapt a random-walk Metropolis-Hastings sampler's proposal based on the Markov chain's history. Or, more fancy: performing a random walk on an empirically learned non-linear manifold induced by an empirical Gaussian measure in a Reproducing Kernel Hilbert Space (RKHS). Sounds scary, but it's not: the proposal distribution ends up being a Gaussian aligned to the history of the Markov chain.

We consider the problem of sampling from an intractable, highly non-linear posterior distribution using MCMC. In such cases, the thing to do™ usually is Hamiltonian Monte Carlo (HMC) (with all its super fancy extensions on Riemannian manifolds etc.). The latter really is the only way to sample from complicated densities by using the geometry of the underlying space. However, in order to do so, one needs access to the target likelihood and its gradient.
Our example is binary classification with a Gaussian Process. Machine learning doesn't get more applied. Well, a bit: we sample the posterior distribution over the hyper-parameters using Pseudo-Marginal MCMC. The latter makes both likelihood and gradient intractable, and the posterior in many cases has an interesting shape. See this plot, which shows the posterior over the parameters of an Automatic Relevance Determination kernel on the UCI Glass dataset. This is not an exotic problem at all, but it has all the properties we are after: non-linearity, and higher-order information unavailable. We cannot do HMC, so we are left with a random walk, which doesn't work nicely here due to the non-linear shape.

The idea of our algorithm is to do a random walk in an infinite-dimensional space. At least almost.

• We run some method to get a rough sketch of the distribution we are after.
• We then embed these points into an RKHS; some infinite-dimensional mean and covariance operators are going on here, but I will spare you the details.
• In the RKHS, the chain samples are Gaussian, which corresponds to a manifold in the input space that in some sense aligns with what we have seen of the target density so far.
• We draw a sample $f$ from that Gaussian: a random walk in the RKHS.
• This sample lives in an infinite-dimensional space, so we need to map it back to the original space, just like in kernel PCA.
• It turns out that this is hard (nice word for impossible). So let's just take a gradient step along a cost function that minimises distances in the way we want: \[\argmin_{x\in\mathcal{X}}\left\Vert k\left(\cdot,x\right)-f\right\Vert_{\mathcal{H}}^{2}\]
• Cool thing: this happens in input space and gives us a point whose embedding is in some sense close to the RKHS sample.
• Wait, everything is Gaussian! Let's be mathy and integrate the RKHS sample (and the gradient step) out.
Et voilà, we get a Gaussian proposal distribution in input space \[q_{\mathbf{z}}(\cdot|x_{t})=\mathcal{N}(x_{t},\gamma^{2}I+\nu^{2}M_{\mathbf{z},x_{t}}HM_{\mathbf{z},x_{t}}^{\top}),\] where $M_{\mathbf{z},x_{t}}=2\eta\left[\nabla_{x}k(x,z_{1})|_{x=x_{t}},\ldots,\nabla_{x}k(x,z_{n})|_{x=x_{t}}\right]$ is based on kernel gradients (which are all easy to get). This proposal aligns with the target density. It is clear (to me) that using this for sampling is better than just an isotropic one. Here are some pictures of bananas and flowers.

The paper puts this idea in the form of an adaptive MCMC algorithm, which learns the manifold structure on the fly. One needs to be careful about certain details, but that's all in the paper. Compared to existing linear adaptive samplers, which adapt to the global covariance structure of the target, our version adapts to the local covariance structure. This can all be formalised mathematically. For example, for the Gaussian kernel, the above proposal covariance becomes \[\left[\text{cov}[q_{\mathbf{z}}(\cdot|y)]\right]_{ij} = \gamma^{2}\delta_{ij} + \frac{4\nu^{2}}{\sigma^{4}}\sum_{a=1}^{n}\left[k(y,z_{a})\right]^{2}(z_{a,i}-y_{i})(z_{a,j}-y_{j}) +\mathcal{O}(n^{-1}),\] where the previous points $z_a$ influence the covariance, weighted by their similarity $k(y,z_a)$ to the current point $y$. Pretty cool!

We also have a few "we beat all competing methods" plots, but I find those uninformative and will spare you here, though they are needed for publication 😉 Dino and I have also written a pretty Python implementation (link) of the above, where the GP Pseudo-Marginal sampling is done with Shogun's ability to importance sample marginal likelihoods of non-conjugate GP models (link). Pretty cool!

Russian Roulette for intractable Likelihoods

December 2015: The journal version of the paper finally got published, after just three years.
While I was working at UCL's Statistics Department in winter, I got involved in a very exciting project in the group of Mark Girolami. It is based around the Pseudo-Marginal Metropolis-Hastings algorithm. In 2003, a genetics paper [1] described an approach to sample a distribution using the standard Metropolis-Hastings algorithm when the density function is not available, by simply replacing it with an unbiased estimate. For a standard Bayesian inference problem with likelihood $\pi(y|\theta)$, prior $\pi(\theta)$, and a proposal $Q$, rather than using the standard M-H ratio $$\frac{\pi(y|\theta^{\text{new}})}{\pi(y|\theta)}\times\frac{\pi(\theta^{\text{new}})}{\pi(\theta)}\times \frac{Q(\theta|\theta^{\text{new}})}{Q(\theta^{\text{new}}|\theta)},$$ the likelihood is replaced by an unbiased estimator as $$\frac{\hat{\pi}(y|\theta^{\text{new}})}{\hat{\pi}(y|\theta)}\times\frac{\pi(\theta^{\text{new}})}{\pi(\theta)}\times \frac{Q(\theta|\theta^{\text{new}})}{Q(\theta^{\text{new}}|\theta)}.$$ Remarkably, the resulting Markov chain converges to the same posterior distribution as the exact algorithm. The approach was later formalised and popularised in [2].

In our project, we exploited this idea to perform inference over models whose likelihood functions are intractable. Examples of such intractable likelihoods are Ising models or, even simpler, very large Gaussian models; both have normalising constants that are very hard to compute. We came up with a way of producing unbiased estimators for the likelihoods, based on writing a likelihood as an infinite sum and then truncating it stochastically.

Producing unbiased estimators for the Pseudo-Marginal approach is a very challenging task: estimates have to be strictly positive. This can be achieved by pulling the sign of the estimates out into the final Monte Carlo integral estimate and adding a correction term (which increases the variance of the estimator).
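The replacement of the exact ratio by the noisy one can be demonstrated on a toy target. All modelling choices below are my assumptions, not from the text: the "likelihood" is a standard normal density with a flat prior, and the unbiased estimator multiplies it by log-normal noise with unit mean. The characteristic pseudo-marginal detail is that the estimate for the current state is recycled rather than refreshed.

```python
import numpy as np

rng = np.random.default_rng(2)
NOISE = 0.5   # std of the log of the multiplicative noise

def log_lik_hat(theta):
    """Unbiased estimate of the N(0,1) density (up to a constant): p(theta) * W, E[W] = 1."""
    return -0.5 * theta ** 2 + rng.normal(-0.5 * NOISE ** 2, NOISE)

def pseudo_marginal_mh(n_iters=20000, step=2.0):
    theta, log_p = 0.0, log_lik_hat(0.0)
    samples = np.empty(n_iters)
    for i in range(n_iters):
        prop = theta + step * rng.normal()
        log_p_prop = log_lik_hat(prop)          # fresh estimate for the proposal...
        if np.log(rng.uniform()) < log_p_prop - log_p:
            theta, log_p = prop, log_p_prop     # ...but the current estimate is kept on rejection
        samples[i] = theta
    return samples

s = pseudo_marginal_mh()
print(s.mean(), s.var())   # the chain still targets N(0,1) despite the noisy ratios
```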
The issue of keeping the estimates positive is studied under the term sign problem. The next step is to write the likelihood function as an infinite sum. In our paper, we do this for a geometrically tilted correction of a biased estimator obtained from an approximation (such as importance sampling estimates, upper bounds, or deterministic approximations), and for likelihoods based on the exponential function. I in particular worked on the exponential-function estimate.

We took a very nice example from spatial statistics: a worldwide grid of ozone measurements from a satellite, consisting of about 173,405 measurements. We fitted a simple Gaussian model whose covariance matrices are massive (and sparse). In such models, of the form $$\log \mathcal{N}_x(\mu,\Sigma)=-\frac{1}{2}\log\det(\Sigma) - \frac{1}{2}(x-\mu)^T \Sigma^{-1}(x-\mu) + C,$$ the normalising constant involves the log-determinant of a very large matrix. Computing it with classical methods such as the Cholesky factorisation $$\Sigma=LL^T \Rightarrow \log\det(\Sigma)=2\sum_i\log(L_{ii})$$ is impossible due to memory constraints: the Cholesky factor $L$ cannot be stored, since it is in general not sparse. We therefore constructed an unbiased estimator using a very neat method based on graph colourings and Krylov methods from [3]. This unbiased estimator of the log-likelihood is then turned into a (positive) unbiased estimator of the likelihood itself by writing the exponential function as an infinite series $$\exp(\log\det(\Sigma))=1+\sum_{i=1}^\infty \frac{\log\det(\Sigma)^i}{i!}.$$ We then construct an unbiased estimator of this series by playing Russian Roulette: we evaluate the terms in the series, plugging in a fresh estimate of $\log\det(\Sigma)$ for every power $i$; once those terms are small, we start flipping a coin at every term to decide whether we continue the series or not. If we do continue, we add weights that ensure unbiasedness. We also make it less likely to continue at every iteration, so that the procedure eventually stops.
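The exponential-series construction with Russian-roulette truncation can be sketched as follows. This is a simplified stand-in for the paper's estimator: `lam_hat` is a stream of unbiased estimates of $\log\det(\Sigma)$ (here just a noisy constant, an assumption), a fresh draw is used at every order of the series, and, unlike the decreasing continuation probability described above, a constant survival probability `p` is used after a few deterministic terms.

```python
import numpy as np

rng = np.random.default_rng(3)

def roulette_exp(lam_hat, p=0.95, warmup=5):
    """Unbiased estimate of exp(lambda), given a stream of unbiased estimates of lambda.

    Evaluates 1 + sum_i (product of i fresh lambda-estimates) / i!, and after `warmup`
    deterministic terms continues each further term only with probability p,
    dividing by the survival probability to preserve unbiasedness.
    """
    total, term, survival = 1.0, 1.0, 1.0
    i = 0
    while True:
        i += 1
        if i > warmup:
            if rng.uniform() > p:     # roulette: stop here, before adding term i
                return total
            survival *= p             # P(the series is still running at order i)
        term *= lam_hat() / i         # unbiased estimate of lambda^i / i!
        total += term / survival

lam = 1.3
draws = [roulette_exp(lambda: lam + 0.1 * rng.normal()) for _ in range(20000)]
print(np.mean(draws), np.exp(lam))   # the average matches exp(lambda)
```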
This Russian-roulette idea (borrowed from physics papers from some 20 years ago), together with some technical details and computational tricks, gives an unbiased estimator of the likelihood of our Gaussian model, including its log-determinant, which can therefore be plugged into Pseudo-Marginal M-H. This allows us to perform Bayesian inference over models of sizes where it had been impossible before. More details can be found on our project page (link, see ozone link), and in our paper draft on arXiv (link). One of my this year's Google Summer of Code projects for the Shogun machine-learning toolbox is about producing a sophisticated implementation of log-determinant estimators (link). Pretty exciting!

[1]: Beaumont, M. A. (2003). Estimation of population growth or decline in genetically monitored populations. Genetics, 164, 1139–1160.
[2]: Andrieu, C., & Roberts, G. O. (2009). The pseudo-marginal approach for efficient Monte Carlo computations. The Annals of Statistics, 37(2), 697–725.
[3]: Aune, E., Simpson, D., & Eidsvik, J. (2012). Parameter estimation in high dimensional Gaussian distributions.
Area Of Composite Shapes Worksheet With Answers

Find the area of each of the composite figures. Find the perimeter of each of the composite figures. Work out the area of each of these shapes. Use the buttons below to print. Click here for answers.

What is the area of composite shapes? The area of composite shapes is the amount of space inside a shape composed of basic shapes. The idea behind an area-of-composite-shapes worksheet is to decompose the compound shape into basic shapes and find the area of each. Students can use math worksheets to master a math skill through practice, in a study group, or for peer tutoring. This compilation of printable area of compound shapes worksheets is aimed at students of 6th grade, 7th grade, and 8th grade. In these area of compound shapes PDFs, learners in grade 7 and grade 8 mentally work out the basic shapes involved and calculate the area of each.
Number Theory | Brilliant Math & Science Wiki

Number theory is the study of properties of the integers. Because of the fundamental nature of the integers in mathematics, and the fundamental nature of mathematics in science, the famous mathematician and physicist Gauss wrote: "Mathematics is the queen of the sciences, and number theory is the queen of mathematics." There is an abundance of simply formulated questions about the integers that involve little more than the basics of addition and multiplication (the ring operations on the integers), but which are nevertheless unsolved or extremely difficult to solve. For example:

• Which integers can be written as the sum of four squares?
• If \(n\ge 3\) is a positive integer, can a positive \(n^\text{th}\) power ever be written as the sum of two positive \(n^\text{th}\) powers?
• Is there a rectangular box whose side lengths, face diagonal lengths, and body diagonal length are all integers?

The first question was solved long ago by Lagrange: every nonnegative integer can be written in this way. The second question is due to the \(17^\text{th}\)-century mathematician Fermat, who famously claimed in 1637 to have discovered a proof that the answer is no. Though the answer is no, this was not rigorously established until 1995, when Andrew Wiles completed a difficult and sophisticated proof that built on the work of dozens of leading contemporary mathematicians. The result is popularly known as Fermat's last theorem. The third question is one of many number-theoretic questions whose answer is currently unknown. It is possible to produce infinitely many boxes (sometimes called Euler bricks) for which all the sides and face diagonals are integers, but no one has been able to construct such a box with integer body diagonal length as well, or to prove that no such box exists.
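Lagrange's answer to the first question can be checked by brute force for small integers; the naive search below is my own illustration, not part of the article:

```python
from itertools import product

def four_squares(n):
    """Return (a, b, c, d) with a^2 + b^2 + c^2 + d^2 = n; one exists by Lagrange's theorem."""
    bound = int(n ** 0.5) + 1
    for quad in product(range(bound), repeat=4):
        if sum(q * q for q in quad) == n:
            return quad

print(four_squares(7))   # (1, 1, 1, 2)
```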
The first nontrivial facts about the integers relate to the concept of divisibility: if \(a,b\) are two integers, then \(b|a\) (read "\(b\) divides \(a\)") if and only if there is an integer \(c\) such that \(bc=a.\) The integers have a division algorithm, where two integers can be divided with remainder: for any \(a,b \in {\mathbb Z}\) with \(b \ne 0,\) there is a unique integer \(q\) and a unique integer \(r\) with \(0 \le r < |b|\) satisfying \(a=bq+r.\) Here \(q\) is the quotient and \(r\) is the remainder. In the division algorithm, if \(d|a\) and \(d|b,\) then \(d|r.\) So iterating the division algorithm gives a way to compute the greatest common divisor of \(a\) and \(b,\) called the Euclidean algorithm. The greatest common divisor also satisfies Bezout's identity: the integers of the form \(ax+by\) are precisely the multiples of the greatest common divisor of \(a\) and \(b,\) and in particular, the \(\gcd\) itself can be written as \(ax+by\) for some integers \(x,y.\) In fact, possible values of \(x\) and \(y\) can be computed using the extended Euclidean algorithm. These computations and algorithms are useful in various applications of elementary number theory; see below for examples.

Main article: Prime numbers

Let \(a\) be a positive integer. If \(b\) is a divisor of \(a\) that is not equal to \(1\) or \(a\) (i.e. \(b\) is a nontrivial proper divisor of \(a\)), then \(a\) can be factored as \(a=bc,\) where \(b\) and \(c\) are nontrivial proper divisors of \(a.\) Then we can break down \(b\) and \(c\) similarly to get an expression for \(a\) as a product of smaller divisors. The process stops only when each of the divisors in the product cannot be broken down further; in other words, when the divisors in the product do not have any nontrivial proper divisors. If \(a=12,\) then \(b=2\) is a nontrivial proper divisor.
So \(12 = 2 \cdot 6.\) Now \(2\) has no nontrivial proper divisors, but \(6\) does: \(2|6,\) so \(6 = 2 \cdot 3\) and \(12 = 2 \cdot 2 \cdot 3.\) Integers greater than \(1\) with no nontrivial proper divisors are called prime numbers. The above discussion shows that every integer can be decomposed as a product of prime numbers. So the prime numbers are the multiplicative building blocks of the integers. This fact is so important that it is known as the fundamental theorem of arithmetic:

Every positive integer can be written as a product of prime numbers. The factorization is unique up to rearrangement of the factors.

The distribution of primes inside the integers is a difficult and rich topic of research. There are infinitely many primes, a fact which was known to Euclid. A much more sophisticated estimate is the prime number theorem, which says roughly that the probability of a random integer \( \le x\) being prime is about \(\frac1{\ln x}.\) Other, more precise estimates often involve the Riemann hypothesis, a deep and unsolved conjecture about the zeroes of a certain complex-valued function. Still other elementary questions about the prime numbers remain open. In particular, the twin prime conjecture that there are infinitely many prime numbers \(p\) such that \(p+2\) is also prime, and the Goldbach conjecture that any even integer \(\ge 4\) can be written as a sum of two primes, are still unsolved despite centuries of efforts by countless mathematicians.

Results involving divisibility are often most easily stated using modular arithmetic. If two integers \(a\) and \(b\) leave the same remainder when divided by an integer \(n,\) we write \(a \equiv b \pmod n,\) read "\(a\) is congruent to \(b\) mod \(n.\)" For example, \(64 \equiv 1 \pmod 7,\) or \(-1007 \equiv 3 \pmod{10}.\) Basic rules of modular arithmetic help explain various divisibility tests learned in elementary school.
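Returning to the fundamental theorem of arithmetic: the repeated splitting described above is exactly trial division, sketched here in Python (my own illustration):

```python
def factorize(n):
    """Decompose n > 1 into its prime factors by repeatedly splitting off small divisors."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:     # d has no smaller divisors left, so it is prime at this point
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                 # whatever remains has no nontrivial proper divisor: it is prime
        factors.append(n)
    return factors

print(factorize(12))   # [2, 2, 3], matching 12 = 2 * 2 * 3
```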
One such divisibility test: every positive integer is congruent \(\pmod 3\) to the sum of its digits. For example, \(268 = 100 \cdot 2 + 10 \cdot 6 + 1 \cdot 8\) is congruent to \(1 \cdot 2 + 1 \cdot 6 + 1 \cdot 8,\) because \(100, 10,\) and \(1\) are all congruent to \(1\) mod \(3.\) This holds for any power of \(10:\) \(10^k \equiv 1^k \equiv 1 \pmod 3,\) so the statement is true no matter how many digits the integer has. The operations of addition, subtraction, and multiplication work as expected, but there is also a form of division. In particular, the equation \(x \equiv 1/a \pmod n\) makes sense if \(ax \equiv 1 \pmod n.\) Such an \(x\) exists if and only if \(\text{gcd}(a,n) = 1,\) by Bezout's identity. The set \(\{0,1,\ldots,n-1\}\) of representatives of integers mod \(n,\) with operations given by modular addition and modular multiplication, is often called \({\mathbb Z}_n.\) The subset of \({\mathbb Z}_n\) consisting of invertible elements is called \({\mathbb Z}_n^*\); it is closed under multiplication and has \(\varphi(n)\) elements, where \(\varphi\) is Euler's totient function. In particular, when \(p\) is prime, \({\mathbb Z}_p^*\) consists of all the elements of \({\mathbb Z}_p\) except \(0,\) so it has \(p-1\) elements. This is the setup for one of the first nontrivial theorems of elementary number theory, known as Fermat's little theorem:

If \(p\) is prime and \(p \nmid a,\) then \(a^{p-1} \equiv 1 \pmod p.\)

This generalizes to any modulus \(n\):

Euler's theorem: If \(\text{gcd}(a,n)=1,\) then \(a^{\varphi(n)} \equiv 1 \pmod n.\)

These theorems have a myriad of diverse applications to, for instance, primality testing, periods of decimal expansions of fractions, and modern cryptography. Some examples of the latter will be outlined in the following section.

Applications to Cryptography

Some basic problems in elementary number theory are well-suited for use in modern cryptography.
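Fermat's and Euler's theorems, which underpin the cryptographic applications below, are easy to check numerically; `phi` here is a deliberately naive totient written for illustration:

```python
from math import gcd

def phi(n):
    """Euler's totient: the number of 1 <= k <= n with gcd(k, n) = 1."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

n, a = 20, 7                      # gcd(7, 20) = 1, so Euler's theorem applies
print(phi(n), pow(a, phi(n), n))  # 8 1 -- a^phi(n) is congruent to 1 mod n
```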
Many cryptosystems require a computationally difficult one-way process, which is quick to do but hard to reverse. The two most common such processes both come from number theory. One such process is factoring: it is easy for a computer to multiply two large prime numbers together, but very difficult to recover the prime numbers given their product. If \(N = pq\) where \(p,q\) are large prime numbers, finding \(p\) and \(q\) is equivalent in difficulty to finding \(\varphi(N) = pq-p-q+1.\) This fact, coupled with Euler's theorem, is the foundation of the widely used RSA cryptosystem. Another such process is the so-called discrete log problem: given coprime integers \(a\) and \(n\) (where \(n\) is taken to be very large), and an integer \(k,\) it is easy to compute \(a^k \pmod n,\) but it is difficult to recover \(k\) given \(a,n,\) and \(a^k \pmod n.\) This idea is the foundation of the Diffie-Hellman protocol. More recently, these ideas have been extended and enriched by replacing modular arithmetic with the more exotic operations on points on elliptic curves.

Many of the oldest questions in number theory involve what are now known as Diophantine equations: polynomial equations in multiple variables with integer coefficients, where the unknowns are constrained to be integers as well. Indeed, the problem of finding the solutions to the simplest linear Diophantine equation, \(ax+by=c,\) is essentially the content of Bezout's identity. Very simple Diophantine equations often have very difficult solutions. The techniques involved often come from other areas of mathematics, most notably algebraic geometry: the solutions to a Diophantine equation can be viewed as special points (points with integer coordinates) on a larger geometric object like a curve or surface, cut out by the real or complex solutions to the same equation. For example, the positive integer solutions to Pythagoras's equation \(x^2+y^2=z^2,\) known as Pythagorean triples, were of interest more than 2500 years ago.
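Returning to the RSA construction sketched above, a textbook toy instance makes it concrete (tiny primes, utterly insecure, purely illustrative):

```python
p, q = 61, 53
N, phi_N = p * q, (p - 1) * (q - 1)   # phi(N) = pq - p - q + 1
e = 17                                # public exponent, coprime to phi(N)
d = pow(e, -1, phi_N)                 # private exponent: modular inverse (Python 3.8+)

msg = 65
cipher = pow(msg, e, N)               # encryption: m^e mod N
print(pow(cipher, d, N))              # 65 -- decryption recovers the message, by Euler's theorem
```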
Dividing Pythagoras's equation through by \(z^2\) gives \((x/z)^2+(y/z)^2 = 1,\) so a Pythagorean triple corresponds to a point \((x/z,y/z)\) with rational coordinates on the circle \(X^2+Y^2=1.\) One way to parameterize all such points is via the geometric trick of starting with a rational point on the circle (say \((0,1)\)), drawing a line with rational slope through it, and looking for the other point of intersection, which is guaranteed to be rational. Generalizations of Pythagoras's equation abound, but the degree of difficulty rises quickly. In particular, increasing the exponent leads to a famously difficult theorem.

Fermat's last theorem: The equation \(x^n+y^n=z^n\) has no solutions in positive integers \(x,y,z\) for any integer value of \(n \ge 3.\)

Restricting attention to sums of squares leads to fundamental classical results about which integers can be expressed as sums of two, three, and four squares; in particular, every positive integer is the sum of four squares. Asking about sums of higher powers leads to an even deeper set of results known collectively as Waring's problem. On the other hand, the Euler brick question in the introduction gives an example of a specific system of Diophantine equations whose solutions have not yet been determined. There is even a general negative result known as Hilbert's tenth problem, which states that there can be no universal algorithm for determining whether an arbitrary Diophantine equation has a solution. Current research in this area tends to center on specific types of Diophantine equations which are of particular interest to mathematicians for some reason (elliptic curves are an especially fertile subject), but there are still many unsolved problems along these lines.

Main article: Algebraic number theory

Here is a problem that can be solved using properties of rings other than the integers. (The preliminary analysis uses modular arithmetic in a common way as well.)

Find all integer solutions to \(y^2=x^3-1.\)

Here is a proof sketch.
First note that if \(x\) is even, then \(y^2 \equiv 3 \pmod 4,\) which is impossible, so \(x\) is odd and thus \(y\) is even. Rewrite as \(y^2+1=x^3\) and factor \[ (y+i)(y-i) = x^3. \] Any common factor of \(y+i\) and \(y-i\) must be a common factor of their difference \(2i = (1+i)^2.\) It is not hard to check that \(1+i\) is prime, in the following sense: its only divisors are the units (divisors of 1) \(\pm 1, \pm i\) and unit multiples of itself. But if \(1+i\) divides \(y+i,\) say \((1+i)(c+di) = y+i,\) we get \(y=c-d\) and \(1 = c+d,\) which would imply that \(y\) was odd, which is impossible. So \(y+i\) and \(y-i\) are relatively prime. Two numbers that are relatively prime whose product is a cube must both be cubes themselves. But \(y+i = (a+bi)^3\) gives \(y=a^3-3ab^2\) and \(1=3a^2b-b^3 = b(3a^2-b^2).\) This only works if \(b\) and \(3a^2-b^2\) are both \(\pm 1,\) which is only possible if \(b=-1\) and \(a=0.\) This leads to \(y=0, x=1,\) which is the only solution.

The proof sketch above can be made rigorous, but there are some holes in the proof as written; in particular, the first two sentences of the last paragraph assume that the Gaussian integers, the complex numbers of the form \(a+bi\) where \(a,b\) are integers, have similar properties to the integers themselves. In particular, there needs to be a well-defined sense of "relatively prime" and an analogue of the fundamental theorem of arithmetic in the ring \({\mathbb Z}[i]\) of Gaussian integers. While the ring \({\mathbb Z}[i]\) does have the properties required to make the above proof rigorous (most notably, a form of unique factorization into irreducible elements), other, similar rings do not have the same properties. In the ring \({\mathbb Z}[\sqrt{-5}],\) factorization into irreducible elements is not necessarily unique. For example, \[ 6 = 2\cdot 3 = (1+\sqrt{-5})(1-\sqrt{-5}) \] and each of the four factors is irreducible (its only factors are \(\pm\) itself or 1).
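The conclusion of the sketch, that \((x, y) = (1, 0)\) is the only integer solution of \(y^2 = x^3 - 1,\) agrees with a direct search over a small window (a sanity check I added, not a proof):

```python
# Any solution with |y| <= 300 forces x^3 = y^2 + 1 <= 90001, so 1 <= x <= 44
solutions = [(x, y) for x in range(-45, 46) for y in range(-300, 301) if y * y == x ** 3 - 1]
print(solutions)   # [(1, 0)]
```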
Incorrect assumptions about rings like \({\mathbb Z}[\sqrt{-5}]\) led to false starts at proofs of Fermat's last theorem in the 19th century. The theory of such rings is beautiful and interesting on its own, but it also has many applications to down-to-earth questions about the integers. Here are a few examples:

• as in the example above, for many specific Diophantine equations, finding solutions and showing that a given solution set is complete by using arithmetic in a ring larger than the integers
• proofs of Fermat's two-square theorem (using the Gaussian integers) and Lagrange's four-square theorem (using quaternions)
• a deeper understanding of quadratic reciprocity and generalizations of it to higher reciprocity laws
• primality testing (via e.g. the Lucas-Lehmer primality test) and factorization algorithms (via e.g. the number field sieve)

The use of complex analysis to solve difficult number-theoretic problems has a rich history, dating back nearly 200 years to the German mathematician Dirichlet. The original motivating problem was the prime number theorem, which gives an effective asymptotic estimate for the number \(\pi(x)\) of primes \( \le x.\) Dirichlet himself used complex-analytic tools to prove his famous theorem on the density of primes in arithmetic progressions: he showed that for coprime positive integers \(a\) and \(n,\) there are infinitely many primes congruent to \(a\) mod \(n,\) but his actual result was much stronger: of the \(\varphi(n)\) possible congruence classes for a prime mod \(n,\) Dirichlet showed that the primes were (asymptotically) distributed equally among those classes. The techniques of analytic number theory can often be described in the following general way:

• Create a complex-valued function related to number-theoretic objects of interest, such as prime numbers or sums of powers;
• Use tools of complex analysis (e.g.
line integration, the residue theorem, analytic/meromorphic continuation) to gain analytic information about the function; • Develop correspondences between analytic properties of the function and the original number-theoretic objects; • Translate the analytic information to results (often, asymptotic estimates) of the number-theoretic objects one wishes to study. A classical example of this technique involves the Riemann zeta function \[ \zeta(s) = \sum_{n=1}^{\infty} \frac1{n^s}, \] where \(s\) is a complex number whose real part is larger than \(1.\) Expressing \(\zeta(s)\) as a certain integral allows it to be extended (or "continued") to a function defined everywhere on the complex plane with the exception of a simple pole at \(s=1.\) The relationship to the prime numbers comes via the Euler product representation \[ \zeta(s) = \prod_{p \text{ prime}} \frac1{1-p^{-s}}. \] Famously, facts about the values of \(s\) for which \(\zeta (s) = 0\) correspond to deep facts about the distribution of prime numbers. There are "trivial" zeroes at \( s = -2, -4, \ldots,\) but it is immediate from the form of the analytic continuation that the other zeroes must all lie in the strip \( 0 \le \text{Re}(s) \le 1.\) The Prime Number Theorem turns out to be equivalent to the statement that there are no zeroes on the edge of the strip, the line \(\text{Re}(s) = 1.\) In fact, computational evidence suggests that all the zeroes lie in the center of the strip, \(\text{Re}(s) = 1/2;\) and this is the famous Riemann hypothesis. It is still unsolved, and it is of great interest precisely because it implies many interesting facts about the distribution of primes, including an improvement on the accuracy of the estimate given in the Prime Number Theorem. Generalizations of the zeta function called Dirichlet series can be used to study other arithmetic functions as well. 
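The Euler product identity can be checked numerically. The sketch below (a rough truncation, not a proof) compares a partial sum of the Dirichlet series for \(\zeta(2)\) with a partial Euler product over small primes; both approach \(\pi^2/6\):

```python
from math import pi

def is_prime(n):
    # simple trial division, adequate for small n
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

s = 2
partial_sum = sum(1 / n**s for n in range(1, 100_001))
euler_product = 1.0
for p in range(2, 1000):
    if is_prime(p):
        euler_product *= 1 / (1 - p**(-s))

print(partial_sum, euler_product, pi**2 / 6)
```

The truncated product converges more slowly than the truncated sum here, but both agree with \(\pi^2/6 \approx 1.64493\) to several digits.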
Another famous result proved using analytic techniques (the circle method of Hardy and Littlewood, later sharpened by Vinogradov) was an explicit estimate of the number of powers involved in Waring's problem: (Vinogradov) Let \(k\) be a positive integer. Every sufficiently large positive integer can be written as a sum of \(G(k)\) nonnegative \(k\)th powers, where \(G(k) \le 3k \log(k) + 11k.\) See the article on Ramanujan for other examples.
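For small cases one can compute the related quantity \(g(k)\) (the number of \(k\)th powers needed for *every* positive integer, not just sufficiently large ones) directly. A dynamic-programming sketch for \(k=3\), illustrating the classical fact that \(g(3)=9\), with 23 and 239 the only integers needing all nine cubes:

```python
# Least number of positive cubes summing to each n <= N, by DP.
N = 1000
cubes = [c**3 for c in range(1, 11) if c**3 <= N]
min_cubes = [0] * (N + 1)
for n in range(1, N + 1):
    # only entries at smaller indices are read, so they are already final
    min_cubes[n] = 1 + min(min_cubes[n - c] for c in cubes if c <= n)

print(max(min_cubes[1:]))                                   # -> 9
print([n for n in range(1, N + 1) if min_cubes[n] == 9])    # -> [23, 239]
```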
{"url":"https://brilliant.org/wiki/number-theory/","timestamp":"2024-11-05T16:19:54Z","content_type":"text/html","content_length":"69747","record_id":"<urn:uuid:e865a2e6-ca5e-46c6-bf3a-cf58413fb7f9>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00511.warc.gz"}
Present Value and Future Value Equations: Understanding the Time Value of Money B.1.1 Present Value and Future Value Equations In the realm of finance and investment, understanding the time value of money (TVM) is crucial. This principle asserts that a dollar today is worth more than a dollar in the future due to its potential earning capacity. This core concept underpins many financial decisions, from personal savings to corporate investment strategies. In this section, we will delve into the fundamental equations of present value (PV) and future value (FV), explore their applications, and understand their significance in financial planning. The Time Value of Money Principle The time value of money is a foundational concept in finance, reflecting the idea that money available now is more valuable than the same amount in the future. This is because money can earn interest or be invested, generating additional income over time. The TVM principle is essential for evaluating investment opportunities, comparing cash flows, and making informed financial decisions. Future Value (FV) Equation The future value equation calculates the value of an investment at a specific point in the future, given a certain interest rate and time period. The formula is: $$ FV = PV \times (1 + r)^n $$ • \( FV \) is the future value of the investment. • \( PV \) is the present value or initial amount invested. • \( r \) is the interest rate per period (expressed as a decimal). • \( n \) is the number of periods. Example 1: Calculating Future Value Suppose you invest $1,000 at an annual interest rate of 5% for 3 years. To find the future value of this investment, you would use the FV formula: $$ FV = \$1,000 \times (1 + 0.05)^3 = \$1,000 \times 1.157625 = \$1,157.63 $$ This means that after 3 years, your investment will grow to $1,157.63. Present Value (PV) Equation The present value equation determines the current worth of a future sum of money, discounted at a specific interest rate. 
The formula is: $$ PV = \frac{FV}{(1 + r)^n} $$ • \( PV \) is the present value or the amount you would need to invest today. • \( FV \) is the future value or the amount you expect to receive in the future. • \( r \) is the discount rate per period (expressed as a decimal). • \( n \) is the number of periods. Example 2: Calculating Present Value Imagine you expect to receive $1,500 in 4 years, and the discount rate is 6%. To find the present value, you would use the PV formula: $$ PV = \frac{\$1,500}{(1 + 0.06)^4} = \frac{\$1,500}{1.262477} = \$1,188.14 $$ This calculation shows that $1,188.14 is the amount you would need to invest today to have $1,500 in 4 years at a 6% interest rate. Impact of Interest Rates and Time Periods Interest rates and time periods significantly impact the present and future values of investments. Understanding these effects is vital for making sound financial decisions. Higher Interest Rates • Future Value: Higher interest rates increase the future value of an investment due to the power of compounding. Compounding refers to the process where the value of an investment grows exponentially over time as interest is earned on both the initial principal and accumulated interest. • Present Value: Conversely, higher discount rates decrease the present value of future cash flows. This reflects the increased opportunity cost or risk associated with waiting for future payments. Longer Time Periods • Future Value: The longer the investment period, the greater the future value, assuming a positive interest rate. This is because the investment has more time to compound. • Present Value: Longer periods reduce the present value of future cash flows, as the money is tied up for a more extended period, increasing the opportunity cost. Importance of Consistent Units When using PV and FV formulas, it is crucial to ensure that the interest rate and the number of periods are consistent.
For example, if you are calculating the future value of an investment with an annual interest rate, the number of periods should also be in years. Inconsistencies can lead to inaccurate calculations and misguided financial decisions. Key Takeaways • Time Value of Money: The TVM principle is essential for evaluating investment opportunities, comparing cash flows, and making informed financial decisions. • Future Value Equation: Use the FV formula to determine the value of an investment at a future date, considering interest rates and time periods. • Present Value Equation: Use the PV formula to calculate the current worth of a future sum of money, accounting for discount rates and time periods. • Interest Rates and Time Periods: Higher interest rates and longer periods increase future values and decrease present values, highlighting the importance of compounding and opportunity cost. • Consistent Units: Ensure that the interest rate and the number of periods are consistent to avoid errors in calculations. Understanding and applying the concepts of present value and future value are fundamental skills for anyone involved in finance and investment. These calculations provide a framework for assessing the value of money over time, enabling individuals and businesses to make strategic financial decisions. Quiz Time! 📚✨ Quiz Time! ✨📚 ### What is the fundamental principle behind the time value of money? - [x] A dollar today is worth more than a dollar in the future due to its potential earning capacity. - [ ] A dollar today is worth less than a dollar in the future due to inflation. - [ ] A dollar today has the same value as a dollar in the future. - [ ] A dollar today is worth more than a dollar in the future due to depreciation. > **Explanation:** The time value of money principle states that a dollar today is worth more than a dollar in the future because it can be invested to earn interest. ### Which formula is used to calculate the future value of an investment? 
- [x] \\( FV = PV \times (1 + r)^n \\) - [ ] \\( FV = \frac{PV}{(1 + r)^n} \\) - [ ] \\( FV = PV \times (1 - r)^n \\) - [ ] \\( FV = \frac{PV}{(1 - r)^n} \\) > **Explanation:** The future value formula \\( FV = PV \times (1 + r)^n \\) calculates the value of an investment at a future date, considering interest rates and time periods. ### How does a higher interest rate affect the future value of an investment? - [x] It increases the future value due to compounding. - [ ] It decreases the future value due to compounding. - [ ] It has no effect on the future value. - [ ] It decreases the future value due to inflation. > **Explanation:** A higher interest rate increases the future value of an investment because the investment grows at a faster rate due to compounding. ### What is the present value of \$1,500 expected in 4 years at a 6% discount rate? - [x] \$1,188.14 - [ ] \$1,500.00 - [ ] \$1,262.48 - [ ] \$1,000.00 > **Explanation:** Using the present value formula \\( PV = \frac{\$1,500}{(1 + 0.06)^4} \\), the present value is calculated to be \$1,188.14. ### Which factor decreases the present value of future cash flows? - [x] Higher discount rates - [ ] Lower discount rates - [ ] Shorter time periods - [ ] Lower interest rates > **Explanation:** Higher discount rates decrease the present value of future cash flows, reflecting increased opportunity cost or risk. ### What happens to the future value of an investment with a longer time period? - [x] It increases due to more time for compounding. - [ ] It decreases due to more time for compounding. - [ ] It remains the same. - [ ] It decreases due to inflation. > **Explanation:** A longer time period increases the future value of an investment because the investment has more time to compound. ### Why is it important to use consistent units in PV and FV calculations?
- [x] To avoid errors in calculations - [ ] To increase the interest rate - [ ] To decrease the number of periods - [ ] To ensure a higher future value > **Explanation:** Using consistent units (e.g., matching the interest rate and the number of periods) is crucial to avoid errors in calculations. ### What is the effect of a longer time period on the present value of future cash flows? - [x] It decreases the present value. - [ ] It increases the present value. - [ ] It has no effect on the present value. - [ ] It increases the present value due to inflation. > **Explanation:** A longer time period decreases the present value of future cash flows, as the money is tied up for a more extended period, increasing the opportunity cost. ### Which equation is used to determine the current worth of a future sum of money? - [x] \\( PV = \frac{FV}{(1 + r)^n} \\) - [ ] \\( FV = PV \times (1 + r)^n \\) - [ ] \\( PV = FV \times (1 + r)^n \\) - [ ] \\( FV = \frac{PV}{(1 + r)^n} \\) > **Explanation:** The present value equation \\( PV = \frac{FV}{(1 + r)^n} \\) calculates the current worth of a future sum of money, considering discount rates and time periods. ### True or False: A dollar today is worth less than a dollar in the future due to its potential earning capacity. - [ ] True - [x] False > **Explanation:** False. A dollar today is worth more than a dollar in the future due to its potential earning capacity, as it can be invested to earn interest.
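The two formulas from this section translate directly into code. A minimal sketch, checked against Example 1 and Example 2 above:

```python
def future_value(pv, rate, periods):
    """FV = PV * (1 + r)^n"""
    return pv * (1 + rate) ** periods

def present_value(fv, rate, periods):
    """PV = FV / (1 + r)^n"""
    return fv / (1 + rate) ** periods

# Example 1: $1,000 at 5% for 3 years
print(round(future_value(1000, 0.05, 3), 2))
# Example 2: $1,500 received in 4 years, discounted at 6%
print(round(present_value(1500, 0.06, 4), 2))
```

Note that `rate` and `periods` must use the same unit (e.g. both annual), exactly as the "consistent units" discussion above requires.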
{"url":"https://csccourse.ca/32/3/1/","timestamp":"2024-11-05T00:29:40Z","content_type":"text/html","content_length":"117385","record_id":"<urn:uuid:e49befac-5aa7-41c6-badd-25cfb2acb5f4>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00169.warc.gz"}
Abstract Algebra Theorem 11.9 Return to Theorems, Glossary, Homework Problems. Let $(a_1,a_2,\dots,a_n)\in\prod^n_{i=1} G_i$. If $a_i$ is of finite order $r_i$ in $G_i$, then the order of $(a_1,a_2,\dots,a_n)$ in $\prod^n_{i=1} G_i$ is equal to the least common multiple of all the $r_i$. This follows by a repetition of the argument used in the proof of Theorem 11.5. For a power of $(a_1,a_2,\dots,a_n)$ to give $(e_1,e_2,\dots,e_n)$, the power must simultaneously be a multiple of $r_1$, so that this power of the first component $a_1$ will yield $e_1$, a multiple of $r_2$, so that this power of the second component $a_2$ will yield $e_2$, and so on.
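The theorem is easy to verify computationally. A sketch for the additive groups $\mathbb{Z}_{n_i}$, where the order of $a$ in $\mathbb{Z}_n$ is $n/\gcd(a,n)$, comparing the lcm formula against a brute-force count:

```python
from math import gcd
from functools import reduce

def lcm(a, b):
    return a * b // gcd(a, b)

def order_in_product(elems, mods):
    """Order of (a_1,...,a_k) in Z_{n_1} x ... x Z_{n_k}: lcm of component orders."""
    return reduce(lcm, (n // gcd(a, n) for a, n in zip(elems, mods)))

# brute-force check for (2, 3) in Z_4 x Z_9
elem, mods = (2, 3), (4, 9)
x, k = elem, 1
while x != (0, 0):
    x = tuple((xi + ai) % ni for xi, ai, ni in zip(x, elem, mods))
    k += 1

print(order_in_product(elem, mods), k)  # both print 6 = lcm(2, 3)
```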
{"url":"http://algebra2014.wikidot.com/theorem-11-9","timestamp":"2024-11-09T07:01:21Z","content_type":"application/xhtml+xml","content_length":"28142","record_id":"<urn:uuid:7233498b-3d09-4abf-96e0-e97f399ecc65>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00106.warc.gz"}
Lesson goal: Learn to draw simple computer graphics Draw something! | Next: Zoom in or out In math, you'll learn about the Cartesian Coordinate System, which is a graphical method of showing (x,y) points. The x and y axes cross at x=0 and y=0, or (0,0), which is called the origin. X-values extend in the positive direction toward the right, and negative values are toward the left. Y-values extend positive upward, and negative downward. You can practice seeing where (x,y) points fall using this lesson. The two line statements in the code box will draw the x and y axes. Try plotting some points using the pset function, as in pset(5,5) or pset(-30,20). Use a lot of pset statements to better understand where points with a given (x,y) pair will appear. Now you try. Plot some points using pset(x,y). Type your code here: See your results here:
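If you want to experiment outside the lesson's own sandbox, the same idea can be sketched in plain Python (this is not the lesson's language; `pset` here is a stand-in that plots one character on a text grid, with the origin at the center and y growing upward):

```python
W = H = 21
grid = [[" "] * W for _ in range(H)]

def pset(x, y, ch="*"):
    """Plot one point in Cartesian coordinates on the text grid."""
    col, row = x + W // 2, H // 2 - y   # origin at center, y increases upward
    if 0 <= col < W and 0 <= row < H:
        grid[row][col] = ch

for x in range(-10, 11):
    pset(x, 0, "-")     # x axis
for y in range(-10, 11):
    pset(0, y, "|")     # y axis
pset(5, 5)
pset(-3, 2)
print("\n".join("".join(row) for row in grid))
```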
{"url":"https://www.codebymath.com/index.php/welcome/lesson/cartesian-coordinates","timestamp":"2024-11-09T10:36:23Z","content_type":"text/html","content_length":"15383","record_id":"<urn:uuid:7e2c01b0-5ced-4f9a-85cc-1e111dd2dd87>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00548.warc.gz"}
What are corresponding angles Theorem? The Corresponding Angles Postulate states that, when two parallel lines are cut by a transversal , the resulting corresponding angles are congruent . The converse is also true; that is, if two lines l and m are cut by a transversal in such a way that the corresponding angles formed are congruent , then l∥m . What are corresponding angles examples? Definition: Corresponding angles are the angles which are formed in matching corners or corresponding corners with the transversal when two parallel lines are intersected by any other line (i.e. the transversal). For example, in the below-given figure, angle p and angle w are the corresponding angles. What is the meaning of corresponding angles are corresponding sides? Corresponding sides touch the same two angle pairs. When the sides are corresponding it means to go from one triangle to another you can multiply each side by the same number. In the diagram of similar triangles the corresponding sides are the same color. What are corresponding angles class 8? Answer: Corresponding angles : If the arms on the transversal of a pair of angles are in the same direction and the other arms are on the same side of transversal, then it is called a pair of corresponding angles. How do you know if two angles are corresponding? Corresponding angles are equal if the transversal intersects two parallel lines. If the transversal intersects non-parallel lines, the corresponding angles formed are not congruent and are not related in any way. What are the properties of corresponding angles? Corresponding Angles If two parallel lines are cut by a transversal, then the pairs of corresponding angles are congruent. Alternate Exterior Angles Converse If two lines are cut by a transversal and the alternate exterior angles are congruent, the lines are parallel. Are corresponding angles always equal? A pair of corresponding angles lie on the same side of the transversal. 
The corresponding pair of angles comprises one exterior angle and another interior angle. Not all corresponding angles are equal. Corresponding angles are equal if the transversal intersects two parallel lines. Where do corresponding angles lie? Corresponding angles lie on the same side of a transversal and on the same side of two lines that are intersected by the transversal. Postulate 3.1 Corresponding Angles (p. 180) If two parallel lines are cut by a transversal, then each pair of corresponding angles is congruent. How do you identify corresponding angles? Corresponding angles are the pairs of angles on the same side of the transversal and on corresponding sides of the two other lines. These angles are equal in degree measure when the two lines intersected by the transversal are parallel. It may help to draw the letter “F” (forwards and backwards) in order to help identify corresponding angles. What is the formula for corresponding angles? Corresponding Angles Formula – Trigonometric Angles. Congruent corresponding angles are: Angle of a = Angle of g. Angle of b = Angle of h. Angle of c = Angle of e. Angle of d = Angle of f. What is a real life example of a corresponding angle? Examples in Real Life. Windows have horizontal and vertical grills, which make multiple squares. Each vertex forms corresponding angles; A bridge standing on a pillar, where the pillars are connected to each other in such a way that corresponding angles are equal; The design of the railway track where the corresponding angles are kept equal What is an example of a corresponding angle? Corresponding Angles When two lines are crossed by another line (which is called the Transversal), the angles in matching corners are called corresponding angles. Example: a and e are corresponding angles. When the two lines are parallel Corresponding Angles are equal.
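A numeric way to see why parallel lines give congruent corresponding angles: the angle at each intersection depends only on the direction vectors of the line and the transversal, so lines with the same direction (parallel) produce the same angle, while a tilted line does not. A small sketch with made-up direction vectors:

```python
from math import atan2, degrees

def angle_with_transversal(line_dir, trans_dir):
    """Angle (degrees, mod 180) a transversal makes with a line of given direction."""
    a = degrees(atan2(trans_dir[1], trans_dir[0]) - atan2(line_dir[1], line_dir[0]))
    return a % 180

transversal = (2, 3)
a_line1 = angle_with_transversal((1, 0), transversal)    # first parallel line
a_line2 = angle_with_transversal((1, 0), transversal)    # second line, same direction
a_tilted = angle_with_transversal((1, 0.2), transversal) # a non-parallel line

print(a_line1, a_line2, a_tilted)  # first two are equal; the third differs
```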
{"url":"https://hollows.info/what-are-corresponding-angles-theorem/","timestamp":"2024-11-05T14:02:13Z","content_type":"text/html","content_length":"43013","record_id":"<urn:uuid:98f97701-9afc-4f0e-b9ce-a73645115b94>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00723.warc.gz"}
How do you convert 0.2 into a fraction? [Solved] | Brighterly Questions How do you convert 0.2 into a fraction? Answer: 0.2 can be converted to 1/5 as a fraction. Decimals to Fractions To convert a decimal to a fraction, we need to determine the place value of the decimal. 0.2 is in the tenths place, so it can be written as 2/10, which simplifies to 1/5. FAQ on Decimals to Fractions What is 0.8 as a fraction? 0.8 as a fraction is 4/5. What is 0.6 as a fraction? 0.6 as a fraction is 3/5. What is 0.15 as a fraction? 0.15 as a fraction is 3/20.
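Python's standard `fractions` module does exactly this place-value conversion and reduces the result to lowest terms, so it can be used to check each answer above:

```python
from fractions import Fraction

# constructing from a decimal string uses the decimal's place value,
# e.g. "0.2" -> 2/10 -> 1/5 in lowest terms
print(Fraction("0.2"))   # -> 1/5
print(Fraction("0.8"))   # -> 4/5
print(Fraction("0.6"))   # -> 3/5
print(Fraction("0.15"))  # -> 3/20
```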
{"url":"https://brighterly.com/questions/how-do-you-convert-0-2-into-fraction/","timestamp":"2024-11-02T11:38:47Z","content_type":"text/html","content_length":"69396","record_id":"<urn:uuid:cbcb98ed-8a3f-4dcb-9e5e-f003b297b85b>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00295.warc.gz"}
Math Principles Category: Algebra, Statistics "Published in Newark, California, USA" A passenger train consists of 3 baggage cars, 5 day coaches, and 2 parlor cars. In how many ways can the train be arranged if the 3 baggage cars must come up front? The given word problem is a permutation problem because it involves the number of ways of arranging objects or things. This permutation type is different and it is called a Distinguishable Permutation. If a set of n objects consists of k different kinds of objects with n[1] objects of the first kind, n[2] objects of the second kind, n[3] objects of the third kind, and so on, where n[1] + n[2] + ......... + n[k] = n, then the number of distinguishable permutations of these objects is n!/(n[1]! n[2]! ··· n[k]!). Now, in the given problem, if n = 10 train cars in total n[1] = 3 baggage cars n[2] = 5 day coaches n[3] = 2 parlor cars then, the number of ways in arranging the 10 train cars will be equal to 10!/(3! 5! 2!) = 2,520. If the 3 baggage cars must come up front, then the number of ways of arranging the remaining cars will be equal to 7!/(5! 2!) = 21. You have to multiply the previous ways by 3! because the three baggage cars themselves can be arranged in different ways at the front. Therefore, the final answer is 3! × 21 = 126 ways. Category: Algebra, Statistics "Published in Suisun City, California, USA" A man bought three vanilla ice cream cones, two chocolate cones, four strawberry cones, and five butterscotch cones for his 14 children. In how many ways can he distribute the cones among his 14 children? The given word problem above is about permutations but it is a different type which is called a Distinguishable Permutation. If a set of n objects consists of k different kinds of objects with n[1] objects of the first kind, n[2] objects of the second kind, n[3] objects of the third kind, and so on, where n[1] + n[2] + .........
+ n[k] = n, then the number of distinguishable permutations of these objects is n!/(n[1]! n[2]! ··· n[k]!). Now, let's go back to the given problem, if n = 14 children, n[1] = 3 vanilla ice cream cones, n[2] = 2 chocolate cones, n[3] = 4 strawberry cones, and n[4] = 5 butterscotch cones, then the number of ways to distribute the cones among his children is 14!/(3! 2! 4! 5!) = 2,522,520. Category: Algebra, Statistics "Published in Newark, California, USA" If polygons are labeled by placing letters at their vertices, how many ways are there of labeling (a) a triangle, (b) a quadrilateral, (c) a hexagon with the first 10 letters of the alphabet? The given word problem above is about permutations. Permutation is an arrangement of a number of objects in a definite order. To "permute" a set of objects means to arrange them in a definite order. The number of permutations of n things taken r at a time is given by the formula P(n, r) = n!/(n - r)!, where n! (read as n factorial) is equal to n(n - 1)(n - 2)......3∙2∙1. Take note that the values of n and r must be zero and positive numbers only. 0! is equal to 1. Now, let's go back to the given problem and solve for the permutations of the given polygons. (a) For a triangle, the number of ways to label the vertices with the first 10 letters of the alphabet is P(10, 3) = 10!/7! = 720. (b) For a quadrilateral, the number of ways to label the vertices with the first 10 letters of the alphabet is P(10, 4) = 10!/6! = 5,040. (c) For a hexagon, the number of ways to label the vertices with the first 10 letters of the alphabet is P(10, 6) = 10!/4! = 151,200.
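Both formulas used in these problems are easy to check with Python's `math` module (`math.perm(n, r)` computes n!/(n-r)! directly; the distinguishable-permutation count is built from factorials):

```python
from math import factorial, perm

def distinguishable(*counts):
    """n! / (n1! n2! ... nk!) for n = sum of the counts."""
    out = factorial(sum(counts))
    for c in counts:
        out //= factorial(c)
    return out

print(distinguishable(3, 5, 2))       # train cars: 10!/(3!5!2!) = 2520
print(distinguishable(3, 2, 4, 5))    # ice cream cones: 14!/(3!2!4!5!) = 2522520
print(perm(10, 3), perm(10, 4), perm(10, 6))  # vertex labelings: 720 5040 151200
```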
{"url":"https://www.math-principles.com/search/label/Statistics","timestamp":"2024-11-08T10:49:03Z","content_type":"application/xhtml+xml","content_length":"121867","record_id":"<urn:uuid:61a85883-27e3-4452-807a-741a0a33b2eb>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00602.warc.gz"}
[PDF] Introduction to Quantum Mechanics - PDF Free Download Book details ● Author : David J. Griffiths ● Pages : 480 pages ● Publisher : Pearson Prentice Hall 2004-04-10 ● Language : English ● ISBN-10 : 0131118927 ● ISBN-13 : 9780131118928 Book Synopsis This book first teaches learners how to do quantum mechanics, and then provides them with a more insightful discussion of what it means. Fundamental principles are covered, quantum theory presented, and special techniques developed for attacking realistic problems. The book's two-part coverage organizes topics under basic theory, and assembles an arsenal of approximation schemes with illustrative applications. For physicists and engineers.
{"url":"https://pdffox.com/pdf-introduction-to-quantum-mechanics-pdf-free.html","timestamp":"2024-11-09T01:32:34Z","content_type":"text/html","content_length":"29284","record_id":"<urn:uuid:2d1ce4de-420e-47ff-bab7-a9a7f598d714>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00725.warc.gz"}
Assignment 4 – forecasting methods | BUS 354 | University of Idaho 1. (6 points) Moving Average Forecasting – In Excel, create three different moving average forecasts with periods of 2, 3, and 5 (k = 2, 3, 5). Compute the mean squared error (MSE) from each of these forecasts. – Create a simple table that reports the MSEs and the forecasted stock price for May 2021 for each of the variations (k = 2, 3, 5). Discuss what the results tell us about the accuracy of these forecasts, and which forecast you have the most confidence in. 2. (6 points) Exponential Smoothing – In Excel, create two different exponential smoothing forecasts, one with α = 0.25 and one with α = 0.75. Compute the MSE from each of these forecasts. – Create a simple table that reports the MSEs and the forecasted stock price for May 2021 for each of the variations (α = 0.25, 0.75). Discuss what the results tell us about the accuracy of these forecasts, and which forecast you have more confidence in. – Compare the moving average and exponential smoothing results. 3. (6 points) Forecasting with a Linear Trend – Create a trend variable (t) that has a value of 1 in January 2016, 2 in February 2016, and so on. – Run a regression with t as your independent variable and Netflix’s stock price as the dependent variable. – Interpret the coefficient on the trend variable. – Compare the accuracy of this linear trend model with the models from questions 1 and 2. – What does the trend model forecast Netflix’s stock price will be at the end of 2021 (December 2021)? 4. (5 points) Graphing your Trend Forecast – Using the results from your linear trend model in question 3, create a forecast through the end of 2022. – Create a line graph that shows the actual data through April 2021, and then the forecast through 2022. Make sure the forecast portion is differentiated on the graph (by color and/or dashed line). – Discuss how you feel about the accuracy of the linear trend model based on the graph. 5. 
(7 points) Seasonality in Netflix Stock Price? – Create dummy variables for Quarters 2, 3, and 4 in the year. Quarter 2 is April to June, 3 is July to September, and 4 is October to December. – Run a regression with the dummies for quarters 2, 3, and 4 and the linear trend (t) variable that we used in questions 3 and 4. – Interpret the coefficients on the quarter dummy variables. Is there evidence that there is seasonality in Netflix’s stock price? – What does this trend with the seasonality model forecast Netflix’s stock price will be at the end of 2021 (December 2021)? – Overall, was it a good idea to include the seasonal component in our forecasting model (as compared to the model with just the trend in Question 3)? 6. (6 points) Autoregression – Run a regression where you use a 1-period lag of the Netflix stock price as the independent variable. – What does this model predict the price will be at the end of May 2021? – Discuss the results of this regression model. How does the forecast accuracy of this model compare to the previous models? 7. (4 points) Summarize your work to this point – Discuss the various methods used to try and forecast the price of Netflix stock going forward. Which do you place more trust in, which do you disregard? – Based on your analysis so far, and anything else you want to incorporate, if you are forced to forecast one number for the price of Netflix stock at the end of May 2021, what would that number be? Explain briefly. Note: for the rest of the assignment, you should cut the sample to the period of January 2019 through April 2021 (28 observations) 8. (8 points) Using the data from January 2019 to April 2021 (28 observations), examine the fit of different trend models. – First, run a simple linear trend model in which t = 1 in January 2019, 2 in February 2019, and so on. 
– Using the prediction equation from the linear trend model, create a graph that shows the actual and predicted values (trend line) for the period of January 2019 to April 2021. – Next, run a cubic trend model. This model should include the linear trend variable (t), as well as the squared and cubed versions of t. – Using the prediction equation from the cubic trend model, create a graph that shows the actual and predicted values (trend line) for the period of January 2019 to April 2021. – Based on the statistical results and the graphs you created, discuss the fit of these two models. Which does a better job of matching the actual trends in the data? 9. (7 points) Forecasting with our different trend models – Use the prediction equations for the linear and cubic trend models to forecast each through the end of 2022. On one chart, show the actual Netflix stock price from January 2019 through April 2021, then the forecast paths for each of the trend models. (NOTE: this graph should not have the predicted trend lines from January 2019 to April 2021 like the graphs in question 8 had. This should be a “forecast” graph as we did in question 4). – What does each model predict the price will be at the end of 2021 (December 2021)? – Between the accuracy measures and your graphs in questions 8 and 9, which of the two models do you feel more comfortable using to forecast the price of Netflix stock going forward? Explain.
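The moving-average and exponential-smoothing calculations in questions 1 and 2 can be prototyped outside Excel. The sketch below is my own illustration with a made-up price series (it is not Netflix data, and the function names are mine): the forecast for each month uses only earlier months, and the last element of each list is the out-of-sample forecast for the next month.

```python
# Illustrative sketch of the forecasting methods in questions 1 and 2.
# The price series below is made up for demonstration; it is NOT Netflix data.

def moving_average_forecasts(prices, k):
    """k-period moving average: the forecast for period t is the mean of the
    k prior prices. The final element forecasts the first unseen period."""
    return [sum(prices[t - k:t]) / k for t in range(k, len(prices) + 1)]

def exponential_smoothing_forecasts(prices, alpha):
    """F(t+1) = alpha * y(t) + (1 - alpha) * F(t), seeded with F(1) = y(0).
    The final element forecasts the first unseen period."""
    forecasts = [prices[0]]
    for y in prices[1:]:
        forecasts.append(alpha * y + (1 - alpha) * forecasts[-1])
    return forecasts

def mse(actual, forecasts):
    """Mean squared error over the in-sample forecast/actual pairs."""
    errors = [(a - f) ** 2 for a, f in zip(actual, forecasts)]
    return sum(errors) / len(errors)

prices = [500.0, 510.0, 505.0, 520.0, 530.0, 525.0]  # hypothetical months

ma2 = moving_average_forecasts(prices, 2)
print("MA(2) next-month forecast:", ma2[-1])           # 527.5
print("MA(2) in-sample MSE:", mse(prices[2:], ma2[:-1]))

es25 = exponential_smoothing_forecasts(prices, 0.25)
print("ES(0.25) next-month forecast:", es25[-1])
print("ES(0.25) in-sample MSE:", mse(prices[1:], es25[:-1]))
```

Comparing the two MSEs this way is exactly the accuracy comparison the assignment asks for; a lower in-sample MSE is the usual reason to place more confidence in one forecast over another.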
Anatomy Drawing Lessons Drawing Of Maple Leaf - Learn how to draw maple leaves with easy step-by-step drawing lessons for kids, then get a free printable coloring page of the finished drawing. Unlike other leaves, a maple leaf drawing requires a lot more detail because of its sharp and jagged edges. The maple leaf is one of the most iconic and popular leaves on the planet, and it is a national symbol of Canada. Maple species differ in appearance, properties, and range of distribution, but maple is a tree valued primarily for its wood. To draw a maple leaf, start by drawing a square on a sheet of paper. Then, draw a vertical line through the middle of the square to make the stem of your leaf. Next, draw 6 V shapes along the stem, which will be the ribs of your maple leaf. Using straight lines, draw the central veins, and using a curve, mark the shape of the stalk. Set the object limits on the paper. You can print out the base construction lines and start drawing on tracing paper, or you can draw the grid layout yourself. If you have a difficult shape to draw, first you must try to see how it is put together. By following the simple steps, you too can easily draw a perfect maple leaf. Then color it in realistically with oil pastels, finish it off with a fun watercolor technique called lazy stippling, or draw a maple leaf illustration using alcohol markers. If you need large maple leaf templates for craft projects or medium maple leaf patterns for garland, they're here, along with small outlines to print out. The tutorials cover a realistic maple leaf in 12 easy steps, a four-step version with simple illustrated examples and quick explanations, and a quick three-step version, perfect for beginners and experienced artists alike. See the playlist for more Thanksgiving drawings.
SEBA Class 10 Mathematics Chapter 6 Triangles - Dev Library SEBA Class 10 Mathematics Chapter 6 Triangles Solutions: SEBA Class 10 Maths textbook notes in English medium. Notes for each chapter are provided in the list so that you can easily browse through the different chapters of the Assam Board SEBA Class 10 Mathematics notes and select the one you need. You can also read the SCERT book online in these sections. The solutions are prepared by expert teachers as per SCERT (CBSE) book guidelines and are part of SCERT All Subject Solutions. Here we have given the SEBA Class 10 Mathematics Chapter 6 Triangles solutions, which you can practice here. Chapter – 6 1. Fill in the blanks using the correct word given in the brackets: (i) All circles are __________. (congruent, similar) Ans: similar (ii) All squares are __________. (similar, congruent) Ans: similar (iii) All __________ triangles are similar. (isosceles, equilateral) Ans: equilateral (iv) Two polygons of the same number of sides are similar, if (a) their corresponding angles are __________ and Ans: equal (b) their corresponding sides are __________. (equal, proportional) Ans: proportional 2. Give two different examples of pairs of (i) similar figures. Ans: Two equilateral triangles with sides 1 cm and 2 cm. (ii) Non-similar figures. 3. State whether the following quadrilaterals are similar or not: Quadrilaterals PQRS and ABCD are not similar, as their corresponding sides are proportional, i.e. 1:2, but their corresponding angles are not equal. 1. In figure 6.17 (i) and (ii), DE || BC. Find EC in (i) and AD in (ii). Let EC = x cm. It is given that DE || BC. By using the basic proportionality theorem, we obtain 2. E and F are points on the sides PQ and PR respectively of a ∆PQR.
For each of the following cases, state whether EF || QR. (i) PE = 3.9 cm, EQ = 3 cm, PF = 3.6 cm and FR = 2.4 cm In the given figure, LM || CB By using the basic proportionality theorem, we obtain 5. In the following figure, DE || OQ and DF || OR, show that EF || QR. 6. In the following figure, A, B and C are points on OP, OQ and OR respectively such that AB || PQ and AC || PR. Show that BC || QR. 7. Using the basic proportionality theorem, prove that a line drawn through the mid-point of one side of a triangle parallel to another side bisects the third side. (Recall that you have proved it in Class IX). 8. Using the converse of the basic proportionality theorem, prove that the line joining the mid-points of any two sides of a triangle is parallel to the third side. (Recall that you have done it in Class IX). Consider the given figure in which PQ is a line segment joining the mid-points P and Q of sides AB and AC respectively, i.e., AP = PB and AQ = QC. It can be observed that 9. ABCD is a trapezium in which AB || DC and its diagonals intersect each other at the point O. Show that AO/BO = CO/DO. Ans: Let us consider the following figure for the given question. 1. State which pairs of triangles in the following figure are similar? Write the similarity criterion used by you for answering the question and also write the pairs of similar triangles in the symbolic form: Ans: (i) ∠A = ∠P = 60° ∠B = ∠Q = 80° ∠C = ∠R = 40° Therefore, ∆ABC ∼ ∆PQR [By AAA similarity criterion] Ans: Do yourself. Ans: The given triangles are not similar as the corresponding sides are not proportional. Ans: The given triangles are not similar as the corresponding sides are not proportional. Ans: The given triangles are not similar as the corresponding sides are not proportional. In ∆DEF, ∠D + ∠E + ∠F = 180º (Sum of the measures of the angles of a triangle is 180º.) 70º + 80º + ∠F = 180º ∠F = 30º Similarly, in ∆PQR, ∠P + ∠Q + ∠R = 180º (Sum of the measures of the angles of a triangle is 180º.)
∠P + 80º + 30º = 180º ∠P = 70º In ∆DEF and ∆PQR, ∠D = ∠P (Each 70°) ∠E = ∠Q (Each 80°) ∠F = ∠R (Each 30°) ∴ ∆DEF ∼ ∆PQR [By AAA similarity criterion] 2. In the following figure, ∆ODC ∼ ∆OBA, ∠BOC = 125° and ∠CDO = 70°. Find ∠DOC, ∠DCO and ∠OAB. Ans: Do yourself. 3. Diagonals AC and BD of a trapezium ABCD with AB || DC intersect each other at the point O. Using a similarity criterion for two triangles, show that AO/OC = OB/OD. In ∆DOC and ∆BOA, ∠CDO = ∠ABO [Alternate interior angles as AB || CD] ∠DCO = ∠BAO [Alternate interior angles as AB || CD] ∠DOC = ∠BOA [Vertically opposite angles] ∴ ∆DOC ∼ ∆BOA [AAA similarity criterion] 5. S and T are points on sides PR and QR of ∆PQR such that ∠P = ∠RTS. Show that ∆RPQ ∼ ∆RTS. In ∆RPQ and ∆RST, ∠RTS = ∠QPS (Given) ∠R = ∠R (Common angle) ∴ ∆RPQ ∼ ∆RTS (By AA similarity criterion) 6. In the following figure, if ∆ABE ≅ ∆ACD, show that ∆ADE ∼ ∆ABC. It is given that ∆ABE ≅ ∆ACD. ∴ AB = AC [By CPCT] …………………(1) And, AD = AE [By CPCT] …………….(2) In ∆ADE and ∆ABC, 7. In the following figure, altitudes AD and CE of ∆ABC intersect each other at the point P. Show that: (i) ∆AEP ∼ ∆CDP In ∆AEP and ∆CDP, ∠AEP = ∠CDP (Each 90°) ∠APE = ∠CPD (Vertically opposite angles) Hence, by using AA similarity criterion, ∆AEP ∼ ∆CDP (ii) ∆ABD ∼ ∆CBE In ∆ABD and ∆CBE, ∠ADB = ∠CEB (Each 90°) ∠ABD = ∠CBE (Common) Hence, by using AA similarity criterion, ∆ABD ∼ ∆CBE (iii) ∆AEP ∼ ∆ADB In ∆AEP and ∆ADB, ∠AEP = ∠ADB (Each 90°) ∠PAE = ∠BAD (Common angle) Hence, by using AA similarity criterion, ∆AEP ∼ ∆ADB (iv) ∆PDC ∼ ∆BEC In ∆PDC and ∆BEC, ∠PDC = ∠BEC (Each 90°) ∠PCD = ∠BCE (Common angle) Hence, by using AA similarity criterion, ∆PDC ∼ ∆BEC 8. E is a point on the side AD produced of a parallelogram ABCD and BE intersects CD at F. Show that ∆ABE ∼ ∆CFB. In ∆ABE and ∆CFB, ∠A = ∠C (Opposite angles of a parallelogram) ∠AEB = ∠CBF (Alternate interior angles as AE || BC) ∴ ∆ABE ∼ ∆CFB (By AA similarity criterion) 9.
In the following figure, ABC and AMP are two right triangles, right angled at B and M respectively. Prove that: In ∆ABC and ∆AMP, ∠ABC = ∠AMP (Each 90°) ∠A = ∠A (Common) ∴ ∆ABC ∼ ∆AMP (By AA similarity criterion) 10. CD and GH are respectively the bisectors of ∠ACB and ∠EGF such that D and H lie on sides AB and FE of ∆ABC and ∆EFG respectively. If ∆ABC ∼ ∆FEG, show that: 11. In the following figure, E is a point on side CB produced of an isosceles triangle ABC with AB = AC. If AD ⊥ BC and EF ⊥ AC, prove that ∆ABD ∼ ∆ECF. Ans: It is given that ABC is an isosceles triangle. ∴ AB = AC ⇒ ∠ABD = ∠ECF In ∆ABD and ∆ECF, ∠ADB = ∠EFC (Each 90°) ∠ABD = ∠ECF (Proved above) ∴ ∆ABD ∼ ∆ECF (By using AA similarity criterion) 12. Sides AB and BC and median AD of a triangle ABC are respectively proportional to sides PQ and QR and median PM of ∆PQR (see the given figure). Show that ∆ABC ∼ ∆PQR. 13. D is a point on the side BC of a triangle ABC such that ∠ADC = ∠BAC. Show that In ∆ADC and ∆BAC, ∠ADC = ∠BAC (Given) ∠ACD = ∠BCA (Common angle) ∴ ∆ADC ∼ ∆BAC (By AA similarity criterion) We know that corresponding sides of similar triangles are in proportion. 14. Sides AB and AC and median AD of a triangle ABC are respectively proportional to sides PQ and PR and median PM of another triangle PQR. Show that Ans: Do yourself. 15. A vertical pole of length 6 m casts a shadow 4 m long on the ground and at the same time a tower casts a shadow 28 m long. Find the height of the tower. Let AB and CD be a tower and a pole respectively, and let BE and DF be the shadows of AB and CD respectively. At the same time, the light rays from the sun will fall on the tower and the pole at the same angle. Therefore, ∠DCF = ∠BAE And, ∠DFC = ∠BEA ∠CDF = ∠ABE (Tower and pole are vertical to the ground) ∴ ∆ABE ∼ ∆CDF (AAA similarity criterion) Therefore, the height of the tower will be 42 metres. 16. If AD and PM are medians of triangles ABC and PQR, respectively where 2.
Diagonals of a trapezium ABCD with AB || DC intersect each other at the point O. If AB = 2CD, find the ratio of the areas of triangles AOB and COD. Since AB || CD, ∴ ∠OAB = ∠OCD and ∠OBA = ∠ODC (Alternate interior angles) In ∆AOB and ∆COD, ∠AOB = ∠COD (Vertically opposite angles) ∠OAB = ∠OCD (Alternate interior angles) ∠OBA = ∠ODC (Alternate interior angles) ∴ ∆AOB ∼ ∆COD (By AAA similarity criterion) 3. In the following figure, ABC and DBC are two triangles on the same base BC. If AD Ans: Let us draw two perpendiculars AP and DM on line BC. In ∆APO and ∆DMO, ∠APO = ∠DMO (Each = 90°) ∠AOP = ∠DOM (Vertically opposite angles) ∴ ∆APO ∼ ∆DMO (By AA similarity criterion) 4. If the areas of two similar triangles are equal, prove that they are congruent. Ans: Let us assume two similar triangles as ∆ABC ∼ ∆PQR. 5. D, E and F are respectively the mid-points of sides AB, BC and CA of ∆ABC. Find the ratio of the areas of ∆DEF and ∆ABC. D and E are the mid-points of sides of ∆ABC. 6. Prove that the ratio of the areas of two similar triangles is equal to the square of the ratio of their corresponding medians. 7. Prove that the area of an equilateral triangle described on one side of a square is equal to half the area of the equilateral triangle described on one of its diagonals. Let ABCD be a square of side a. Therefore, its diagonal = √2a. The two desired equilateral triangles are formed as ∆ABE and ∆DBF. Side of the equilateral triangle ∆ABE, described on one of the square's sides = a. Side of the equilateral triangle ∆DBF, described on one of its diagonals = √2a. We know that equilateral triangles have all their angles equal to 60º and all their sides of the same length. Therefore, all equilateral triangles are similar to each other. Hence, the ratio between the areas of these triangles will be equal to the square of the ratio between the sides of these triangles. 8. ABC and BDE are two equilateral triangles such that D is the mid-point of BC.
Ratio of the areas of triangles ABC and BDE is (a) 2 : 1 (b) 1 : 2 (c) 4 : 1 (d) 1 : 4 We know that equilateral triangles have all their angles equal to 60º and all their sides of the same length. Therefore, all equilateral triangles are similar to each other. Hence, the ratio between the areas of these triangles will be equal to the square of the ratio between the sides of these triangles. Let the side of ∆ABC = x; since D is the mid-point of BC, the side of ∆BDE is x/2, so the ratio of areas is (x ÷ x/2)² = 4 : 1. Hence, the correct answer is (c). Sides of two similar triangles are in the ratio 4 : 9. Areas of these triangles are in the ratio (a) 2 : 3 (b) 4 : 9 (c) 81 : 16 (d) 16 : 81 Ans: If two triangles are similar to each other, then the ratio of the areas of these triangles will be equal to the square of the ratio of the corresponding sides of these triangles. It is given that the sides are in the ratio 4 : 9. Therefore, the ratio between the areas of these triangles = (4/9)² = 16 : 81. Hence, the correct answer is (d). 1. Sides of triangles are given below. Determine which of them are right triangles. In case of a right triangle, write the length of its hypotenuse. (i) 7 cm, 24 cm, 25 cm. Ans: (7)² + (24)² = 49 + 576 = 625 = (25)². Therefore, the given sides 7 cm, 24 cm, 25 cm make a right triangle; the hypotenuse is 25 cm. (ii) 3 cm, 8 cm, 6 cm. Ans: (3)² + (6)² ≠ (8)² (not a right triangle). (iii) 50 cm, 80 cm, 100 cm. Ans: (50)² + (80)² ≠ (100)². Not a right triangle. (iv) 13 cm, 12 cm, 5 cm. Ans: (12)² + (5)² = 144 + 25 = 169 = (13)². Therefore, the given sides 13 cm, 12 cm, 5 cm make a right triangle; the hypotenuse is 13 cm. 3. In fig., a triangle right angled at A with AC ⊥ BD. Show that: 4. ABC is an isosceles triangle right angled at C. Prove that AB² = 2AC². 5. ABC is an isosceles triangle with AC = BC. If AB² = 2AC², prove that ABC is a right triangle. 6. ABC is an equilateral triangle of side 2a. Find each of its altitudes. Ans: In equilateral ∆ABC each side is 2a. 7. Prove that the sum of the squares of the sides of a rhombus is equal to the sum of the squares of its diagonals. Ans: ABCD is a rhombus in which AB = BC = CD = DA = a (say). Its diagonals AC and BD are right bisectors of each other at O. 8.
In fig, O is a point in the interior of a triangle ABC, with OD ⊥ BC, OE ⊥ CA and OF ⊥ AB. Show that: (i) OA² + OB² + OC² − OD² − OE² − OF² = AF² + BD² + CE² 9. A ladder 10 m long reaches a window 8 m above the ground. Find the distance of the foot of the ladder from the base of the wall. 10. A guy wire attached to a vertical pole of height 18 m is 24 m long and has a stake attached to the other end. How far from the base of the pole should the stake be driven so that the wire will be taut? Ans: Let x metres be the distance of the stake from the base of the pole. 11. An aeroplane leaves an airport and flies due north at a speed of 1000 km per hour. At the same time, another aeroplane leaves the same airport and flies due west at a speed of 1200 km per hour. How far apart will be the two planes? 12. Two poles of heights 6 m and 11 m stand on a plane ground. If the distance between the feet of the poles is 12 m, find the distance between the tops. Ans: The difference of the heights is 11 − 6 = 5 m, so x² = (12)² + 5² = 144 + 25 = 169 ⇒ x = 13. ∴ Distance between the tops = 13 m. 13. D and E are points on the sides CA and CB respectively of a triangle ABC right angled at C. Prove that AE² + BD² = AB² + DE². 14. The perpendicular from A on side BC of a ∆ABC intersects BC at D such that DB = 3CD. Prove that 2AB² = 2AC² + BC². Ans: BD = 3CD 15. In an equilateral triangle ABC, D is a point on side BC such that BD = (1/3)BC. Prove that: 16. In an equilateral triangle, prove that three times the square of one side is equal to four times the square of one of its altitudes. Ans: ∆ABC is an equilateral triangle of side a (say). AD ⊥ BC, AD = x (say). 17. Tick the correct answer and justify: In ∆ABC, AB = 6√3 cm, AC = 12 cm and BC = 6 cm. The angle B is: (a) 120° (b) 60° (c) 90° (d) 45° Ans: Correct is (c). Justification: Here AC² = AB² + BC² ⇒ ∆ABC is right angled at B ⇒ ∠B = 90°. Also BC = 6 cm and AB = 6√3 cm ⇒ BC < AB ⇒ ∠BAC cannot be more than 45° ⇒ ∠A = 30°. AC² = (12)² = 144; BC² + AB² = 6² + (6√3)² = 36 + 108 = 144. 1. In fig, PS is the bisector of ∠QPR of ∆PQR. Prove that QS/SR = PQ/PR. 2.
In fig, D is a point on hypotenuse AC of ∆ABC, such that BD ⊥ AC, DM ⊥ BC and DN ⊥ AB. Prove that: 3. In fig., ABC is a triangle in which ∠ABC > 90° and AD ⊥ CB produced. Prove that AC² = AB² + BC² + 2BC·BD. Ans: An obtuse triangle, with obtuse angle at B and AD ⊥ CB produced. To prove: AC² = AB² + BC² + 2BC·BD. Proof: Since ∆ADB is a triangle right-angled at D, by Pythagoras' theorem, AB² = AD² + BD² → (1) Again, ∆ADC is a right triangle with right angle at D. 4. In fig., ABC is a triangle in which ∠ABC < 90° and AD ⊥ BC. Prove that AC² = AB² + BC² − 2BC·BD. Ans: Given ∆ABC in which ∠ABC < 90° and AD ⊥ BC. To prove: AC² = AB² + BC² − 2BC·BD. Proof: Since ∆ADB is a right triangle with right angle at D, by Pythagoras' theorem, AB² = AD² + BD² … (i) Again, ∆ADC is a right triangle with ∠D = 90°. 5. In fig., AD is a median of a triangle ABC and AM ⊥ BC. Prove that: 6. Prove that the sum of the squares of the diagonals of a parallelogram is equal to the sum of the squares of its sides. Ans: Let ABCD be a parallelogram with ∠B as obtuse and ∠A as acute. 7. In fig., two chords AB and CD intersect each other at the point P. Prove: (i) ∆APC ∼ ∆DPB (ii) AP·PB = CP·DP. Ans: In ∆APC and ∆DPB [Angles in the same segment] 8. In fig., two chords AB and CD of a circle intersect each other at the point P (when produced) outside the circle. Prove that: 9. In fig., D is a point on side BC of ∆ABC such that BD/CD = AB/AC. Prove that AD is the bisector of ∠BAC. Ans: In a right-angled ∆ABC, AB = 1.8 m and BC = 2.4 m, so AC² = (1.8)² + (2.4)² = 3.24 + 5.76 = 9 ⇒ AC = √9 = 3 m. Hence the original length of the string AC (when taut) is 3 m. (ii) When Nazima pulls in the string at the rate of 5 cm/sec, the length of the string decreases by 5 × 12 = 60 cm = 0.6 m in 12 seconds. Remaining length of the string (AD) after the 12-second pull = (3 − 0.60) = 2.40 m ⇒ AD = 2.4 m. AD² = AB² + DB² ⇒ DB² = AD² − AB² = (2.40)² − (1.80)² = 2.52 ⇒ DB = √2.52 ≈ 1.59 m. Horizontal distance (DE) of the fly from Nazima = (1.59 + 1.2) m = 2.79 m.
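Exercise question 1 above (deciding whether side triples such as 7, 24, 25 form right triangles) is a direct application of the converse of Pythagoras' theorem, and the check is easy to automate. The function below is my own illustration, not part of the textbook solutions.

```python
# Converse of Pythagoras' theorem: (a, b, c) is a right triangle exactly when
# the square of the longest side equals the sum of the squares of the others.

def is_right_triangle(sides):
    a, b, c = sorted(sides)      # c is the hypotenuse candidate
    return a * a + b * b == c * c

# The four triples from exercise question 1:
for triple in [(7, 24, 25), (3, 8, 6), (50, 80, 100), (13, 12, 5)]:
    verdict = "right triangle" if is_right_triangle(triple) else "not a right triangle"
    print(triple, verdict)
```

Sorting first means the sides can be given in any order, as in part (iv) of the exercise where the hypotenuse (13 cm) is listed first.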
Unit Price Calculator + Online Solver With Free Steps The Unit Price Calculator finds the product price per unit weight (USD/gram) given the price and weight. Then, it converts the result to HUF and several other currencies. What Is the Unit Price Calculator? The Unit Price Calculator is an online tool that takes a product’s price and weight as input to find the price per unit weight in US Dollars. Finally, it converts this value into Hungarian Forints, your local currency, and several other currencies. The calculator interface consists of two text boxes right-labeled “USD” (for United States Dollar) and “g” (for grams). In the former, you enter the price in USD for a specific weight of the product in question. In the latter, you enter this weight in grams. The calculator uses the above information to find the price per unit weight in USD. Then, it accesses the current exchange rate for USD and other currencies to get the equivalent in the other How To Use the Unit Price Calculator? You can use the Unit Price Calculator to find the cost of a certain amount (weight) of a product in HUF, your local currency, and optionally others if you know the cost of that amount of product in Step 1 Enter the product’s price in USD into the first text box labeled “USD.” Step 2 Enter the product’s weight in grams (g) into the second text box labeled “g.” Step 3 Press the Submit button to get the results. The results show up in a new window and are quite extensive. The primary sections are “Input Interpretation” and “Result.” The former shows the input as interpreted by the calculator in the form of a statement. You can use it to verify that the calculator is working as intended. The latter shows the price per unit weight in HUF. Additional result sections include: 1. 
Local Currency Conversion for $x: The calculator checks the location of your IP address to determine your local currency and finds the equivalent price per unit weight in it by using the relevant exchange rate (USD to local) from the database. 2. Additional Currency Conversions for $x: The equivalent price per unit weight in several other currencies. Finally, there is the Exchange History section. It shows a graph indicating the history of the exchange rates for USD to HUF over a selected time. You can select between a maximum range of 10 years and a minimum of 1 month. Additionally, the section provides the maximum, minimum, and average exchange rates over the selected time duration. How Does the Unit Price Calculator Work? The Unit Price Calculator works by solving the following equation: \[ \textsf{price per unit weight in HUF} = \frac{\textsf{price in USD}}{\textsf{weight in grams}} \times \textsf{exchange rate} \tag*{(1)} \] \[ \textsf{price per unit weight in USD} = \frac{\textsf{price in USD}}{\textsf{weight in grams}} \] HUF and USD are currencies – Hungarian Forint and United States Dollar. The calculator inputs are the price in USD and the product’s weight in grams, while the price in HUF is the calculated result. Note that the right-hand side of equation (1) is the price of some product per unit weight. Its units are USD/gram or $/gram. For example, if 1 kilogram of yogurt costs $1.50, then: price in USD = 1.50 USD weight in grams = 1000 grams price per unit weight of yogurt = 1.50 USD / 1000 g = 0.0015 USD/g Exchange Rates and Currency Conversion The calculator finally converts the cost per gram in USD to HUF so that the result’s units are HUF/g. It does this by multiplying the above result with the exchange rate. 
The value of the currency conversion factor is different depending on the currencies involved: \[ \textsf{exchange rate} = \frac{\textsf{equivalent amount in currency to convert to}}{\textsf{amount in currency to convert from}} \] For our case of USD and HUF, as of August 24, 2022, Google Finance states 1 USD = 409.35 HUF. Therefore, for our example: exchange rate = 409.35 HUF / 1 USD = 409.35 HUF/USD The above value is known as the exchange rate. It changes (or stays the same) roughly every hour of the day. Continuing our example, the cost of 1 gram of yogurt in HUF would be: price per unit weight of yogurt = 0.0015 USD/g x 409.35 HUF/USD = 0.614025 HUF/g Therefore, the cost of 1 kilogram of yogurt in HUF equals: price of 1 kg yogurt in HUF = 0.614025 x 1000 = 614.025 HUF All the additional currency conversions (local and foreign) for the USD price per unit weight use the same principle. It also forms the basis for the Exchange History graph, which tracks the exchange rates of USD to HUF over a much longer duration. Exchange Rate Source The calculator sources exchange rates from Wolfram|Alpha Knowledgebase – Wolfram Language financial data. As this database updates every few days (you can view the latest update date on the calculator results), the results may be off by a few days. What Are Unit Prices? Unit prices are the prices concerning one unit of anything. Consider the question, “how much does a specific amount of something cost?” This “amount” is fixed to 1 of some unit or measurement for unit prices. When we talk about weights, unit weights look like 1 kg, 1 ounce, 1 pound, etc. Then the cost is always for 1 unit of that product, which could be the weight, volume, etc. Unit pricing is helpful for a variety of reasons. It provides sellers a convenient method of calculating costs and sales based on the amount of product sold. For consumers, it helps make economic decisions when buying in bulk or comparing prices between brands.
For example, if 3 kilograms of yogurt (bulk) cost 4.0 USD, but 1 kilogram of yogurt costs 1.5 USD, the unit price of the former scenario is smaller (4/3 = 1.33 < 1.5/1 = 1.5). However, the cost per weight is not the only deciding factor since food goes bad. Therefore, it might be better to go with 3 kilograms (bulk) when buying for a large gathering as it will save money. Units Other Than Grams Note that in the calculator, despite the second text box being labeled g, you may use any other unit for your calculation, such as ounce, kilogram, etc. We saw with the yogurt example that the results were small units, so it might be more convenient to use price per kilogram instead. Suppose we wanted 5 kg of yogurt, which cost us $7.5. To use price per kilogram instead of a gram, enter “7.5” as the cost and “5” as the weight. The result for this input is 614.03 HUF, the price per kilogram! Solved Examples Example 1 Find the unit cost of a gram of gold in HUF, if 20 grams of gold cost $1117.6. How much would 25 grams of gold cost in HUF? The unit price in USD is given by: unit price of gold in USD = 1117.6 / 20 = 55.88 $/gram Assuming an exchange rate of 409.35, we use equation (1) to get: unit price of gold in HUF = 55.88 x 409.35 = 22,874.478 HUF/gram Then the cost of 25 grams of gold would be: price of 25 g of gold in HUF = 22,874.478 x 25 = 571,861.95 HUF
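The arithmetic worked through above is easy to reproduce in a few lines. The sketch below is illustrative only (it is not the site's actual code, and the function name is invented); the 409.35 HUF/USD rate is the article's example rate from August 24, 2022.

```python
# Illustrative re-implementation of the calculator's arithmetic.
# NOTE: price_per_gram is a hypothetical helper, not the site's code.
RATE_HUF_PER_USD = 409.35  # example rate from the article (Aug 24, 2022)

def price_per_gram(price_usd, weight_g):
    """Return the unit price in (USD/g, HUF/g), per equation (1)."""
    usd_per_g = price_usd / weight_g
    return usd_per_g, usd_per_g * RATE_HUF_PER_USD

# Yogurt example: $1.50 for 1000 g
usd_g, huf_g = price_per_gram(1.50, 1000)
assert abs(usd_g - 0.0015) < 1e-12
assert abs(huf_g - 0.614025) < 1e-9      # ~0.614025 HUF/g

# Gold example: $1117.6 for 20 g, then the cost of 25 g in HUF
usd_g, huf_g = price_per_gram(1117.6, 20)
assert abs(usd_g - 55.88) < 1e-9         # ~55.88 USD/g
assert abs(huf_g * 25 - 571861.95) < 1e-6
```

The same two-step structure (divide to get the unit price, multiply by the exchange rate) covers every currency the calculator reports.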
{"url":"https://www.storyofmathematics.com/math-calculators/unit-price-calculator/","timestamp":"2024-11-10T09:27:36Z","content_type":"text/html","content_length":"157267","record_id":"<urn:uuid:2b88b58e-13fc-454c-9482-37572790c0ed>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00864.warc.gz"}
Summer Assignment I'm in the middle of writing a summer assignment for Honors Pre-Calculus students and I need your help! I haven't taught this course before, and the only Algebra 2 I've taught was a CP class that struggled. The current summer assignment is a random assortment of problems that neither of us who will be teaching the course next year want to assign, let alone grade! So, here is what I have so far, major inspiration taken from 's project: Function Family Album . Feel free to skim or skip right to my questions at the bottom to see what I'm requesting feedback on. Summer Assignment Your mission is to create a resource which summarizes the types of functions you learned about in Algebra II. This assignment will give you an opportunity to showcase your creativity, but will also be an excellent reference for you to use throughout the course; so be sure to include enough information to refresh your memory on all the topics included. You may choose any format you like to present this information (poster, pamphlet, scrapbook or anything else you can think of) so long as it includes all the required aspects. Families of Functions: Linear, Absolute Value, Quadratic, Cubic, Exponential, Radical, Logarithmic, Rational For each family you will provide the following information: • Parent Function graph (ex: f(x)=x^2) • One shift (horizontal or vertical) • One stretch or shrink When you have completed this section you will have 18 graphs all labeled with a title, an equation in function notation and a description. Be sure to have at least one of each type of transformation (horizontal and vertical shifts, stretch and shrink) among all the graphs. Choose one of the following: *Coming soon* (equations for cubic that’s increasing and decreasing, quadratic with x intercepts, radical with x and y intercepts etc.) 
For this function you will provide more detail: • Graph and table of values • Domain and range • Intercepts (x and y) • Intervals of increasing and decreasing • Maxima and minima (local and/or global) • End behavior • Whether or not each of the following points is on your curve *coming soon* (list points where at least one is on each option) • Value of function at x=__ and y=__ *coming soon* • Make up a situation that this graph might describe (feel free to be as silly or serious as you like; if your function looks like someone’s velocity as they fly to the moon write that! Just provide enough detail to match parts of the story to parts of the graph) Does this seem like too much? Do I encourage technology or require them to graph by hand? I obviously need to fill in some functions and other details. Do you have a favorite function that would work well for the last section? Is this enough detail if I provide an example on my course webpage? We will also be available via email and have a few summer sessions at the coffee shop in town. 3 comments: 1. I think this is fabulous. What are you using for logarithmic? I'd use log base 2 of x, but I'd be interested in discussion on that. I'd use f(x)=1/x for the rational. I definitely couldn't ask my (college) pre-calc students to do this before the course. I might use something similar as a midterm assignment. Thanks! 2. That sounds like a useful assignment because then they can glance back throughout the year. What about links to foldable sites or a couple of others to give them inspiration? ... Maybe you can require that at least one is done by hand and at least one is done by technology (if you like). ... Do you want to make it a requirement that it should fit in their notebooks (what if someone makes a poster ... is that okay, what will they do with it throughout the year). Anyway. Love the assignment :) 3. I'll have to ask the Alg2 teachers what log they use the most.
f(x)=1/x is a good choice, I should probably provide the parent functions to eliminate confusion. Shireen- love the idea of linking to foldables. Making something that fits into their notebook is definitely a good idea. I will ask the other teacher but I think requiring at least one to be done by hand (maybe the last, in depth one?) and the others by their preference sounds like a plan. Thanks for your input!
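For anyone building the transformation section of the assignment, the shift and stretch behavior is easy to sanity-check numerically. A minimal sketch with the quadratic parent function (the function names here are just illustrative):

```python
# Checking transformations of the parent f(x) = x^2 numerically.
def f(x):
    """Parent function f(x) = x^2."""
    return x ** 2

def shifted(x):
    """Horizontal shift right 3 and vertical shift up 1: f(x - 3) + 1."""
    return f(x - 3) + 1

def stretched(x):
    """Vertical stretch by a factor of 2: 2 * f(x)."""
    return 2 * f(x)

assert shifted(3) == 1      # vertex moved from (0, 0) to (3, 1)
assert shifted(5) == 5      # f(2) + 1 = 4 + 1
assert stretched(4) == 32   # 2 * 4^2
```

Students could use a check like this (or a graphing utility) to confirm that each labeled graph really matches its equation in function notation.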
{"url":"https://drawingonmath.blogspot.com/2012/06/summer-assignment.html","timestamp":"2024-11-02T14:44:51Z","content_type":"text/html","content_length":"58744","record_id":"<urn:uuid:c9c064fe-1668-478b-a58f-765da37289c6>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00834.warc.gz"}
RISC Activity Database author = {Tudor Jebelean and Gabor Kusper}, title = {{Multi-Domain Logic and its Applications to SAT}}, booktitle = {{SYNASC 2008}}, language = {English}, abstract = {We describe a new formalism and special proof methods for a novel generalization of propositional logic, which is especially suitable for solving the satisfiability problem (SAT). A Multi--Domain Logic (MDL) formula is quantifier--free and contains only atoms of the form $x \in A$, where $x$ is a variable and $A$ is a constant set. For formulae in conjunctive normal form, we are interested in finding solutions (assignments to variables which satisfy all clauses). Classical propositional logic corresponds to the special case when each set is either $\{\mathit{true}\}$ or $\{\mathit{false}\}$. The union of all the sets occurring for a certain variable can be seen as "the domain" of that variable, thus MDL is also a generalization of multi-valued logic, but with different domains for variables. The most distinctive feature is, however, the indication of the sub-domain in each clause. The notions of resolution, subsumption, as well as the basic steps of the DPLL method generalize in an elegant and straightforward way. As a novel MDL specific technique, {\em variable clustering} consists in creating a new variable ranging over the cartesian product of the domains of several "clustered" variables. This allows the transformation of classical SAT problems in MDL problems with less literals, and in which the propagation of information can be performed more efficiently than classical unit propagation. The basic idea of MDL originates from the earlier work of the second author on "hyper-unit" propagation (that is simultaneous propagation of several unit clauses) and on the representation and propagation of "k-literals" (generalized literals containing information on several propositional variables).
Preliminary experiments with a prototype Java implementation exhibit speed--ups of up to 30 times.}, pages = {3--8}, publisher = {IEEE Society Press}, isbn_issn = {978-0-7695-3523-4}, year = {2008}, editor = {V. Negru et. al.}, refereed = {yes}, keywords = {Multi-Domain Logic, SAT, signed logic}, length = {6}, conferencename = {International Symposium on Symbolic and Numeric Scientific Computation}
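The MDL atom $x \in A$ described in the abstract is simple to illustrate. The sketch below is a toy rendering of the idea only (the data layout and function name are invented, not the authors' implementation): a clause is a disjunction of atoms, satisfied when the assignment places some variable in its allowed set.

```python
# Toy illustration of Multi-Domain Logic atoms x ∈ A (invented helper,
# not the paper's implementation). A clause is a list of
# (variable, allowed-set) atoms; it is satisfied when the assignment
# puts at least one variable inside its set.
def clause_satisfied(clause, assignment):
    return any(assignment[var] in allowed for var, allowed in clause)

# Classical propositional logic is the special case where every set is
# {True} or {False}; e.g. the clause (x ∨ ¬y):
clause = [("x", {True}), ("y", {False})]
assert clause_satisfied(clause, {"x": False, "y": False})       # ¬y holds
assert not clause_satisfied(clause, {"x": False, "y": True})    # neither holds

# Variables may also range over richer domains:
mdl_clause = [("color", {"red", "green"})]
assert clause_satisfied(mdl_clause, {"color": "green"})
```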
{"url":"https://www3.risc.jku.at/publications/show-bib.php?activity_id=3949","timestamp":"2024-11-06T17:12:20Z","content_type":"text/html","content_length":"4997","record_id":"<urn:uuid:afd54ca0-d205-407d-b4b6-3e1fb70ba7c5>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00028.warc.gz"}
19.625/11304 Decoded: Why This Minuscule Figure Holds Significant Value - Creative Released Navigating the world of numbers, particularly when dealing with decimals and fractions, can often be perplexing. Take, for example, the fraction “19.625/11304.” This article will delve into what this fraction represents, its practical implications, and how you can leverage this understanding in real-world situations. By breaking down the expression, we’ll uncover its meaning and relevance. This exploration will not only clarify the fraction’s value but also illustrate its applications in various contexts. Understanding such numerical details enhances your ability to apply mathematical concepts effectively, whether in academic, professional, or everyday scenarios. Breaking Down the Expression “19.625/11304” To grasp the meaning and significance of the fraction “19.625/11304,” we need to first understand the components involved. Deciphering 19.625 The number 19.625 combines a whole number and a decimal fraction. The whole number part is 19, while the decimal .625 represents the fraction part. To better comprehend this, let’s convert 19.625 into a fraction: • Whole Number: 19 can be expressed as 19/1. • Decimal Conversion: To convert .625 into a fraction: • Write .625 as 625/1000. • Simplify this by dividing both the numerator and the denominator by their greatest common divisor (125), which reduces it to 5/8. Therefore, 19.625 as a fraction is 19 + 5/8, or, written as a single fraction, (\frac{157}{8}). Examining 11304 The number 11304 is an integer, meaning it does not have a fractional component. Combining the Numbers When we divide 19.625 by 11304, we can simplify the expression by first converting 19.625 into the fraction (\frac{157}{8}).
This allows us to rewrite the expression as a product: to simplify, multiply (\frac{157}{8}) by the reciprocal of 11304, which is (\frac{1}{11304}): \frac{157 \times 1}{8 \times 11304} = \frac{157}{90432} Thus, the expression “19.625/11304” simplifies to (\frac{157}{90432}). Simplifying and Converting “19.625/11304” Final Simplification: The fraction (\frac{157}{90432}) can be reduced further. Although 157 is a prime number, it divides the denominator: 11304 = 157 \times 72, so \frac{157}{90432} = \frac{1}{8 \times 72} = \frac{1}{576}, which is as concise as possible. Converting to Decimal: For those who prefer working with decimals, you can convert the fraction (19.625/11304) into a decimal by performing the division 1 ÷ 576. This results in approximately 0.001736. Therefore, the expression “19.625/11304” is roughly equal to 0.001736 when expressed in decimal form. Percentage Conversion: To express this decimal as a percentage, simply multiply it by 100. This calculation yields approximately 0.1736%. Thus, the fraction “19.625/11304” can also be represented as about 0.1736% when converted into percentage form. The Importance of Small Numbers in Various Fields While dividing small numbers like 0.001736 might initially seem like a trivial task, these calculations have significant applications across multiple fields. In finance, for instance, even the smallest decimals can have a substantial impact on calculations related to interest rates or stock prices. Precision in these figures is crucial for accurate financial analysis and decision-making. In the realm of science, particularly in fields such as physics and chemistry, exact measurements are essential. Scientific experiments often involve numbers with many decimal places, and precise calculations are necessary to achieve reliable results. Statistics also heavily relies on small numbers. Data analysis frequently involves calculations with seemingly insignificant figures that can influence major decisions.
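Running the numbers with Python's exact-rational type is a quick way to verify the simplification, and it shows the fraction reduces all the way to 1/576 (because 90432 = 157 × 576, i.e. 11304 = 157 × 72):

```python
# Verifying the simplification with the stdlib's exact rationals.
from fractions import Fraction

f = Fraction("19.625") / 11304        # Fraction("19.625") == 157/8
assert f == Fraction(1, 576)          # 157 cancels: 90432 = 157 * 576
assert f.numerator == 1 and f.denominator == 576

decimal = float(f)
assert round(decimal, 6) == 0.001736        # decimal form, ~0.001736
assert round(decimal * 100, 4) == 0.1736    # percentage form, ~0.1736 %
```

`Fraction` always stores values in lowest terms, so it catches reductions (like the shared factor 157 here) that are easy to miss by hand.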
In fields like engineering and data science, even minute differences can be critical, potentially affecting the outcome of projects or research. Overall, while a result like 0.001736 may seem minor, it is indicative of how small numerical differences can be pivotal in various domains. Understanding and accurately handling these small figures is crucial for precise work and successful outcomes in many professional and academic settings. Practical Applications of Small Numbers To recap the arithmetic: the fraction reduces to (\frac{1}{576}), which is approximately 0.001736 in decimal form, or about 0.1736% as a percentage. Understanding Its Significance: Although the result of 0.001736 may seem negligible, small decimal values play crucial roles in various fields. In finance, tiny decimals are essential for calculating interest rates or stock prices. In science, precise measurements often involve numbers with many decimal places, crucial for accuracy in fields like physics or chemistry. Similarly, in statistics, seemingly minor numbers can significantly impact data analysis and decision-making. Even small differences can be critical in engineering and other technical fields, highlighting the importance of precise calculations. Making Sense of Small Numbers A Fun Metaphor Let’s imagine you have a very small pie that you need to divide into 11,304 pieces.
Each piece would be so tiny it’s more like a crumb—approximately 0.001736 of the entire pie. While each slice is incredibly small, it is still a measurable and meaningful portion of the whole. This metaphor helps illustrate how even minuscule quantities, like 0.001736, can have significance when considered in context. To give a more practical perspective, consider this fraction as a percentage. The decimal 0.001736 translates to approximately 0.1736%. In everyday terms, this is akin to receiving a fraction of a cent from a dollar. Though the amount seems small, it is important in financial calculations and other applications where precision is critical. Common Mistakes in Calculations When working with small numbers and decimal divisions, errors can easily arise. One common mistake is misplacing the decimal point, which can significantly alter the result. For example, if you accidentally move the decimal point in 0.001736, you might end up with 17.36—an error that could lead to drastically incorrect outcomes. Such miscalculations can have serious consequences in fields like finance, science, and mathematics, where accuracy is essential. In financial contexts, for instance, a small miscalculation in decimal points can affect interest calculations or investment returns. In scientific research, precise measurements are critical for accurate results, and even minor errors can skew data significantly. Therefore, it’s crucial to double-check calculations and ensure the decimal point is placed correctly to avoid these potentially costly mistakes. Everyday Significance of Small Numbers Despite their seemingly minor nature, small numbers like 0.001736 play a crucial role in various aspects of daily life. In finance, small differences in interest rates can lead to substantial variations in the amount paid over time. For instance, even a tiny shift in interest rates can impact mortgage payments or savings account yields.
In currency conversions, precision with small decimal values is essential for accurate financial transactions, especially when traveling internationally. Exchange rates are often quoted with several decimal places, and slight discrepancies can affect the amount of money you receive or spend. To better understand and compare small numbers, converting them into percentages can be quite helpful. For example, converting 0.001736 to a percentage results in 0.1736%. This conversion makes it easier to grasp the significance of such small figures and apply them in practical situations. By translating small decimals into percentages, you can more easily assess their impact and relevance in real-world scenarios. Simplifying Complex Fractions Mastering Fraction Simplification: The techniques used to simplify the fraction (\frac{19.625}{11304}) are applicable to various complex fractions. Learning how to convert decimals into fractions and simplify them is crucial not only for academic success but also for practical tasks in everyday life. Understanding these methods enables you to handle complex calculations with confidence and accuracy. Practical Uses: Complex fractions have numerous practical applications. For example, in recipe adjustments, if a recipe calls for 19.625 cups of flour but you need only a fraction of this amount, like one 11,304th, using the fraction (\frac{19.625}{11304}) allows you to make precise measurements. In engineering, where precision is paramount, such fractions are vital. If a material needs to be divided into 11,304 parts, and each part requires 19.625 units, the fraction helps ensure the accurate distribution of materials. Leveraging Digital Tools: For those who prefer using digital solutions, online calculators can simplify these conversions efficiently. However, having a solid grasp of the underlying principles ensures that you can verify the accuracy of your results and apply them effectively.
Understanding the fundamentals of fraction simplification empowers you to use technology more effectively and make informed decisions based on precise calculations. Utilizing Calculators and Software for Accurate Results Embracing Technology: Let’s face it—most people prefer not to manually calculate fractions like (\frac{19.625}{11304}). Fortunately, numerous tools are available to streamline this process. Calculators, whether those on your smartphone or dedicated devices, are indispensable for quickly obtaining accurate results. Additionally, online tools such as Google’s calculator simplify handling complex divisions, making calculations more accessible and efficient. Balancing Technology and Basic Math Skills: While technology offers convenience, it remains important to understand basic mathematical concepts. Knowing how division works helps you identify potential errors, whether you’re managing a budget or tackling a science project. Final Words The fraction 19.625/11304 represents a small but significant numerical value that can be simplified and understood in various contexts. Breaking it down, 19.625 is converted to the fraction 157/8, and when divided by 11304, the result is 157/90432, which reduces further to 1/576. This equals approximately 0.001736 in decimal form or about 0.1736% as a percentage. While such a small number may seem trivial, it holds practical importance in fields like finance, science, and engineering, where precision is crucial. Understanding and handling these small values helps ensure accuracy in calculations, enabling effective application in real-world situations. 1. What does the fraction 19.625/11304 represent? The fraction 19.625/11304 represents a division of the decimal 19.625 by the integer 11304.
When simplified, it results in a much smaller number, which can be expressed as a fraction, decimal, or percentage. 2. How do you simplify the fraction 19.625/11304? To simplify 19.625/11304, first convert 19.625 into the fraction 157/8. Then divide 157/8 by 11304 to get 157/90432; since 11304 = 157 × 72, the factor 157 cancels, giving the fully simplified fraction 1/576. 3. What is the decimal form of 19.625/11304? When simplified and divided, the fraction 19.625/11304 equals approximately 0.001736 in decimal form. 4. How can I convert 19.625/11304 into a percentage? To convert 19.625/11304 into a percentage, multiply the decimal form (0.001736) by 100. This gives approximately 0.1736%. 5. What is the practical significance of small numbers like 19.625/11304? Small numbers like 0.001736 are essential in fields like finance, science, and statistics, where precision in calculations, interest rates, or measurements is crucial for accurate results. 6. How can I avoid mistakes when calculating small fractions like 19.625/11304? Ensure accuracy by correctly placing the decimal point and using reliable tools like calculators or software to verify your results. 7. Are there real-world examples where small fractions like 19.625/11304 are used? Yes, small fractions are commonly used in finance for interest rate calculations, in science for precise measurements, and in engineering for dividing materials accurately. 8. How can I calculate 19.625/11304 using a calculator? You can easily calculate 19.625/11304 by inputting the values into any standard calculator or using online tools that allow you to perform decimal or fractional divisions. 9. Can 19.625/11304 be simplified further? Yes: beyond 157/90432, since 157 divides 90432 (90432 = 157 × 576), it reduces to 1/576, which cannot be reduced further. 10. Why is understanding fractions like 19.625/11304 important? Understanding and simplifying fractions like 19.625/11304 is important for achieving precise calculations, which are crucial in areas such as finance, scientific research, and technical fields.
{"url":"https://creativereleased.com/19-625-11304/","timestamp":"2024-11-13T14:30:45Z","content_type":"text/html","content_length":"239888","record_id":"<urn:uuid:896bdd6c-ce05-40ad-8d5d-70c2346cd12d>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00553.warc.gz"}
Mathematics - Singapore DSA Information Master If you are a student who loves mathematics and wants to pursue it at a higher level, you might be interested in applying for the… DSA-Sec in Mathematics is a scheme that allows P6 students who have exceptional mathematical abilities and achievements to apply for admission. dsamaster, 16 October 2023
{"url":"https://dsamaster.com/category/dsa/mathematics/","timestamp":"2024-11-06T06:17:55Z","content_type":"text/html","content_length":"42191","record_id":"<urn:uuid:d54cc293-0090-40ec-ad5f-2d066bb35571>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00119.warc.gz"}
Tactile Graphics Language - DiGrande.it This manual describes in detail the Biblos Tactile Graphics Language, a true high-level programming language for constructing and drawing vector graphics. It is mainly used to print tactile graphics, but it is also useful wherever vector graphics are needed. The manual introduces all the basic programming concepts in general, and describes in detail the syntax and semantics of this new language. For those who approach the basics of programming for the first time, the logic and jargon used may not be entirely clear. However, the manual is published in a completely dynamic form, so it is constantly subject to changes and additions, in order to make it accessible to all. This manual, as well as the Tactile Graphics Language itself, is written, conceived and developed by Giuseppe Di Grande, who holds all rights to its copying and publication, in any form and by any means.
{"url":"http://www.digrande.it/en/Books/TactileLanguage","timestamp":"2024-11-12T15:43:04Z","content_type":"text/html","content_length":"23383","record_id":"<urn:uuid:72ee1bf4-54b9-48c5-987c-2ed260517203>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00361.warc.gz"}
Systems of Equations Here is a graphic preview for all of the Systems of Equations Worksheets. You can select different variables to customize these Systems of Equations Worksheets for your needs. The Systems of Equations Worksheets are randomly created and will never repeat so you have an endless supply of quality Systems of Equations Worksheets to use in the classroom or at home. We have graphing quadratic functions, graphing quadratic inequalities, completing the square. We also have several solving quadratic equations by taking the square roots, factoring, with the quadratic formula, and by completing the square. Our Systems of Equations Worksheets are free to download, easy to use, and very flexible. These Systems of Equations Worksheets are a good resource for students in the 5th Grade through the 8th Grade. Click here for a Detailed Description of all the Systems of Equations Worksheets.
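As an illustration of one of the techniques these worksheets practice, here is a minimal sketch of solving a quadratic equation with the quadratic formula (purely illustrative; it is not tied to the site's worksheet generator, and the function name is invented):

```python
# Solving a*x^2 + b*x + c = 0 with the quadratic formula (real roots only).
import math

def solve_quadratic(a, b, c):
    """Return the real roots in ascending order (empty list if none)."""
    disc = b * b - 4 * a * c          # the discriminant
    if disc < 0:
        return []                     # complex roots; none on the real line
    r = math.sqrt(disc)
    return sorted([(-b - r) / (2 * a), (-b + r) / (2 * a)])

assert solve_quadratic(1, -3, 2) == [1.0, 2.0]   # x^2 - 3x + 2 = (x-1)(x-2)
assert solve_quadratic(2, -7, 3) == [0.5, 3.0]   # 2x^2 - 7x + 3
assert solve_quadratic(1, 2, 5) == []            # discriminant < 0
```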
{"url":"https://www.math-aids.com/Algebra/Algebra_1/Systems/","timestamp":"2024-11-08T01:54:51Z","content_type":"text/html","content_length":"29345","record_id":"<urn:uuid:318e9158-01c6-4f1a-9735-f8880994d064>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00722.warc.gz"}
Algebra and Trigonometry Enhanced with Graphing Utilities Algebra and Trigonometry Enhanced with Graphing Utilities 8th Edition Sullivan – Solution Manual Solution Manual for Algebra and Trigonometry Enhanced with Graphing Utilities, 8th Edition, Michael Sullivan, ISBN: 9780136872795, ISBN: 9780135812846 Table of Contents R. Review R.1 Real Numbers R.2 Algebra Essentials R.3 Geometry Essentials R.4 Polynomials R.5 Factoring Polynomials R.6 Synthetic Division R.7 Rational Expressions R.8 nth Roots; Rational Exponents 1. Graphs, Equations, and Inequalities 1.1 Graphing Utilities; Introduction to Graphing Equations 1.2 Solving Equations Using a Graphing Utility; Linear and Rational Equations 1.3 Quadratic Equations 1.4 Complex Numbers; Quadratic Equations in the Complex Number System 1.5 Radical Equations; Equations Quadratic in Form; Absolute Value Equations; Factorable Equations 1.6 Problem Solving: Interest, Mixture, Uniform Motion, Constant Rate Job Applications 1.7 Solving Inequalities 2. Graphs 2.1 The Distance and Midpoint Formulas 2.2 Intercepts: Symmetry; Graphing Key Equations 2.3 Lines 2.4 Circles 2.5 Variation 3. Functions and Their Graphs 3.1 Functions 3.2 The Graph of a Function 3.3 Properties of Functions 3.4 Library of Functions; Piecewise-defined Functions 3.5 Graphing Techniques: Transformations 3.6 Mathematical Models: Building Functions 4. Linear and Quadratic Functions 4.1 Properties of Linear Functions and Linear Models 4.2 Building Linear Models from Data 4.3 Quadratic Functions and Their Properties 4.4 Build Quadratic Models from Verbal Descriptions and from Data 4.5 Inequalities Involving Quadratic Functions 5. Polynomial and Rational Functions 5.1 Polynomial Functions 5.2 The Graph of a Polynomial Function; Models 5.3 The Real Zeroes of a Polynomial Function 5.4 Complex Zeroes: Fundamental Theorem of Algebra 5.5 Properties of Rational Functions 5.6 The Graph of a Rational Function 5.7 Polynomial and Rational Inequalities 6. 
Exponential and Logarithmic Functions 6.1 Composite Functions 6.2 One-to-One Functions; Inverse Functions 6.3 Exponential Functions 6.4 Logarithmic Functions 6.5 Properties of Logarithms 6.6 Logarithmic and Exponential Equations 6.7 Financial Models 6.8 Exponential Growth and Decay Models; Newton’s Law; Logistic Growth and Decay Models 6.9 Building Exponential, Logarithmic, and Logistic Models from Data 7. Trigonometric Functions 7.1 Angles and Their Measure 7.2 Right Triangle Trigonometry 7.3 Computing the Values of Trigonometric Functions of Acute Angles 7.4 Trigonometric Functions of Any Angle 7.5 Unit Circle Approach; Properties of the Trigonometric Functions 7.6 Graphs of the Sine and Cosine Functions 7.7 Graphs of the Tangent, Cotangent, Cosecant, and Secant Functions 7.8 Phase Shift; Sinusoidal Curve Fitting 8. Analytic Trigonometry 8.1 The Inverse Sine, Cosine, and Tangent Functions 8.2 The Inverse Trigonometric Functions (Continued) 8.3 Trigonometric Equations 8.4 Trigonometric Identities 8.5 Sum and Difference Formulas 8.6 Double-angle and Half-angle Formulas 8.7 Product-to-Sum and Sum-to-Product Formulas 9. Applications of Trigonometric Functions 9.1 Applications Involving Right Triangles 9.2 The Law of Sines 9.3 The Law of Cosines 9.4 Area of a Triangle 9.5 Simple Harmonic Motion; Damped Motion; Combining Waves 10. Polar Coordinates; Vectors 10.1 Polar Coordinates 10.2 Polar Equations and Graphs 10.3 The Complex Plane; De Moivre’s Theorem 10.4 Vectors 10.5 The Dot Product 11. Analytic Geometry 11.1 Conics 11.2 The Parabola 11.3 The Ellipse 11.4 The Hyperbola 11.5 Rotation of Axes; General Form of a Conic 11.6 Polar Equations of Conics 11.7 Plane Curves and Parametric Equations 12. 
Systems of Equations and Inequalities 12.1 Systems of Linear Equations: Substitution and Elimination 12.2 Systems of Linear Equations: Matrices 12.3 Systems of Linear Equations: Determinants 12.4 Matrix Algebra 12.5 Partial Fraction Decomposition 12.6 Systems of Nonlinear Equations 12.7 Systems of Inequalities 12.8 Linear Programming 13. Sequences; Induction; the Binomial Theorem 13.1 Sequences 13.2 Arithmetic Sequences 13.3 Geometric Sequences; Geometric Series 13.4 Mathematical Induction 13.5 The Binomial Theorem 14. Counting and Probability 14.1 Counting 14.2 Permutations and Combinations 14.3 Probability
{"url":"https://www.testbankmail.com/product/algebra-and-trigonometry-enhanced-with-graphing-utilities-8th-edition-sullivan-solution-manual/","timestamp":"2024-11-07T06:26:55Z","content_type":"text/html","content_length":"94779","record_id":"<urn:uuid:c076e4e6-a45b-4764-a19f-7f263d8611a8>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00349.warc.gz"}
Research Interests Design and analysis of computer experiments • H. Dette, A. Pepelyshev (2010) Generalized Latin hypercube design for computer experiments. Technometrics, in press. Space filling designs, which satisfy a uniformity property, are widely used in computer experiments. In the present paper, the performance of non-uniform experimental designs, which locate more points in a neighborhood of the boundary of the design space, is investigated. These designs are obtained by a quantile transformation of the one-dimensional projections of commonly used space filling designs. This transformation is motivated by logarithmic potential theory, which yields the arc-sine measure as an equilibrium distribution. The methodology is illustrated for maximin Latin hypercube designs by several examples. In particular, it is demonstrated that the new designs yield a smaller integrated mean squared error for prediction. • H. Dette, A. Pepelyshev (2010) NPUA: A new approach for the analysis of computer experiments. Chemometrics and Intelligent Laboratory Systems 104, 333–340. An important problem in the analysis of computer experiments is the specification of the uncertainty of the prediction according to a meta-model. The Bayesian approach, developed for the uncertainty analysis of deterministic computer models, expresses uncertainty by the use of a Gaussian process. There are several versions of the Bayesian approach, which are different in many regards but all of them lead to time consuming computations for large data sets. In the present paper we introduce a new approach in which the distribution of uncertainty is obtained in a general nonparametric form. The proposed approach is called non-parametric uncertainty analysis (NPUA), which is computationally simple since it combines generic sampling and regression techniques.
We compare NPUA with the Bayesian and Kriging approaches and show the advantages of NPUA for finding points for the next runs by reanalyzing the ASET model.

• A. Pepelyshev (2010) The role of the nugget term in the Gaussian process method. Accepted, MODA 9 — Advances in Model-Oriented Design and Analysis, Contrib. Statist.
The maximum likelihood estimate of the correlation parameter of a Gaussian process with and without a nugget term is studied in the case of the analysis of deterministic models.

• A. Pepelyshev (2009) Improvement of random LHD for high dimensions. Proceedings of the 6th St. Petersburg Workshop on Simulation, 1091-1096.
Designs of experiments for the multivariate case are reviewed. A fast algorithm for the construction of good Latin hypercube designs is developed.

• A. Pepelyshev (2010) Fixed-domain asymptotics of the maximum likelihood estimator and the Gaussian process approach for deterministic models. In press.
The fixed-domain asymptotics of the maximum likelihood estimator is studied in the framework of the Gaussian process approach for data collected as precise observations of a deterministic computer model given by an analytic function. It is shown that the maximum likelihood estimator of the correlation parameter of a Gaussian process does not converge to a finite value and that the computational stability strongly depends on the type of the correlation function. In particular, computations are the most unstable for the Gaussian correlation function, which is typically used in the analysis of computer experiments, and significantly less unstable for the stable correlation function rho(t) = e^{-|t|^gamma}, even for gamma = 1.9, which is close to 2.
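The boundary-weighted designs described in the first abstract can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: it uses a plain random Latin hypercube rather than a maximin one, and applies the arcsine quantile function F⁻¹(u) = sin²(πu/2) to each one-dimensional projection, which pushes points toward the boundary of [0, 1]^d.

```python
import numpy as np

def random_lhd(n, d, rng):
    """Random Latin hypercube design: n points in [0, 1]^d,
    one point per axis-aligned stratum in every coordinate."""
    design = (np.arange(n)[:, None] + rng.random((n, d))) / n
    for j in range(d):
        rng.shuffle(design[:, j])  # decouple the columns
    return design

def arcsine_transform(design):
    """Push each 1-D projection through the arcsine quantile function,
    F^{-1}(u) = sin^2(pi * u / 2), concentrating points near the boundary."""
    return np.sin(np.pi * design / 2.0) ** 2

rng = np.random.default_rng(42)
X = random_lhd(200, 2, rng)
Y = arcsine_transform(X)
```

Because the transform is monotone, each column of the transformed design keeps the Latin-hypercube rank structure of the original; only the one-dimensional marginal distribution changes from uniform to arcsine.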
{"url":"https://ssa.cf.ac.uk/pepelyshev/r_computer_experiments.html","timestamp":"2024-11-03T21:22:32Z","content_type":"application/xhtml+xml","content_length":"9641","record_id":"<urn:uuid:4bc5c56b-0140-4bbf-8149-9f886d96a620>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00891.warc.gz"}
Solving Intuition About Slot Machines with Bayesian Inference | Wire19

How do you win at slot machines? From one point of view, it is random whether you win or lose on each pull of the machine's arm. On the other hand, if you understood exactly how these machines work, you could design a strategy to maximize your chance of winning. Unfortunately, designing such a strategy is difficult because it involves making very complicated decisions about when to pull the arm of which machine.

In this tutorial, I would like to show you how this methodology can help us answer some of these questions. For the purpose of illustration, we have picked a casino example: assume a mathematician walks onto a casino floor. He has no prior information about the math implemented by the game developers in the slot machines, so he has no choice but to assume that the winning probability of each slot machine is equal, i.e. 50%; he therefore assumes as a prior belief that the winning probability θ of each slot machine follows a Beta distribution with hyperparameters α=β=2. Now, given some observed data D, he can estimate the probabilistic model's parameter θ (the winning probability of a slot machine) using MLE (Maximum Likelihood Estimation) or MAP (Maximum A Posteriori) estimation.

In order to compute the MLE or MAP, he asks for some data, and somebody on the casino floor shares a small data set with him:

Slot Machine 1: won 2 times in only 3 games played.
Slot Machine 2: won 101 times in 168 games played.

By intuition, not only he but anybody would think "Slot Machine 2" is the special one, because getting 2 wins out of 3 plays on "Slot Machine 1" could just happen by chance, whereas the "Slot Machine 2" data does not look like chance. Now, how can he prove mathematically whether the intuition is correct? He has access to the frequentist (MLE) approach and the Bayesian point-estimator (MAP) approach. Being a mathematician, he knows that MLE will not work well because of the very small size of the observed data.
So he tries MAP, because he has some data and also the prior belief that the winning probability of each slot machine follows a Beta distribution with hyperparameters α=β=2. Assume that the results (k wins out of n plays) follow a binomial distribution with the slot machine's winning probability θ as its parameter. The MAP estimate is then the mode of the posterior, (k+α−1)/(n+α+β−2), and the results are:

Slot Machine 1: (2+2−1)/(3+2+2−2) = 3/5 = 60%
Slot Machine 2: (101+2−1)/(168+2+2−2) = 102/170 = 60%

BAM! Contrary to intuition, the winning probabilities θ estimated by MAP for these two slot machines are exactly the same. Hence, the Bayesian MAP estimator also didn't help him determine which one is the special slot machine, which is likely to leave him wondering whether the common human intuition about this is wrong. But really? Isn't it obvious that "Slot Machine 2" is more likely to be the special one?

The mystery remained unsolved until he decided to move beyond MLE and MAP and to calculate the full distribution — which is nothing but Bayesian inference, which returns a probability density (or mass) function. To see whether there really is no difference between Slot Machine 1 and Slot Machine 2, he calculates the full posterior probability distribution, not only the MAP estimate. The posterior probability distribution P(θ|D) is given by Bayes' theorem:

P(θ|D) = P(D|θ) P(θ) / P(D)

In Bayesian inference, we also need to calculate P(D), called the marginal likelihood or evidence, in addition to the likelihood and prior used in MAP. It is the denominator of Bayes' theorem, and it ensures that P(θ|D) integrates to 1 over all possible θ (or sums to 1, if θ is a discrete variable). P(D) is obtained by marginalisation of the joint probability. When θ is a continuous variable, using the product rule P(D, θ) = P(D|θ) P(θ), the formula is:

P(D) = ∫ P(D|θ) P(θ) dθ

Putting this into the original formula, the goal of Bayesian inference is to calculate:

P(θ|D) = P(D|θ) P(θ) / ∫ P(D|θ) P(θ) dθ

How do we calculate the full distribution P(θ|D)? In three parts:
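The MLE and MAP point estimates above can be reproduced in a few lines. This is an illustrative sketch (the function names are mine, not from the post), assuming the binomial likelihood and Beta(2, 2) prior stated in the text; the MAP closed form is the mode of the Beta(k+α, n−k+β) posterior.

```python
def mle(k, n):
    """Maximum likelihood estimate of the winning probability: k / n."""
    return k / n

def map_estimate(k, n, alpha=2, beta=2):
    """MAP estimate: mode of the Beta(k + alpha, n - k + beta) posterior,
    (k + alpha - 1) / (n + alpha + beta - 2)."""
    return (k + alpha - 1) / (n + alpha + beta - 2)

# Both machines get the same MAP estimate, reproducing the tie in the text.
machine1 = map_estimate(2, 3)      # 3/5
machine2 = map_estimate(101, 168)  # 102/170
```

Both calls return 0.6 exactly, which is the surprising tie the post goes on to resolve with the full posterior.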
#1: Likelihood, P(D|θ): It is the probability that data D is observed when parameter θ is given. For Slot Machine 1, D is "2 wins out of 3 plays", and the parameter θ is the winning probability of Slot Machine 1. As we assume that the number of wins follows a binomial distribution, the formula is as below, where n is the number of plays and k is the number of wins:

P(D|θ) = C(n,k) θ^k (1−θ)^(n−k)

#2: The prior probability distribution of θ, P(θ): It is the probability distribution expressing our prior knowledge about θ. Here, a specific probability distribution is used corresponding to the probability distribution of the likelihood P(D|θ); it is called the conjugate prior distribution. Since the conjugate prior of the binomial distribution is the Beta distribution, we use the Beta distribution to express P(θ) here; incidentally, this also matches the prior belief of the mathematician when he got onto the casino floor. The Beta distribution is described as below, where α and β are hyperparameters and B(α,β) is the Beta function:

P(θ) = θ^(α−1) (1−θ)^(β−1) / B(α,β)

Now we have P(D|θ)P(θ) — the numerator of the formula — as below:

P(D|θ) P(θ) = C(n,k) θ^(k+α−1) (1−θ)^(n−k+β−1) / B(α,β)

#3: Evidence, P(D): It is calculated as follows. Note that the possible range of θ is 0 ≤ θ ≤ 1:

P(D) = ∫₀¹ C(n,k) θ^(k+α−1) (1−θ)^(n−k+β−1) / B(α,β) dθ

With the Euler integral of the first kind, the above formula can be simplified to:

P(D) = C(n,k) B(k+α, n−k+β) / B(α,β)

Finally, we can obtain P(θ|D) as given below — a Beta(k+α, n−k+β) distribution:

P(θ|D) = θ^(k+α−1) (1−θ)^(n−k+β−1) / B(k+α, n−k+β)

Expected A Posteriori (EAP): The MAP estimate is the mode of the posterior distribution, but it didn't solve his problem, so he looks for another statistic for point estimation, such as the expected value of θ|D. Estimation using the expected value of θ|D is called Expected A Posteriori. Let's estimate the winning probability of the two slot machines using EAP.
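Before moving on to EAP, the evidence derivation can be sanity-checked numerically. This is a sketch of my own (not from the post), assuming the binomial likelihood and Beta prior above: it computes P(D) from the closed form via the Euler integral (Beta function) and verifies that the resulting posterior density integrates to 1.

```python
import math

def beta_fn(a, b):
    # Euler integral of the first kind, expressed via the Gamma function
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def evidence(k, n, alpha=2, beta=2):
    # P(D) = C(n, k) * B(k + alpha, n - k + beta) / B(alpha, beta)
    return math.comb(n, k) * beta_fn(k + alpha, n - k + beta) / beta_fn(alpha, beta)

def posterior_pdf(theta, k, n, alpha=2, beta=2):
    # Beta(k + alpha, n - k + beta) density
    a, b = k + alpha, n - k + beta
    return theta ** (a - 1) * (1 - theta) ** (b - 1) / beta_fn(a, b)
```

For Slot Machine 1 (k=2, n=3, α=β=2) the evidence works out to C(3,2)·B(4,3)/B(2,2) = 3·(1/60)/(1/6) = 0.3, and a simple midpoint-rule integration of `posterior_pdf` over [0, 1] confirms the normalization.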
From the discussion above, P(θ|D) in this case is:

P(θ|D) = θ^(k+α−1) (1−θ)^(n−k+β−1) / B(k+α, n−k+β)

Thus, the EAP estimate is:

E[θ|D] = ∫₀¹ θ P(θ|D) dθ

With the Euler integral of the first kind and the definition of the Gamma function, the above formula simplifies to:

E[θ|D] = (k+α) / (n+α+β)

Hence, the EAP estimates of the two slot machines' winning probabilities with hyperparameters α=β=2 are:

Slot Machine 1: (2+2)/(3+2+2) = 4/7 = 57.1%
Slot Machine 2: (101+2)/(168+2+2) = 103/172 = 59.9%

Hurray! The intuition is proved correct, as "Slot Machine 2" has a slightly higher estimated winning probability than "Slot Machine 1".

In a similar way, you can solve real-world problems without getting into the source code of the applications, or even without knowing the business rules and hand-crafted logic implemented in them.

This blog is authored by Sukhdev Singh, AVP – Engineering, PureSoftware
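The EAP estimates above, and the kind of full-posterior comparison the post argues for, can be sketched numerically. The Monte Carlo comparison P(θ₂ > θ₁) is my illustrative addition (it is not computed in the post), assuming the Beta(k+α, n−k+β) posteriors derived above.

```python
import numpy as np

def eap(k, n, alpha=2, beta=2):
    # Mean of the Beta(k + alpha, n - k + beta) posterior.
    return (k + alpha) / (n + alpha + beta)

# Closed-form EAP estimates, as in the text.
m1 = eap(2, 3)       # 4/7
m2 = eap(101, 168)   # 103/172

# Going one step further than point estimates: draw from each full posterior
# and estimate the probability that machine 2 is actually the better one.
rng = np.random.default_rng(0)
theta1 = rng.beta(2 + 2, 3 - 2 + 2, size=200_000)        # Beta(4, 3)
theta2 = rng.beta(101 + 2, 168 - 101 + 2, size=200_000)  # Beta(103, 69)
p_machine2_better = float(np.mean(theta2 > theta1))
```

The point here is that the two posteriors have very different widths: Beta(4, 3) is wide (3 plays tell us little), while Beta(103, 69) is narrow, which is exactly the information the MAP tie threw away.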
{"url":"https://wire19.com/solving-intuition-about-slot-machines-with-bayesian%E2%80%AFinference/","timestamp":"2024-11-09T14:21:38Z","content_type":"text/html","content_length":"189520","record_id":"<urn:uuid:a240b9ab-2f7e-43ea-adf3-b0eeed224f3b>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00200.warc.gz"}
Efficient Conversion of Maltose into Sorbitol over Magnetic Catalyst in Extremely Low Acid

Jun Zhang, Jibiao Li, Shubin Wu,* and Ying Liu*

Zhang, J., Li, J., Wu, S., and Liu, Y. (2013). "Efficient conversion of maltose into sorbitol over magnetic catalyst in extremely low acid," BioRes. 8(3), 4676-4686.

Ni/Cu/Al/Fe hydrotalcite precursor was synthesized by a co-precipitation method. The activity of the reduced precursor for one-step conversion of maltose into sorbitol in the presence of H[2] and extremely low phosphoric acid was investigated. XRD and XPS tests provided the essential properties of the precursor and the prepared magnetic catalyst. The effects of various processing parameters on the reaction performance were studied in detail. A desired sorbitol yield of 93.1% was attained at 458 K for 3 h with a catalyst dosage of 20%. A catalyst recycling experiment demonstrated that Ni[4.63]Cu[1]Al[1.82]Fe[0.79] was a better catalyst and could be reused three or four times. The specific reasons for catalyst deactivation were considered in depth.
Keywords: Hydrotalcite precursor; Maltose; Magnetic catalyst; Sorbitol

Contact information: State Key Laboratory of Pulp and Paper Engineering, South China University of Technology, Guangzhou, Guangdong, 510640, China; * Corresponding author: shubinwu@scut.edu.cn, amyliu@scut.edu.cn

The depletion of fossil fuel reserves and their impact on the environment have led to an increasing interest in sustainable development. Recent research on the production and conversion of sorbitol has gained wide attention, since it is known as one of the 12 important target chemicals that the U.S. Department of Energy (US-DOE) selected in their biomass program (Werpy et al. 2004). Sorbitol, the most commonly used sugar alcohol, holds the biggest market share among similar polyols, which are widely used in medicine and in the food and chemical industries. For instance, it is an important precursor for the manufacture of L-ascorbic acid, which consumes almost 15% of the world's sorbitol production (Gallezot et al. 1994). Most importantly, high value-added chemicals such as light paraffins and H[2] can be obtained through aqueous phase reforming of sorbitol (Huber et al. 2004). In recent research, noble metal catalysts such as Ru/C (Crezee et al. 2003; Gallezot et al. 1998) and Pt/C (Yan et al. 2006) have been found to exhibit high yield and selectivity towards sorbitol. For glucose hydrogenation, the yield and selectivity of sorbitol can reach above 90% under the catalysis of Ru-based catalysts (Zhang et al. 2011). However, the selectivity to sorbitol decreased markedly during the hydrolytic hydrogenation of cellulose, and downstream molecules such as ethylene glycol and propylene glycol were produced at the same time (Palkovits et al. 2010). Furthermore, the formation of these lower alcohols was hard to avoid during the hydrogenation process at temperatures of about 473 K.
In previous work, Ru/C catalyst combined with extremely low acid was developed for direct conversion of cellobiose into sorbitol, and an optimized sorbitol yield of 87.1% was attained at 458 K and 3 MPa H[2] (Zhang et al. 2012). On the other hand, the preparation costs for these catalysts are too high and will tend to limit industrial applications of sorbitol production. Given the high cost of cellobiose, the use of maltose as raw material for the production of sorbitol is more promising for biomass utilization. Herein, the cheap Ni/Cu/Al/Fe hydrotalcite was synthesized and used as the precursor for the preparation of the hydrogenation catalyst, which can also be easily separated in magnetic fields. The prepared magnetic catalyst was used for efficient hydrolytic hydrogenation of maltose in extremely dilute H[3]PO[4]. The effects of various processing conditions on the one-step conversion of maltose into sorbitol were studied in detail to find the optimal sorbitol yield.

5 wt.% Ru/C, 5 wt.% Pd/C, 5 wt.% Pt/C, maltose, and sorbitol were purchased from Shanghai Jingchun Reagent. All chemicals used were of analytical grade.

Catalysts Preparation

Ni/Cu/Al/Fe hydrotalcite-like compound (HTlc) was synthesized with use of a co-precipitation method (Velu et al. 1997). An aqueous solution (150 mL) containing appropriate amounts of Cu(NO[3])[2]•6H[2]O, Ni(NO[3])[2]•6H[2]O, Fe(NO[3])[3]•9H[2]O, and Al(NO[3])[3]•9H[2]O with a cation concentration of 1 mol/L was added dropwise with vigorous stirring into 150 mL of NaOH and Na[2]CO[3] solutions ([OH^−] = 2([Cu^2+] + [Ni^2+] + [Al^3+] + [Fe^3+]), [CO[3]^2−] = 0.5([Al^3+] + [Fe^3+])). The addition took about 0.5 h. The pH was kept between 8.5 and 10. The resulting slurry was aged at 333 K for 10 h. The precipitate was filtered, washed thoroughly with distilled water, and then dried at 353 K for 24 h. Before the test, the samples were reduced at 923 K for 3 h in an H[2] flow.
The chemical composition of the hydrotalcite precursor was identified by ICP-AES analysis, and the obtained catalyst was denoted as Ni[4.63]Cu[1]Al[1.82]Fe[0.79].

Catalyst Characterization

Chemical analysis was carried out on a Thermo Elemental ICP-AES spectrometer after dissolution of the solid sample in an HNO[3] solution. X-ray powder diffraction (XRD) was performed in a Bruker D8 Advance X-ray diffractometer using Cu Kα radiation. The operating voltage and current were 40 kV and 40 mA, respectively. The step length was 0.02° with a scanning rate of 2°/min. The crystallite size was calculated from XRD line broadening using the Scherrer equation (Klug and Alexander 1974). X-ray photoelectron spectroscopy (XPS) measurements were made with a Kratos Ultra system employing an Al Kα radiation source. The binding energies for each spectrum were calibrated with a C 1s spectrum of 284.6 eV. Survey spectra for each sample over a binding energy range of 0 to 1100 eV were recorded at a pass energy of 160 eV and a resolution of 1 eV per step. High-resolution spectra of Cu 2p and Ni 2p were recorded at a pass energy of 40 eV and a resolution of 0.1 eV per step, for quantitative measurements of binding energy. The pore size and volume of used catalysts were determined following the standard nitrogen adsorption method using a Micromeritics Tristar 3020 instrument. Specific surface areas were calculated using the BET equation. Prior to analyses, all samples were degassed at 393 K and 10^-4 Torr pressure for 8 h to evacuate the physically adsorbed moisture. The carbon contents of recycled samples were determined using the XRF (X-ray fluorescence spectrometer) technique. The measurement of the acidity of the catalyst was carried out in a Micromeritics AutoChem II 2920 chemisorption analyzer following an NH[3] temperature-programmed desorption (TPD) method.
The sample was heated up to 873 K at the rate of 15 K/min and kept for 30 min in a flow of He gas (20 mL/min) to remove adsorbed species on the surface. Then the sample was cooled down to 373 K in He flow, followed by adsorption of NH[3] in a 10% NH[3] gas flow (balance He, 20 mL/min) for 1 h. After flushing with He for 1 h to remove physically adsorbed NH[3], the TPD data were measured from 373 K to 873 K with a ramp of 15 K/min.

Catalytic Reaction

The conversion of maltose to sorbitol was performed with a high pressure reactor (PARR 5500). For each run, maltose, catalysts, and 0.08 wt.% H[3]PO[4] were put into the reactor; then, the reaction was carried out at several temperatures under 3 MPa H[2] for a certain time with a stirring rate of 600 rpm. After reaction, the reactor was cooled down rapidly with ice water to room temperature; then, the liquid product was collected by filtration and put in a refrigerator set to 278 K. The used catalyst was reduced again before the next use. A brief schematic process is shown in Scheme 1. The liquid products were analyzed by IC (Dionex ICS-3000) with a CarboPac PA20 column. The eluents were NaOH and H[2]O with a flow rate of 0.25 mL/min. The sample loop had a volume of 50 μL. The column temperature was 303 K.

Scheme 1. Schematic process for the conversion of maltose to sorbitol

Catalyst Characterization

XRD test

Phase identification of the prepared samples was performed by XRD, and the results are given in Fig. 1. The XRD pattern of the catalyst precursor (the inset in Fig. 1) exhibited sharp and symmetrical reflections for the (003), (006), and (113) planes and broad and asymmetric reflections for the (012), (015), and (018) planes, characteristic of a well-crystallized HT in carbonate form (Courty et al. 1982; Sate et al. 1988). The peaks at 11.7°, 23.3°, 35.0°, 39.0°, 47.2°, and 61.6° could be assigned to the (003), (006), (012), (015), (018), and (113) diffractions of HTlc, respectively.
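The Scherrer crystallite-size calculation referenced in the characterization section can be sketched as follows. This is an illustrative sketch, not the paper's calculation: the shape factor K ≈ 0.9, the Cu Kα wavelength 0.154 nm, and the example peak width are assumed values; only the peak position (the (111) reflection near 2θ = 44.2°) is taken from the text.

```python
import math

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.154, k_shape=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)).

    fwhm_deg      -- peak full width at half maximum, degrees (2-theta scale)
    two_theta_deg -- peak position, degrees (2-theta)
    Returns D in the same units as the wavelength (here nm).
    """
    beta = math.radians(fwhm_deg)            # line broadening in radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle
    return k_shape * wavelength_nm / (beta * math.cos(theta))

# Illustrative call: a hypothetical 0.5-degree FWHM at the (111) alloy peak.
d_nm = scherrer_size(0.5, 44.2)
```

Note the trend the article relies on: a broader peak (larger FWHM) gives a smaller apparent crystallite size, so the narrowing of peaks across recycling runs corresponds to crystallite growth by sintering.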
No isolated phases of individual Ni, Cu, Al, and Fe hydroxides were observed in the XRD patterns of Ni[4.63]Cu[1]Al[1.82]Fe[0.79]–HTlc, suggesting that Ni and Cu had been homogeneously incorporated into the matrixes of Ni

Fig. 1. Powder XRD patterns of used catalysts: (a) fresh; (b) used once; (c) used twice. (Inset): powder XRD pattern of catalyst precursor

For the reduced sample, peaks corresponding to the (200) planes at 2θ of 51.7° could be clearly observed. These peaks coincided with those in file JCPDS 4-836, indicating the presence of metallic Cu in the face-centered-cubic structure. Besides, peaks corresponding to the (111) planes at 2θ of 44.2° and the (220) planes at 2θ of 75.8° indicated the formation of a Cu-Ni solid solution, which could be assigned to a Ni-rich alloy. These results are in line with those reported by Rao et al. (2004), Wu (2007), and Wu et al. (2007) in relation to the peak positions in the XRD diagram of the Cu-Ni alloy. Two reflections at around 20° could be assigned to the diffraction peaks of carbon, indicating that much carbon was adsorbed on the catalyst during the recycling experiments. It can be seen that the corresponding crystallite sizes of the reduced catalysts obviously increased with increasing recycling time, as given in the table in Fig. 1. That may be caused by the effect of sintering under high temperature, which would lead to the growth of crystal particle size.

XPS analysis

The main reason for conducting XPS analysis was to obtain information regarding the chemical environment present in the prepared catalyst. High-resolution scans of the XPS spectra of Cu 2p and Ni 2p, with different intensity scales as ordinate, are shown in Fig. 2. Four obvious peaks were found in the Cu 2p diagram of the reduced sample, representing various valence states of the Cu element. Two shake-up lines appeared at 940.6 and 960.7 eV, respectively.
The binding energies (BE) of Cu 2p[3/2] and 2p[1/2] were around 932.3 and 953.2 eV, respectively, which were very close to those of Cu^0. Note that the binding energies of metallic Cu are around 932.6 and 953.8 eV (Moretti et al. 1989; Strohmeier et al. 1985). For the Ni element, the obtained binding energies were around 852.8 and 869.9 eV, demonstrating the presence of Ni 2p[3/2] and 2p[1/2], respectively. The binding energy of Ni 2p[3/2] in pure NiO is 854.5 eV, while for the NiAl[2]O[4] spinel it appears at 856 eV (Lenglet et al. 1997). As for the reduced catalyst, two peaks were observed at binding energies of about 853.5 and 871.0 eV, which lay between those of Ni^0 and Ni^2+. Two satellites at binding energies of about 860.0 and 878.0 eV were also observed. It can be seen that the BE of the 2p electrons of Cu^0 were lower than 932.6 and 953.8 eV; meanwhile, those of Ni^0, at 853.5 and 871.0 eV, were higher than 852.8 and 869.9 eV. It was inferred that the stronger electronegativity of metallic Cu led to electrons transferring from Ni to Cu, so the electron-rich metallic Cu showed a red shift of its 2p electrons. The above results further demonstrated the existence of a NiCu alloy, because the interaction between Cu and Ni in the alloy might influence the electronic density in the outer layers of metallic Cu and Ni.

Fig. 2. XPS spectra of Cu 2p and Ni 2p of catalyst

NH[3]-TPD measurement

The NH[3]-TPD profile of the prepared catalyst is shown in Fig. 3. NH[3] molecules can be adsorbed on the acid sites of catalysts and be removed during the heating process. The acid sites of the magnetic catalyst were distributed in two desorption regions, at about 400 K to 600 K and 650 K to 750 K. The source of acidity was attributed to the presence of Al and Fe in the hydrogenation catalyst. The temperatures of the NH[3] desorption peaks and the quantities of total acid sites are summarized in Fig. 3.
The total amount of acid sites was 0.133 mmol/g, and the amounts of both weak and strong sites were relatively low. It was speculated that the hydrolysis reaction was primarily completed under the catalysis of the extremely low acid. The acid sites of the catalyst might have a promoting effect on the hydrogenation reaction, because the weakly basic C=O of glucose could be activated by Brønsted acid.

Fig. 3. NH[3]-TPD profile of magnetic catalyst

Properties of recycled catalysts

The essential characteristics of the recycled catalysts were studied in detail to investigate the specific reasons for catalyst deactivation (see Table 1).

Table 1. Properties of Recycled Catalysts
^a ICP-AES analysis of residual Cu and Ni contents in the corresponding reaction solution.

The BET surface area and pore volume decreased significantly with the recycling of the magnetic catalyst, which might be attributed to the increased amount of adsorbed carbon in the used samples. It was found that 4.83 mg/L Ni and little Cu (< 0.1 mg/L) were present in the reaction solution after the first use of the magnetic catalyst. However, much more Ni was leached into the solution in the successive runs, which would further lower the catalyst activity. It is worth noting that the Cu remained stably present in the magnetic catalyst.

Catalytic Performance for the Conversion of Maltose

The influence of reaction temperature on the yield of sorbitol was investigated over reaction time to find the optimum conditions for increasing the sorbitol yield. It can be observed from Fig. 4 that the reaction temperature played an important role in the reaction process. At the temperature of 458 K, the yield of sorbitol grew significantly with prolonged reaction time. The highest sorbitol yield, 93.3%, was achieved at 458 K after a reaction of 2 h. However, the sorbitol yield obviously decreased with increasing temperature from 458 to 478 K after the same reaction time, especially at 478 K.
It must be pointed out that sorbitol will be decomposed to some extent at higher temperatures in the H[2] atmosphere, and many lower alcohols are formed during the reaction. Therefore, excessive elevation of the temperature is unfavorable for the selectivity of the desired reaction.

Fig. 4. Effect of reaction temperature on the yield of sorbitol with reaction time; 0.25 g maltose, 0.05 g catalyst, 25 mL 0.08 wt.% H[3]PO[4], 3 MPa H[2]

Catalyst loading is an important parameter that needs to be optimized to increase the sorbitol yield. The experiments were conducted at three different catalyst loadings (20, 30, and 40%) as a function of reaction time, and the results are given in Fig. 5. One can observe that the sorbitol yield was higher for higher catalyst loading during the first 1.5 h of the reaction. This effect was substantial: the sorbitol yield increased from 80.8 to 88.5% after a reaction time of 1.5 h when the catalyst loading was increased from 20 to 30%. A possible explanation is that the increased total number of active sites led to a faster reaction rate, promoting the conversion of maltose into sorbitol. With a further increase of the amount of magnetic catalyst to 40%, the sorbitol yield decreased significantly in the initial stage at the reaction temperature of 468 K. It was inferred that much of the produced sorbitol was further converted into lower alcohols (over 15% in total) at such a high temperature in the hydrogen atmosphere. It can also be observed that further increasing the reaction time was unfavorable for the direct conversion of maltose into sorbitol, due to the reaction of deep hydrogenolysis.

Fig. 5. Effect of catalyst dosage on the yield of sorbitol with reaction time; 0.25 g maltose, catalyst (20%–0.05 g, 30%–0.075 g, 40%–0.1 g), 25 mL 0.08 wt.% H[3]PO[4], 468 K, 3 MPa H[2]

The long-term stability of the heterogeneous catalyst is an extremely important characteristic for practical usage to reduce production costs.
After the reaction was finished, the spent magnetic catalyst was separated from the liquid products and was reduced in an H[2] atmosphere again before the next use. As seen from Fig. 6, the yield of sorbitol decreased from 93.1% to around 84.0% over the following two runs, indicating that the catalytic activity of the catalyst had not been completely recovered after direct reduction. Moreover, an obvious decrease in sorbitol yield from 83.5% to 75.5% was observed in the fourth run, which reached about 81.1% of that obtained with the fresh catalyst. This may be due to partial loss of active Ni sites in the catalyst during catalysis. In addition, the recycled samples had larger crystallite sizes due to the effect of sintering (as seen in Fig. 1). The BET surface area and pore volume of the used samples also decreased significantly in the recycling experiments, which might be attributed to the increased amount of adsorbed carbon in the catalysts. Therefore, the reactant might not have made good contact with the catalyst, resulting in an obvious decrease in catalyst activity. The results led to the conclusion that the catalytic activity of the used catalyst could be partly restored over multiple cycles by means of repeated reduction treatment. Accordingly, how to improve the long-term stability of the reduced Cu/Ni/Al/Fe hydrotalcite-like catalyst remains a topic for future exploration.

Fig. 6. Sorbitol yield as a function of the recycling times of the catalyst; 0.25 g maltose, 0.05 g catalyst, 25 mL 0.08 wt.% H[3]PO[4], 458 K, 3 MPa H[2], 2 h

As a comparison, various noble metal catalysts were tested for the one-step conversion of maltose into sorbitol, namely Ru/C, Pt/C, and Pd/C. The obtained results are described in Table 2. Among the tested noble metal catalysts, Ru/C exhibited excellent activity towards sorbitol production, and a sorbitol yield of 94.5% was achieved at a temperature of 458 K.
For the Pt/C and Pd/C catalysts, the corresponding hydrogenation activities were much lower compared to Ru/C, and much glucose was observed in the final products. However, a sorbitol yield of 93.1% was attained under the catalysis of the prepared magnetic catalyst as well. Furthermore, the conversion of maltose was almost 100% when using Ni[4.63]Cu[1]Al[1.82]Fe[0.79] and Ru/C. Thus the cheap magnetic catalyst also has promising application in sorbitol production, as compared with the supported noble metal catalysts that were considered.

Table 2. Conversion of Maltose over Various Hydrogenation Catalysts
Conditions: 0.25 g maltose, 0.05 g catalyst, 458 K, 25 mL 0.08 wt.% H[3]PO[4], 3 MPa H[2], 2 h.

1. The present study describes a feasible and environmentally friendly catalytic process for direct conversion of maltose into sorbitol in the presence of a magnetic catalyst and extremely low phosphoric acid under an H[2] atmosphere.
2. A sorbitol yield of 93.1% was attained at 458 K for 2 h under 3 MPa H[2].
3. XRD and XPS characterization of the reduced catalysts indicated that the Cu^2+ was completely reduced to Cu^0, and little unreduced NiO was found in the reduced samples.
4. A catalyst recycling experiment demonstrated that Ni[4.63]Cu[1]Al[1.82]Fe[0.79] sustained better catalytic activity, in comparison to the Pd/C and Pt/C catalysts, after being reused several times.
5. The BET surface area and pore volume of the used samples decreased significantly, and some of the Ni constituting the active Ni sites could be easily leached into the solutions in the recycling experiments.

This work was supported by the National Key Basic Research Program of China (No. 2013CB228101), the National Natural Science Foundation of China (No. 31270635), and the National High Technology Research and Development Program of China (863 Program, 2012AA101806).

Courty, P., Durand, D., Freund, E., and Sugier, A. (1982). "C[1]-C[6] alcohols from synthesis gas on copper-cobalt catalysts," J. Mol. Catal.
17(2-3), 241-254. Crezee, E., Hoffer, B. W., Berger, R. J., Makkee, M., Kapteijn, F., and Moulijn, J. A. (2003). “Three-phase hydrogenation of D-glucose over a carbon supported ruthenium catalyst-mass transfer and kinetics,” Appl. Catal. A: Gen. 251(1), 1-17. Gallezot, P., Cerino, P. J., Blanc, B., Flèche, G., and Fuertes, P. (1994). “Glucose hydrogenation on promoted raney-nickel catalysts,” J. Catal. 146(1), 93-102. Gallezot, P., Nicolaus, N., Flèche, G., Fuertes, P., and Perrard, A. (1998). “Glucose hydrogenation on ruthenium catalysts in a trickle-bed reactor,” J. Catal. 180(1), 51-55. Huber, G. W., Cortright, R. D., and Dumesic, J. A. (2004). “Renewable alkanes by aqueous-phase reforming of biomass-derived oxygenates,” Angew. Chem. Int. Ed. 43(12), 1549-1551. Klug, H. P., and Alexander, L. E. (1974). X-ray Diffraction Procedures for Polycrystalline and Amorphous Materials, 2^nd Ed., Wiley, New York. Lenglet, M., Hochu, F., Durr, J., and Tuilier, M. H. (1997). “Investigation of the chemical bonding in 3d^8 nickel (II) charge transfer insulators (NiO, oxidic spinels) from ligand-field spectroscopy, Ni 2p XPS and X-ray absorption spectroscopy,” Solid State Commun. 104(12), 793-798. Moretti, G., Fierro, G., Jacono, M. L., and Porta, P. (1989). “Characterization of CuO-ZnO catalysts by X-ray photoelectron spectroscopy: Precursors, calcined and reduced samples,” Surf. Interface Anal. 14(6-7), 325-336. Palkovits, R., Tajvidi, K., Procelewska, J., Rinaldi, R., and Ruppert, A. (2010). “Hydrogenolysis of cellulose combining mineral acids and hydrogenation catalysts,” Green Chem. 12(6), 972-978. Rao, G. R., Mishra, B. G., and Sahu, H. R. (2004). “Synthesis of CuO, Cu and CuNi alloy particles by solution combustion using carbohydrazide and N-tertiarybutoxy-carbonylpiperazine fuels,” Mater. Lett. 58(27-28), 3523-3527. Sate, T., Fujita, H., Endo, T., and Shimada, M. (1988). “Synthesis of hydrotalcite-like compounds and their physico-chemical properties,” React. Solid. 
5(2-3), 219-228. Strohmeier, B. R., Leyden, B. E., Field, R. S., and Hercules, D. M. (1985). “Surface spectroscopic characterization of Cu/Al[2]O[3] catalysts,” J. Catal. 94(2), 514-530. Velu, S., Ramaswamy, V., and Sivasanker, S. (1997). “New hydrotalcite-like anionic clays containing Zr^4+ in the layers,” Chem. Commun. (21), 2107-2108. Werpy, T., Petersen, G., Aden, A., Bozell, J., Holladay, J., Manheim, A., Eliot, D., Lasure, L., and Jones, S. (2004). Top Value Added Chemicals from Biomass, Vol. 1, U.S. Department of Energy, Oak Ridge, TN. Wu, S. P. (2007). “Preparation of ultra fine nickel-copper bimetallic powder for BME-MLCC,” Microelectron. J. 38(1), 41-47. Wu, S. P., Ni, J., Jiao, L., and Zeng, Z. N. (2007). “Preparation of ultra-fine copper-nickel bimetallic powders with hydrothermal-reduction method,” Mater. Chem. Phys. 105(1), 71-75. Yan, N., Zhao, C., Luo, C., Dyson, P. J., Haichao Liu, H. C., and Kou, Y. (2006). “One-step conversion of cellobiose to C[6]-alcohols using a ruthenium nanocluster catalyst,” J. Am. Chem. Soc. 128 (27), 8714-8715. Zhang, J., Lin, L., Zhang, J. H., and Shi, J. B. (2011). “Efficient conversion of D-glucose into D-sorbitol over MCM-41 supported Ru catalyst prepared by a formaldehyde reduction process,” Carbohydr. Res. 346(11), 1327-1332. Zhang, J., Wu, S. B., Li, B., and Zhang, H. D. (2012). “Direct conversion of cellobiose into sorbitol and catalyst deactivation mechanism,” Catal. Commun. 29, 180-184. Article submitted: April 3, 2013; Peer review completed: July 9, 2013; Revised version received and accepted: July 25, 2013; Published: July 29, 2013. 1. ↑
Addition and Subtraction Worksheets
Addition and Subtraction sheets
Would you like to know how good your addition and subtraction skills are? Mycalcu helps you test them through the Addition and Subtraction sheets calculator. This free online calculator is very effective and easy to use, as it does not require any skill to master.
How to use the Mycalcu Addition and Subtraction Sheets Calculator?
1. Enter the 1^st maximum number in the first box; this sets a limit on the largest number used in the calculations.
2. Enter the 2^nd maximum number in the second box; this sets a limit on the largest number used in the calculations.
3. Enter the number of rows.
4. Click on Generate.
Would you like to use the Multiplication and Subtraction Calculator?
If more than one sheet is required, this can be achieved by increasing the number of rows, so that when you print the sheets, the problems are printed on more than one sheet.
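Mycalcu's own generator isn't shown on the page, but the four steps above map onto a few lines of Python. This is a hypothetical sketch (the function name, bounds, and output format are my own, not Mycalcu's actual code):

```python
import random

def make_sheet(max1, max2, rows, seed=None):
    """Generate `rows` addition/subtraction problems whose operands are
    bounded by max1 and max2, mirroring the calculator's two inputs."""
    rng = random.Random(seed)
    problems = []
    for _ in range(rows):
        a = rng.randint(0, max1)
        b = rng.randint(0, max2)
        op = rng.choice(["+", "-"])
        if op == "-" and b > a:
            a, b = b, a  # keep subtraction answers non-negative
        problems.append(f"{a} {op} {b} = ____")
    return problems

for line in make_sheet(20, 10, 4, seed=1):
    print(line)
```

Printing more rows than fit on a page naturally spills the problems onto additional sheets, which matches the tip above about increasing the row count.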
Statistics for Data Science: Visualising Bayes Theorem
Photo by Maxim Abramov on Unsplash
Bayes theorem in statistics provides us with a way of calculating conditional probabilities in the light of new or existing evidence. It has varied and wide-ranging use cases, from understanding the results of a medical test to improving our machine learning models. Though the mathematical formula for Bayes theorem is quite simple and easy to derive using the definition of conditional probability, understanding the intuition behind the theorem is not so trivial. I have been thinking of alternative ways to describe the theorem other than what the definition says, and was inspired to write about this visual derivation using areas of the sample and event spaces after watching this excellent video by Grant Sanderson. Before starting though, let us get the definition out of the way:
Given two different events A and B, where P(B) ≠ 0, Bayes theorem says:
P(A|B) = P(B|A) × P(A) / P(B)
where:
P(A|B) is the conditional probability of event A occurring given B has already occurred,
P(B|A) is the conditional probability of event B occurring given A has already occurred,
P(A) is the probability of event A occurring on its own,
P(B) is the probability of event B occurring on its own.
The mathematical derivation is quite simple and can be found here. In this article, I will try to prove the above theorem visually. This will build upon my previous article on conditional probability, but it is not mandatory to go through it to understand the concepts here if you are already familiar with terms like 'sample space' and 'events'. As in the previous article, I am going to denote the sample space by rectangle U and events by other rectangles or shapes inside U. I am also going to assume that the shapes have been drawn to scale, such that the area of a shape is proportional to the number of outcomes possible in the event represented by that shape, and that Area(U) = P(U) = 1.
Now let us consider the following scenario: A factory produces an item using three machines — A, B, and C — which account for 20%, 30%, and 50% of its output, respectively. Of the items produced by machine A, 5% are defective; similarly, 3% of machine B's items and 1% of machine C's are defective. If a randomly selected item is defective, what is the probability it was produced by machine A?
Let us first draw the visual representation of this scenario:
Image By Author
In the scenario described above, our sample space U consists of all the items produced in the given factory and is represented by the outer rectangular boundary in the figure. The rectangles A, B and C represent the events of items being produced by machines A, B and C, and Da, Db and Dc are the events of items being defective when they are produced by A, B and C respectively. Using our area assumption, Area(U) = 1. I am also assuming that the rectangle U has been drawn such that h = 1. So then we have
wa + wb + wc = 1 (1)
Also, as per our area assumption, the number of items produced by machine A is proportional to the area of rectangle A:
No. of items produced by machine A ∝ Area(A)
Hence, by the definition of probability, the probability that an item has been produced by machine A is
P(A) = No. of items produced by machine A ÷ Total number of items produced
P(A) = Area(A)/Area(U) // cancelling the common proportionality constant
Putting Area(U) = 1,
P(A) = Area(A) = wa × h = wa (2)
Similarly, P(B) = wb and P(C) = wc, and P(Da) = Area(Da), P(Db) = Area(Db), P(Dc) = Area(Dc). If we take the heights of these rectangles to be ha, hb and hc respectively, then P(Da) = wa × ha, and so on.
Now suppose we know that event A has already occurred. The sample space then becomes:
Sample space when we know that the item has been produced by machine A.
Now the probability of D given that A has occurred will be given by
P(D|A) = Area(Da)/Area(A)
P(D|A) = (wa × ha)/(wa × h)
which gives
P(D|A) = ha/h (3)
Now we consider the second part of the scenario, which says that a randomly sampled item from the sample space U is known to be defective. In this case, our sample space shrinks to the shaded grey area in the above figure, consisting of rectangles Da, Db and Dc, as a defective item can only belong to this area. This is how our new sample space looks in the light of the new evidence:
New sample space if it is known that the item is defective. Image by Author
If we now have to find the probability P(A|D), that is, the probability of a given defective item coming from machine A, we can easily see that this can be written in terms of areas as
P(A|D) = Area(Da) / (Area(Da) + Area(Db) + Area(Dc))
If we divide both numerator and denominator by Area(U) (= 1), we get
P(A|D) = [Area(Da)/Area(U)] / [(Area(Da) + Area(Db) + Area(Dc))/Area(U)]
The term Area(Da)/Area(U) is nothing but P(Da), and the denominator term is P(D), so we get
P(A|D) = P(Da)/P(D) (4)
If we suppose that the height of rectangle Da is ha, then
P(Da) = Area(Da) = wa × ha
Dividing and multiplying the term by h, the height of rectangle U (or A),
P(Da) = (wa × ha × h)/h, or P(Da) = wa × (ha/h) // removing one h, as h = 1
Now from equation (2) above we know that wa = P(A), and from equation (3), ha/h = P(D|A). Substituting these into equation (4), we get
P(A|D) = P(D|A) × P(A) / P(D)
which is what the formula of Bayes theorem gives us. Note that even though I have made some assumptions about the sample space and areas, in real-world scenarios it is always possible to represent our experiments like this, where the shapes or areas of random events represent their probabilities. I have actually taken this idea from the Monte Carlo method of estimating areas using probabilities and reversed it to represent probabilities using areas. Your comments and suggestions are welcome.
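The derivation can be sanity-checked numerically for the factory scenario above. The sketch below simply plugs the stated percentages into P(A|D) = P(D|A)·P(A)/P(D); the variable names are mine:

```python
# Share of output per machine, and each machine's defect rate
# (numbers taken from the scenario in the article).
p_machine = {"A": 0.20, "B": 0.30, "C": 0.50}
p_defect_given = {"A": 0.05, "B": 0.03, "C": 0.01}

# Total probability of a defective item: P(D) = sum over machines of P(D|M) * P(M)
p_defect = sum(p_machine[m] * p_defect_given[m] for m in p_machine)

# Bayes theorem: P(A|D) = P(D|A) * P(A) / P(D)
p_A_given_D = p_defect_given["A"] * p_machine["A"] / p_defect
print(round(p_defect, 4), round(p_A_given_D, 4))  # 0.024 0.4167
```

So although machine A makes only 20% of the items, it accounts for roughly 42% of the defective ones.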
Prediction Accuracy Metrics
The lenskit.metrics.predict module contains prediction accuracy metrics. These are intended to be used as part of a Pandas split-apply-combine operation on a data frame that contains both predictions and ratings; for convenience, the lenskit.batch.predict() function will include ratings in the prediction frame when its input user-item pairs contain ratings. So you can perform the following to compute per-user RMSE over some predictions:
preds = predict(algo, pairs)
user_rmse = preds.groupby('user').apply(lambda df: rmse(df.prediction, df.rating))
Metric Functions
Prediction metric functions take two series, predictions and truth.
lenskit.metrics.predict.rmse(predictions, truth, missing='error')
Compute RMSE (root mean squared error).
Parameters:
* predictions (pandas.Series) – the predictions
* truth (pandas.Series) – the ground truth ratings from data
* missing (string) – how to handle predictions without truth. Can be one of 'error' or 'ignore'.
Returns: the root mean squared approximation error
lenskit.metrics.predict.mae(predictions, truth, missing='error')
Compute MAE (mean absolute error).
Parameters:
* predictions (pandas.Series) – the predictions
* truth (pandas.Series) – the ground truth ratings from data
* missing (string) – how to handle predictions without truth. Can be one of 'error' or 'ignore'.
Returns: the mean absolute approximation error
Working with Missing Data
LensKit rating predictors do not report predictions when their core model is unable to predict. For example, a nearest-neighbor recommender will not score an item if it cannot find any suitable neighbors. Following the Pandas convention, these items are given a score of NaN (when Pandas implements better missing data handling, it will use that), so use pandas.Series.isna()/pandas.Series.notna(), not the isnan versions. However, this causes problems when computing predictive accuracy: recommenders are not being tested on the same set of items.
If a recommender only scores the easy items, for example, it could do much better than a recommender that is willing to attempt more difficult items. A good solution to this is to use a fallback predictor so that every item has a prediction. In LensKit, lenskit.algorithms.basic.Fallback implements this functionality; it wraps a sequence of recommenders, and for each item, uses the first one that generates a score. You set it up like this: cf = ItemItem(20) base = Bias(damping=5) algo = Fallback(cf, base)
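The two metric formulas above are easy to verify by hand. The sketch below re-implements them in plain Python (no LensKit or Pandas) and ignores the `missing` handling; it illustrates the formulas, not LensKit's actual implementation:

```python
import math

def rmse(predictions, truth):
    """Root mean squared error over paired prediction/rating lists."""
    sq_errs = [(p - t) ** 2 for p, t in zip(predictions, truth)]
    return math.sqrt(sum(sq_errs) / len(sq_errs))

def mae(predictions, truth):
    """Mean absolute error over paired prediction/rating lists."""
    return sum(abs(p - t) for p, t in zip(predictions, truth)) / len(predictions)

preds = [3.5, 4.0, 2.0]
truth = [4.0, 3.5, 2.5]
print(rmse(preds, truth), mae(preds, truth))  # 0.5 0.5
```

Here every prediction is off by exactly 0.5, so both metrics come out to 0.5; in general RMSE penalizes large individual errors more heavily than MAE does.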
MT3510 Introduction to Mathematical Computing
Up until this point, we have worked with "non-symbolic" Python features. For example, if we set \(x = 1\) and then \(y = x\), the apparent relationship between \(x\) and \(y\) disappears when we change \(x\). This is because we are working with values for \(x\) and \(y\), rather than the symbols \(x\) and \(y\) themselves - i.e. we are doing arithmetic rather than algebra. The module SymPy is Python's Computer Algebra System (CAS). You might have used other CASs like Maple or Mathematica; SymPy has the major advantage of being free and open source. It has fewer features than the commercial packages, but is steadily growing. First we will import sympy, and use init_printing to get SymPy to output maths nicely using LaTeX - this is optional.
import sympy as sym
sym.init_printing()
Creating symbols
To create "symbols" to manipulate, use sym.symbols. Note that it takes a tuple of strings.
x, y, z = sym.symbols(("x", "y", "z"))
It's worth noting that your variable names can be different from the names that SymPy stores for the symbols:
# This is a recipe for confusion - you better have a good reason to do it!
u, v = sym.symbols(("s", "t"))
# This is a more reasonable thing to do
x0 = sym.symbols("x_0")
If you know that you are only interested in real values of the variables, use real=True when creating them. There are other assumptions you can put on symbols (like being positive) - see the SymPy documentation for a full list.
s = sym.symbols("s", real=True)
Expressions and substitution
We can use symbols to create expressions, mostly using the same syntax as for regular Python variables:
(3 * (x + y)**2 * sym.exp(x*y))/sym.log(z)
Note that we used the SymPy version of exp and log, rather than the numpy version. This is important - the numpy version simply will not work. This is because numpy.exp expects a number (or an array of numbers), rather than a symbol. There are SymPy equivalents of most of the standard mathematical functions you would use in numpy.
Let's assign this expression to a variable:
V = (3 * (x + y)**2 * sym.exp(x*y))/sym.log(z)
V looks like a function of three variables, but it is not a Python function, which is why we cannot "call" it with three arguments:
V(0, 1, 2)
TypeError Traceback (most recent call last)
----> 1 V(0, 1, 2)
TypeError: 'Mul' object is not callable
Instead, we should "substitute" values for \(x\), \(y\), and \(z\) like so:
V.subs(x, 2)
Note that this has not changed V itself - it has simply returned a new expression where \(x\) has been replaced with \(2\). To substitute multiple values, pass a list of (symbol, replacement) pairs:
V.subs([(x, 2), (y, 3), (z, 2)])
We are not restricted to substituting in numbers. Suppose that we have a parameterisation \(x = \sin(u)\), \(y = \cos(u)\) for \(u \in [0, 2\pi)\). We could make these substitutions like so:
u = sym.symbols("u")
V.subs([(x, sym.sin(u)), (y, sym.cos(u))])
Note that the expression above is still being stored as a symbolic expression rather than as a float, which helps to avoid rounding errors and the other perils associated with floats. If you really want a float, use evalf, providing it with a dict of symbol : value pairs.
# The first argument to evalf controls how precise the output is.
# The default is 15 places.
V.evalf(30, subs={x: 2, y: 3, z: 2})
Exact values
When creating expressions, we want to avoid floats when possible.
In the following example, we try to create the exact expression \(\frac{8}{3}x\) but instead get a rounded float coefficient:
u = 8 / 3 * x
This is because the fraction \(8/3\) is evaluated by Python before SymPy can get involved. There are several ways of avoiding this: you can create the fraction as a Rational number:
from sympy import Rational
u = Rational(8, 3) * x
Alternatively, you can "split up" your fraction like so:
u = 8 * x / 3
The same is true of roots: instead of doing
x ** 0.5
you should do
sym.sqrt(x)
Manipulating expressions
SymPy comes with the ability to simplify, factorise, and expand expressions. As an example, we will work with the cubic \(p = (x-2)(x + 3)(x + 7)\):
P = (x - 2) * (x + 3) * (x + 7)
To expand the product, use expand:
Q = sym.expand(P)
To go the other way, use factor:
sym.factor(Q)
An interesting feature is that P == Q evaluates to False. This is because they are not "structurally" the same expression, even if they are the same "algebraic" expression. The same is true of P - Q == 0. In order to check whether P and Q represent the same expression, the simplest thing to do is to simplify the expression P - Q and see whether it is \(0\):
sym.simplify(P - Q)
Note that in general, it is not clear what is meant by a "simpler" expression, so simplify may return results you do not expect. It is also theoretically impossible for simplify to always work out whether an expression is equal to 0, though it will usually manage it for most reasonable expressions.
Solving equations
One variable
To find the roots of an expression, use solveset. For example, to find the roots of the polynomial \(Q(x) = x^2 - x - 6\):
Q = x**2 - x - 6
sym.solveset(Q)
If we don't want roots (i.e. solutions to \(Q(x) = 0\)) but instead solutions to \(Q(x) = a\) for some \(a \in \mathbb{R}\), simply find the roots of \(Q - a\).
For example, to get the solutions of \(Q(x) = 1\), do:
sym.solveset(Q - 1)
If we're only interested in the real solutions, we can use domain=sym.Reals:
sym.solveset(Q + 7, domain=sym.Reals)
Note that solveset does not give us the multiplicity of roots:
sym.solveset(x**2 + 2*x + 1)
For polynomials only, we can get the multiplicities using roots:
sym.roots(x**2 + 2*x + 1)
You can also obtain the roots of an expression in terms of another variable:
# solve x^2 + yx + 1 = 0 for x
sym.solveset(x**2 + y*x + 1, x)
To access these solutions, you can convert them to a list:
sols = sym.solveset(x**2 + y*x + 1, x)
sols = list(sols)
Alternatively, you can iterate through them directly:
sols = sym.solveset(x**2 + y*x + 1, x)
for sol in sols:
    print(f"{sol} evaluated at y = 1: {sol.evalf(subs={y: 1})}")
-y/2 - sqrt((y - 2)*(y + 2))/2 evaluated at y = 1: -0.5 - 0.866025403784439*I
-y/2 + sqrt((y - 2)*(y + 2))/2 evaluated at y = 1: -0.5 + 0.866025403784439*I
Solving systems of equations
Suppose we want to find stationary points of the following system of ODEs:
\[ \frac{du}{dt} = (u+1)v, \qquad \frac{dv}{dt} = v(u + 2v). \]
That is, we want to solve the system
\[ (u+1)v = 0, \qquad v(u + 2v) = 0. \]
First we create our symbols. We will use real=True to only get real solutions to our system.
u, v = sym.symbols("u v", real=True)
Now our system is represented by the following expressions (which SymPy will assume we are trying to find roots of, as above):
eqs = [(u + 1) * v, v*(u + 2*v)]
And we can solve like so:
sols = sym.solve(eqs, [u, v])
Note that unlike solveset, the output from solve is already in list form.
Differentiation
One of the nice features of SymPy is its ability to perform differentiation (both partial and total).
Let's consider a moderately complicated function with a high likelihood of mistakes if we took derivatives by hand, \(V(x) = (\tan^2(x) + 1)^3\sinh(x^3)\):
V = (sym.tan(x) ** 2 + 1)**3 * sym.sinh(x**3)
To differentiate, use sym.diff, specifying which variable to differentiate with respect to (here there is of course only one option):
# Differentiate V with respect to x
sym.diff(V, x)
To take higher derivatives, just add a number to the end:
# d^2/dx^2 (V):
sym.diff(V, x, 2)
Alternatively, you can write out the variable you are differentiating with respect to repeatedly:
sym.diff(V, x, x)
Partial Derivatives
Now let's consider a function of three variables \(x, y, z\):
V = (3 * (x + y)**2 * sym.exp(x*y))/sym.log(z)
We can compute \(\frac{\partial V}{\partial x}\) in precisely the same way as above:
sym.diff(V, x)
The same would work for any of the other first partial derivatives; just replace x with the variable of interest. If we instead wanted \(\frac{\partial^2 V}{\partial z\partial x}\), we could obtain it like so:
sym.diff(V, x, z)
In general, to compute \(\frac{\partial^N V}{\partial x_1\partial x_2 \cdots \partial x_N}\), use sym.diff(V, x_N, x_(N-1), ..., x_2, x_1). Remember that the order can matter (though very often does not for the functions we are interested in).
Unevaluated derivative
It is possible to construct an object representing a derivative without actually computing it:
D = sym.Derivative(V, x, y)
This might seem like an odd thing to do, but you might want to build up an expression involving a very messy derivative without having it expand and make it difficult to see what's going on. To evaluate such a derivative, use doit:
D.doit()
You can also use doit on any expression in which D appears.
Integration
CASs are very well suited to performing differentiation for us, as there's nothing particularly "clever" going on - the rules for differentiation are relatively straightforward and you could apply them by hand. Integration, on the other hand, can be very difficult.
SymPy can integrate many functions that would be annoying to do manually. For example, the function \(f(x) = x^{3} e^{2x} \sin(\frac{x}{3})\) can be integrated using repeated integration by parts, but it would be a little unpleasant. SymPy can compute it easily:
f = x**3 * sym.exp(2*x) * sym.sin(x/3)
sym.integrate(f, x)
Definite integrals
To compute a definite integral, just provide integrate with the limits as well. For example, to compute \(\int_{-3}^4 f(x) dx\) we would do
# provide the limits as (x, -3, 4)
sym.integrate(f, (x, -3, 4))
To evaluate definite integrals with limits at infinity, use sym.oo (two o's). For example, to compute \(\int_{-\infty}^0 f(x)dx\):
sym.integrate(f, (x, -sym.oo, 0))
Multiple integrals
Computing multiple integrals works just like computing multiple derivatives: just give the variables and limits in the order you want the integrals to be performed. As an example, here is how to compute \(\int_{-\infty}^{0} \int_{-2\pi}^{y} x^2 e^y \,dx\,dy\):
g = x ** 2 * sym.exp(y)
sym.integrate(g, (x, -2*sym.pi, y), (y, -sym.oo, 0))
Unevaluated integrals
As with derivatives, you can leave an integral in an unevaluated form. This can be useful to check that you have entered the limits correctly and are integrating in the correct order.
sym.Integral(g, (x, -2*sym.pi, y), (y, -sym.oo, 0))
To evaluate the integral, use doit:
I = sym.Integral(g, (x, -2*sym.pi, y), (y, -sym.oo, 0))
I.doit()
Special functions
Many integrals do not have nice solutions, i.e. cannot be expressed in closed form in terms of standard functions. Some of these integrals have solutions in terms of special functions like the error function or the gamma function:
sym.integrate(sym.exp(-x**2), x)
sym.integrate(sym.exp(-x**3), (x, 0, 3))
Sometimes SymPy simply cannot perform the integration, in which case it just returns the integral:
sym.integrate(sym.sin(x) * sym.exp(-x**2))
Plotting
If we want to produce plots from expressions, it is convenient to convert the expression into a form that numpy can understand.
To do this, use the lambdify function:
import numpy as np
T = sym.cos(x) * (x**2 + y**2)
# First argument is the arguments of the new function (in order)
# Second argument is the expression to convert
# Third argument is the module to use (typically numpy)
U = sym.lambdify([x, y], T, "numpy")
Now the function can be plotted as usual:
import matplotlib.pyplot as plt
%matplotlib inline
xp = np.arange(-5, 5, 0.01)
yp = np.arange(-5, 5, 0.01)
X, Y = np.meshgrid(xp, yp)
Z = U(X, Y)
fig, ax = plt.subplots(subplot_kw={"projection": "3d"}, figsize=(8, 8))
ax.plot_surface(X, Y, Z, cmap="coolwarm");
We might want to just look at a "slice" of that surface, say through the plane \(y = 1\). We would do the same thing, but substitute in \(y = 1\) first:
T2 = T.subs(y, 1)
U2 = sym.lambdify(x, T2, "numpy")
%matplotlib inline
X = np.arange(-5, 5, 0.01)
Z = U2(X)
fig, ax = plt.subplots(figsize=(8, 8))
ax.plot(X, Z);
ax.set_ylabel('T(x, 1)', fontsize=14);
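One way to gain confidence in a lambdified derivative is to compare it against a finite difference. The sketch below does this in plain Python (no SymPy or NumPy required) for the T above, using the product-rule derivative ∂T/∂x = -sin(x)(x² + y²) + 2x·cos(x); the sample point and tolerance are arbitrary choices of mine:

```python
import math

def T_num(x, y):
    # numeric version of T = cos(x) * (x**2 + y**2)
    return math.cos(x) * (x**2 + y**2)

def dT_dx(x, y):
    # product rule: d/dx [cos(x) * (x^2 + y^2)]
    return -math.sin(x) * (x**2 + y**2) + 2 * x * math.cos(x)

# Central difference approximation at an arbitrary point
x0, y0, h = 1.2, 0.7, 1e-6
central_diff = (T_num(x0 + h, y0) - T_num(x0 - h, y0)) / (2 * h)
assert abs(central_diff - dT_dx(x0, y0)) < 1e-6
```

The central difference carries an O(h²) truncation error, so with h = 1e-6 the agreement is far tighter than the 1e-6 tolerance used here.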
How do you convert units of weight to units of mass?
31 Aug 2024
Title: Converting Units of Weight to Units of Mass: A Theoretical Framework
Abstract: The distinction between weight and mass is a fundamental concept in physics, with weight being a measure of the force exerted on an object by gravity and mass being a measure of its resistance to changes in motion. While units of weight are often expressed in terms of force (e.g., newtons), units of mass are typically measured in kilograms or grams. In this article, we explore the theoretical framework for converting units of weight to units of mass.
Introduction: Weight and mass are related but distinct physical quantities. Weight is a measure of the gravitational force acting on an object, while mass is a measure of its intrinsic property that determines its resistance to changes in motion. The conversion between these two quantities requires a fundamental understanding of their relationship.
Theoretical Background: Weight (W) can be expressed as the product of mass (m) and acceleration due to gravity (g):
W = m × g
where g is approximately 9.81 meters per second squared on Earth's surface. Mass, on the other hand, is a measure of an object's inertia, which remains constant regardless of its location or motion. The unit of mass in the International System of Units (SI) is the kilogram (kg).
Converting Weight to Mass: To convert units of weight to units of mass, we can rearrange the equation above to solve for mass:
m = W / g
This formula indicates that mass is equal to weight divided by the acceleration due to gravity. Therefore, to convert a unit of weight to a unit of mass, we simply divide the weight value by the acceleration due to gravity.
Conclusion: In conclusion, converting units of weight to units of mass requires an understanding of their theoretical relationship.
By using the formula m = W / g, we can transform units of weight into units of mass, providing a more fundamental and intrinsic measure of an object's properties.
References:
[1] Halliday, D., Resnick, R., & Walker, J. (2013). Fundamentals of Physics. John Wiley & Sons.
[2] Serway, R. A., & Jewett, J. W. (2018). Physics for Scientists and Engineers. Cengage Learning.
Note: The references provided are a selection of popular physics textbooks that cover the topics discussed in this article.
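The rearranged formula m = W/g is one line of code. Here is a minimal sketch (the function name and the example weight are my own, not from the article):

```python
G_EARTH = 9.81  # m/s^2, the surface value used in the article

def mass_from_weight(weight_newtons, g=G_EARTH):
    """m = W / g: convert a weight in newtons to a mass in kilograms."""
    return weight_newtons / g

# A weight of 981 N on Earth's surface corresponds to a mass of 100 kg.
print(round(mass_from_weight(981.0), 6))  # 100.0
```

Because g varies with location (about 1.62 m/s² on the Moon, for instance), the same weight maps to a different mass reading unless the local g is passed explicitly.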
How long is GRE practice?
The overall testing time for the GRE® General Test is about three hours and 45 minutes. There are six sections, one of which is an unidentified/unscored section.
Are the ETS GRE practice tests harder?
I actually did better on the practice tests than the real one. In my experience, the test felt much harder than any ETS-provided prep materials. I scored 160V/163Q on PP1 and 160V/160Q on PP2, and got 157V/155Q on the actual test.
How do I take the ETS GRE practice tests?
How to Get the Free GRE Practice Tests from ETS
1. Step 1: Create a free account at https://www.ets.org/gre/
2. Step 2: Sign in to your ETS account.
3. Step 3: Click "Shop for Test Preparation"
4. Step 4: Add the first two products to your cart.
5. Step 5: Review your shopping cart and click "Submit"
Is PowerPrep 1 accurate?
Speaking generally, it is very close to the scores that you get on the PowerPrep tests. I scored 326 and 329, if I remember correctly, on my PowerPrep tests, and I scored 329 on my final exam. Many of my friends had the same experience. One of my friends scored 330 and 334 on his tests and got 333 on his final test.
How can I study for the GRE in 20 days?
General Tips:
1. Take a diagnostic test before starting your preparation.
2. While taking practice tests, don't skip the AWA section.
3. Keep the last two days for review; finish the learning and problem-solving phase before that.
4. Practice sitting in one place for 5 hours.
5. On test day, keep yourself hydrated.
Are ETS practice tests accurate?
The ETS practice exams provide an accurate exam experience and use an accurate scoring algorithm (in comparison to other practice exams). While it's certainly no guarantee, if you take care of business on test day, you should be able to score in a similar range to your practice exam scores. Good luck!!
Scott - Tutor for High School English
Certified Tutor
I am a certified teacher in the state of Washington. I graduated from Eastern Washington University with an Elementary Education degree and a minor in math. After graduation, I moved to South Korea to teach English for a year and a half. At the moment, I am a sub in the Cheney School District, where I am one of the most requested subs and am more than capable of teaching every subject. My favorite subject is math. I am a regular sub at Cheney High School for math. Algebra and Geometry are my main subjects, but I have taken classes in Trigonometry and Stats. As a teacher, I am very active and engaging. I ask the students lots of questions to check their comprehension and never leave a question unanswered.
Connect with a tutor like Scott
Sports, fishing, and movies
Tutoring Subjects
College English
Elementary School Math
High School English
ISEE Prep
What is your teaching philosophy?
I believe that students don't truly learn the material unless it is presented in a way that is engaging and fun. Therefore, teachers need to put in extra effort to help students understand the material by not being cookie-cutter teachers.
What might you do in a typical first session with a student?
In the first session, I would start just by introducing myself - telling the students about where I am from, what I have done, and how I teach. Then, I would ask the students some questions about themselves. Then, it would be time to get into the tutoring lesson. I would be able to use the student's interests to help with certain parts of the lesson.
How can you help a student become an independent learner?
Well, that is always a goal. If a student can learn to do their work by themselves, then they have truly succeeded. If the student really has trouble working without constant reminding, I would start by giving them goals to work towards. If you give them just a few problems to complete without interruption, then they will receive a reward.
Then, you can keep upping the number of required problems. On the other hand, if the student becomes too distracted and does something else, they don't receive a reward. If they can eventually come in and do all of their work without being asked, they will not need tutoring anymore.
How would you help a student stay motivated?
Motivation can be increased in many different ways. If students are competitive, then I could try more of a "who can do the most" technique, or maybe tell them that they should try to beat what they did last time. If other students are just easily distracted, that means they aren't interested in the material, so the material needs to be changed or presented in a way that will keep their attention. Lastly, there is always just checking on the student's work and hounding them when necessary.
If a student has difficulty learning a skill or concept, what would you do?
The student has trouble learning the skill for one of two reasons. First, the material was not presented in a way that the student understands, so it needs to be explained a different way. The second reason is that the student learns differently, so maybe a hands-on approach would be the proper way to go. If the material is math, then I would bring in blocks or something visual for the student to work with.
How do you help students who are struggling with reading comprehension?
Reading comprehension doesn't just happen overnight. It requires repetition and explanation. If the student doesn't understand what they just read, then it needs to be explained to them. Visual explanations work best, but a verbal one can help if it is delivered correctly. Another problem could just be that they are reading too difficult a book.
What strategies have you found to be most successful when you start to work with a student?
My strategies when working with students are very simple. I keep everything at a high level of energy.
I move around, talking to students and answering questions individually as often as I can. If there is a recurring question, then it becomes a whole-group discussion where I ask the students for input on what we are working on and what would happen next for that particular question.

How would you help a student get excited/engaged with a subject that they are struggling in?
I start by showing that I am excited and interested in the topic. I usually have input, ranging from lots to some, on almost every topic. Then, I start throwing out random facts that are interesting for the students. Lastly, I try to bend the topic around into something that they will find interesting.

What techniques would you use to be sure that a student understands the material?
I ask a lot of follow-up questions. I start by asking if they have any questions, then follow it up with "why don't you walk me through the process?" We will do that as many times as I think are needed so that the student can do the problems on their own.

How do you build a student's confidence in a subject?
By showing them that they can do it. Even if it is a difficult topic or subject, everything can be broken down into very basic forms. Then, I teach the student how to start, piece by piece.

How do you evaluate a student's needs?
I judge the students by their body language, attitude, and how much they can complete on their own.

How do you adapt your tutoring to the student's needs?
All students learn differently; therefore, my teaching style needs to change with them. Student teaching taught me to be gentle with the students who were having trouble learning. In Korea, I learned that there is a time to be strict, and that is when students are just about to become unruly. Some students learn best by talking and a lengthy explanation, while others do better with hands-on activities.

What types of materials do you typically use during a tutoring session?
I'm a big fan of a whiteboard or something that I can write on that is big enough for many people to see. Depending on the topic, counting blocks would also be great. However, a pencil and paper are usually enough to explain what needs to be done.
{"url":"https://www.varsitytutors.com/ca/tutors/878126581","timestamp":"2024-11-11T06:55:03Z","content_type":"text/html","content_length":"450692","record_id":"<urn:uuid:c09bd849-fe58-4e04-81f0-cd6e748d9faa>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00753.warc.gz"}
[Solved] Solve each of the following system of equations in R. | Filo

Solve each of the following system of equations in R.

Topic: Linear Inequalities
Subject: Mathematics
Class: Class 11
Updated on: Nov 2, 2022
Answer type: Text solution: 1; Video solutions: 5 (avg. duration 12 min)
{"url":"https://askfilo.com/math-question-answers/solve-each-of-the-following-system-of-equations-in-r-frac-2x-3-4-2-geq-frac-4x-3","timestamp":"2024-11-13T02:39:20Z","content_type":"text/html","content_length":"518747","record_id":"<urn:uuid:d0567d2d-6f23-4643-a95b-42c4aec059b8>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00413.warc.gz"}
Broadcasting on Two-Dimensional Regular Grids

We study a specialization of the problem of broadcasting on directed acyclic graphs, namely, broadcasting on 2D regular grids. Consider a 2D regular grid with source vertex X at layer 0 and k+1 vertices at layer k ≥ 1, which are at distance k from X. Every vertex of the 2D...
{"url":"https://synthical.com/article/Broadcasting-on-Two-Dimensional-Regular-Grids-39acaa4a-ffc2-11ed-9b54-72eb57fa10b3?","timestamp":"2024-11-12T16:09:16Z","content_type":"text/html","content_length":"70551","record_id":"<urn:uuid:042ddeb8-6a77-4c1b-8e69-913279727a74>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00549.warc.gz"}
Convert Circular Mil to Are

Please provide values below to convert circular mil to are [a], or vice versa.

Circular Mil to Are Conversion Table

Circular Mil          Are [a]
0.01 circular mil     5.067074790975E-14 a
0.1 circular mil      5.067074790975E-13 a
1 circular mil        5.067074790975E-12 a
2 circular mil        1.013414958195E-11 a
3 circular mil        1.5201224372925E-11 a
5 circular mil        2.5335373954875E-11 a
10 circular mil       5.067074790975E-11 a
20 circular mil       1.013414958195E-10 a
50 circular mil       2.5335373954875E-10 a
100 circular mil      5.067074790975E-10 a
1000 circular mil     5.067074790975E-9 a

How to Convert Circular Mil to Are

1 circular mil = 5.067074790975E-12 a
1 a = 197352524139 circular mil

Example: convert 15 circular mil to a:
15 circular mil = 15 × 5.067074790975E-12 a = 7.6006121864624E-11 a
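The conversion factor in the table comes straight from the definitions: a circular mil is the area of a circle one mil (0.001 inch) in diameter, and an are is 100 square meters. A small sketch reproducing the factor (the function name is ours, not from the converter):

```python
import math

MIL_IN_METERS = 2.54e-5  # one mil = 0.001 inch = 2.54e-5 m


def circular_mil_to_are(cmil):
    """Convert circular mils to ares via square meters."""
    # Area of a circle with diameter 1 mil, scaled by the input amount
    square_meters = cmil * math.pi / 4 * MIL_IN_METERS ** 2
    return square_meters / 100.0  # 1 are = 100 m^2

# Reproduces the table's factor: 1 circular mil ≈ 5.067074790975e-12 a
```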
{"url":"https://www.unitconverters.net/area/circular-mil-to-are.htm","timestamp":"2024-11-12T03:02:11Z","content_type":"text/html","content_length":"10521","record_id":"<urn:uuid:dabe34aa-8b30-4009-bbce-934773364e88>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00068.warc.gz"}
Free Printable Multiplication Chart 100X100 Free Printable | Multiplication Chart Printable

A multiplication chart is a handy tool for children learning how to multiply and divide. There are many uses for a multiplication chart.

What is a Printable Multiplication Chart?

A multiplication chart can be used to help kids learn their multiplication facts. Multiplication charts come in many forms, from full-page times tables to single-page ones. While individual tables are useful for presenting chunks of information, a full-page chart makes it easier to review facts that have already been mastered. The multiplication chart will normally include a left column and a top row. When you want to find the product of two numbers, pick the first number from the left column and the second number from the top row.

Multiplication charts are practical learning tools for both adults and children. Children can use them at home or in school. Printable 100×100 multiplication charts are available on the web and can be printed out and laminated for durability. They are a great tool to use in math class or homeschooling, and will provide a visual reminder for children as they learn their multiplication facts.

Why Do We Use a Multiplication Chart?

A multiplication chart is a diagram that shows how to multiply two numbers. You pick the first number in the left column, the second number from the top row, and read the product where the row and column meet. Multiplication charts are valuable for many reasons, including helping kids learn how to divide and simplify fractions.
Multiplication charts can also be handy as desk resources because they serve as a constant reminder of the student's progress. Multiplication charts are likewise useful for helping students memorize their times tables. As with any skill, memorizing multiplication tables takes time and practice.

Printable Multiplication Chart 100×100

If you're looking for a printable 100×100 multiplication chart, you've come to the right place. Multiplication charts are available in different layouts, including full size and half size, as well as a variety of cute designs. Multiplication charts and tables are essential tools for children's education. These charts are great for use in homeschool math binders or as classroom posters. A printable 100×100 multiplication chart is a useful tool to reinforce math facts and can help a child learn multiplication quickly. It's also a great tool for skip counting and learning the times tables.
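The read-off rule described above (a number from the left column, a number from the top row, and the product where they meet) is easy to generate programmatically; a minimal sketch:

```python
def multiplication_chart(n):
    """Return an n-by-n multiplication chart as a list of rows.

    Row i, column j holds (i + 1) * (j + 1), mirroring how a printed
    chart is read: pick a number from the left column and one from
    the top row, and their product sits where the two meet.
    """
    return [[(i + 1) * (j + 1) for j in range(n)] for i in range(n)]


# Print a small 5x5 chart
for row in multiplication_chart(5):
    print(" ".join(f"{v:3d}" for v in row))
```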
{"url":"https://multiplicationchart-printable.com/multiplication-chart-100x100-printable/free-printable-multiplication-chart-100x100-free-printable-10/","timestamp":"2024-11-11T16:56:05Z","content_type":"text/html","content_length":"28407","record_id":"<urn:uuid:2bb932a5-0e54-456d-8f8c-5a4cb1c36e74>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00134.warc.gz"}
Probability Theory This is a past course that has already concluded. If you are looking for a current course, please click here. Welcome to Probability Theory! For course info and policies, please see the syllabus. For grades, log into Moodle. If you need help or have questions, please contact Prof. Wright. Prof. Wright's office hours: Mon. 12:45–2:00pm in RNS 160R, Tues. 10–11am on Zoom, Wed. 3:15–4:15 in RMS 405, Thurs. 10–11am in RMS 405, Fri. 12:45–2:00pm in RNS 160R, and other times by appointment Help sessions: Mon. and Thurs. 7–8pm in Tomson 186 Have a great spring break! No class March 28 – April 1.
{"url":"http://mrwright.org/teaching/math262s22/index.php","timestamp":"2024-11-04T02:16:31Z","content_type":"text/html","content_length":"53016","record_id":"<urn:uuid:9b5ae3ac-9ae0-49d2-b813-12473ce07d87>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00426.warc.gz"}
Math Contest Repository

CIMC 2022 Part A - Question 5, CEMC UWaterloo (Canadian Intermediate Mathematics Contest 2022, Part A, Question 5, CEMC - UWaterloo)

Magnus and Viswanathan play a game against each other three times:
• In each game, each player’s score is a positive integer. The players’ scores at the end of the three games are six different integers.
• In each game, the winner’s score is at least $25$ points. If the winner’s score is $25$, their opponent’s score must be at most $23$. If the winner’s score is greater than $25$, their opponent’s score must be exactly $2$ less than the winner’s score.
• Viswanathan wins either the first game or the second game, but not both.
• Viswanathan wins the third game with a score of $25$.
• Each player’s score in the second game is the average of their scores in the first and third games.

What was Magnus’s score in the third game?
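The conditions above are simple enough to check exhaustively. A brute-force sketch (assuming, for the search only, that all scores stay below 100):

```python
from itertools import product


def valid_game(winner_score, loser_score):
    """The winner scores at least 25; a winning score of exactly 25
    forces the loser to at most 23, and a winning score above 25
    forces the loser to exactly 2 less."""
    if winner_score < 25:
        return False
    if winner_score == 25:
        return loser_score <= 23
    return loser_score == winner_score - 2


answers = set()
for m1, m3, v1 in product(range(1, 100), range(1, 100), range(1, 100)):
    v3 = 25  # Viswanathan wins game 3 with a score of 25
    if (m1 + m3) % 2 or (v1 + v3) % 2:
        continue  # game-2 scores (averages) must be integers
    m2, v2 = (m1 + m3) // 2, (v1 + v3) // 2
    if len({m1, m2, m3, v1, v2, v3}) != 6:
        continue  # the six scores must all differ
    if not valid_game(v3, m3):
        continue
    # Viswanathan wins exactly one of the first two games,
    # and Magnus wins the other
    v_then_m = valid_game(v1, m1) and valid_game(m2, v2)
    m_then_v = valid_game(m1, v1) and valid_game(v2, m2)
    if v_then_m or m_then_v:
        answers.add(m3)

print(answers)  # every consistent scenario gives the same game-3 score
```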
{"url":"https://mathcontestrepository.pythonanywhere.com/problem/cimc22a5/","timestamp":"2024-11-13T15:19:40Z","content_type":"text/html","content_length":"10967","record_id":"<urn:uuid:ffe9385a-2eef-419a-a0be-46fec8e3247f>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00182.warc.gz"}
Model Scaling Transformation

Good scaling of models can greatly improve the numerical properties of a problem and thus increase reliability and convergence. The core.scale_model transformation allows users to separate scaling of a model from the declaration of the model variables and constraints, which allows for models to be written in more natural forms and to be scaled and rescaled as required without having to rewrite the model code.

class pyomo.core.plugins.transform.scaling.ScaleModel(**kwds)

Transformation to scale a model. This plugin performs variable, constraint, and objective scaling on a model based on the scaling factors in the suffix ‘scaling_factor’ set for the variables, constraints, and/or objective. This is typically done to scale the problem for improved numerical properties.

Supported transformation methods:

By default, scaling components are renamed with the prefix scaled_. To disable this behavior and scale variables in-place (or keep the same names in a new model), use the rename=False argument to apply_to or create_using.
>>> from pyomo.environ import *
>>> # create the model
>>> model = ConcreteModel()
>>> model.x = Var(bounds=(-5, 5), initialize=1.0)
>>> model.y = Var(bounds=(0, 1), initialize=1.0)
>>> model.obj = Objective(expr=1e8*model.x + 1e6*model.y)
>>> model.con = Constraint(expr=model.x + model.y == 1.0)
>>> # create the scaling factors
>>> model.scaling_factor = Suffix(direction=Suffix.EXPORT)
>>> model.scaling_factor[model.obj] = 1e-6    # scale the objective
>>> model.scaling_factor[model.con] = 2.0     # scale the constraint
>>> model.scaling_factor[model.x] = 0.2       # scale the x variable
>>> # transform the model
>>> scaled_model = TransformationFactory('core.scale_model').create_using(model)
>>> # print the value of the objective function to show scaling has occurred
>>> print(value(model.x))
>>> print(value(scaled_model.scaled_x))
>>> print(value(scaled_model.scaled_x.lb))
>>> print(value(model.obj))
>>> print(value(scaled_model.scaled_obj))

propagate_solution(scaled_model, original_model)

This method takes the solution in scaled_model and maps it back to the original model. It will also transform duals and reduced costs if the suffixes ‘dual’ and/or ‘rc’ are present. The scaled_model argument must be a model that was already scaled using this transformation, as it expects data from the transformation to perform the back mapping.

○ scaled_model (Pyomo Model) – The model that was previously scaled with this transformation
○ original_model (Pyomo Model) – The original unscaled source model

Setting Scaling Factors

Scaling factors for components in a model are declared using Suffixes, as shown in the example above. In order to define a scaling factor for a component, a Suffix named scaling_factor must first be created to hold the scaling factor(s). Scaling factor suffixes can be declared at any level of the model hierarchy, but scaling factors declared on the higher-level models or Blocks take precedence over those declared at lower levels.
Scaling suffixes are dict-like, where each key is a Pyomo component and the value is the scaling factor to be applied to that component. In the case of indexed components, scaling factors can either be declared for an individual index or for the indexed component as a whole (with scaling factors for individual indices taking precedence over overall scaling factors). In the case that a scaling factor is declared for a component at multiple levels of the hierarchy, the highest-level scaling factor will be applied. It is also possible (but not encouraged) to define a “default” scaling factor to be applied to any component for which a specific scaling factor has not been declared by setting an entry in a Suffix with a key of None. In this case, the default value declared closest to the component to be scaled will be used (i.e., the first default value found when walking up the model hierarchy).

Applying Model Scaling

The core.scale_model transformation provides two approaches for creating a scaled model.

In-Place Scaling

The apply_to(model) method can be used to apply scaling directly to an existing model. When using this method, all the variables, constraints and objectives within the target model are replaced with new scaled components and the appropriate scaling factors applied. The model can then be sent to a solver as usual; however, the results will be in terms of the scaled components and must be un-scaled by the user.

Creating a New Scaled Model

Alternatively, the create_using(model) method can be used to create a new, scaled version of the model which can be solved. In this case, a clone of the original model is generated with the variables, constraints and objectives replaced by scaled equivalents. Users can then send the scaled model to a solver, after which the propagate_solution method can be used to map the scaled solution back onto the original model for further analysis.
The advantage of this approach is that the original model is maintained separately from the scaled model, which facilitates rescaling and other manipulation of the original model after a solution has been found. The disadvantage of this approach is that cloning the model may result in memory issues when dealing with larger models.
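As a toy illustration of the arithmetic involved (plain Python, not the Pyomo API), scaling multiplies each component's value by its factor, and mapping a solution back, as propagate_solution does, divides by the same factors. The numbers here mirror the documentation example above (x scaled by 0.2, the objective by 1e-6):

```python
def scale(values, factors):
    # Apply scaling factors; components without a factor default to 1.0
    return {k: factors.get(k, 1.0) * v for k, v in values.items()}


def unscale(values, factors):
    # Inverse mapping, analogous to what propagate_solution performs
    return {k: v / factors.get(k, 1.0) for k, v in values.items()}


factors = {"x": 0.2, "obj": 1e-6}
solution = {"x": 1.0, "y": 1.0, "obj": 1.01e8}  # obj = 1e8*x + 1e6*y
scaled = scale(solution, factors)       # x -> 0.2, obj -> ~101.0
recovered = unscale(scaled, factors)    # back to the original values
```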
{"url":"https://pyomo.readthedocs.io/en/stable/model_transformations/scaling.html","timestamp":"2024-11-06T13:40:45Z","content_type":"text/html","content_length":"26186","record_id":"<urn:uuid:e797c0f6-d109-4499-8e10-e70e81151f59>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00402.warc.gz"}
Aryabhata | Achievements, Biography, & Facts | Britannica

Indian astronomer and mathematician

Quick Facts
Also called: Aryabhata I or Aryabhata the Elder
Born: 476, possibly Ashmaka or Kusumapura, India

Aryabhata (born 476, possibly Ashmaka or Kusumapura, India) was an astronomer and the earliest Indian mathematician whose work and history are available to modern scholars. He is also known as Aryabhata I or Aryabhata the Elder to distinguish him from a 10th-century Indian mathematician of the same name. He flourished in Kusumapura—near Patalipurta (Patna), then the capital of the Gupta dynasty—where he composed at least two works, Aryabhatiya (c. 499) and the now lost Aryabhatasiddhanta. Aryabhatasiddhanta circulated mainly in the northwest of India and, through the Sāsānian dynasty (224–651) of Iran, had a profound influence on the development of Islamic astronomy. Its contents are preserved to some extent in the works of Varahamihira (flourished c. 550), Bhaskara I (flourished c. 629), Brahmagupta (598–c. 665), and others. It is one of the earliest astronomical works to assign the start of each day to midnight. Aryabhatiya was particularly popular in South India, where numerous mathematicians over the ensuing millennium wrote commentaries. The work was written in verse couplets and deals with mathematics and astronomy.
Following an introduction that contains astronomical tables and Aryabhata’s system of phonemic number notation in which numbers are represented by a consonant-vowel monosyllable, the work is divided into three sections: Ganita (“Mathematics”), Kala-kriya (“Time Calculations”), and Gola (“Sphere”). In Ganita Aryabhata names the first 10 decimal places and gives algorithms for obtaining square and cubic roots, using the decimal number system. Then he treats geometric measurements—employing 62,832/20,000 (= 3.1416) for π, very close to the actual value 3.14159—and develops properties of similar right-angled triangles and of two intersecting circles. Using the Pythagorean theorem, he obtained one of the two methods for constructing his table of sines. He also realized that the second-order sine difference is proportional to sine. Mathematical series, quadratic equations, compound interest (involving a quadratic equation), proportions (ratios), and the solution of various linear equations are among the arithmetic and algebraic topics included. Aryabhata’s general solution for linear indeterminate equations, which Bhaskara I called kuttakara (“pulverizer”), consisted of breaking the problem down into new problems with successively smaller coefficients—essentially the Euclidean algorithm and related to the method of continued fractions. With Kala-kriya Aryabhata turned to astronomy—in particular, treating planetary motion along the ecliptic. The topics include definitions of various units of time, eccentric and epicyclic models of planetary motion (see Hipparchus for earlier Greek models), planetary longitude corrections for different terrestrial locations, and a theory of “lords of the hours and days” (an astrological concept used for determining propitious times for action).
Aryabhatiya ends with spherical astronomy in Gola, where he applied plane trigonometry to spherical geometry by projecting points and lines on the surface of a sphere onto appropriate planes. Topics include prediction of solar and lunar eclipses and an explicit statement that the apparent westward motion of the stars is due to the spherical Earth’s rotation about its axis. Aryabhata also correctly ascribed the luminosity of the Moon and planets to reflected sunlight. The Indian government named its first satellite Aryabhata (launched 1975) in his honour. Takao Hayashi
{"url":"https://www.britannica.com/biography/Aryabhata-I","timestamp":"2024-11-14T14:38:06Z","content_type":"text/html","content_length":"103514","record_id":"<urn:uuid:f6ee1dd0-c0be-41cd-9f1a-6dd58b8a5959>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00727.warc.gz"}
Final Exam | Probabilistic Systems Analysis and Applied Probability | Electrical Engineering and Computer Science | MIT OpenCourseWare

Exam Information

The Final Exam covers the entire course; however, the emphasis is on the material not covered in Quiz 1 and Quiz 2.

Preparation Activities

Review the Practice Exams and Solutions from previous years. Take the exam on your own and check your answers when you’re done.

Congratulations! You’ve completed Probabilistic Systems Analysis and Applied Probability. You have learned all the basic tools of probability theory and the main concepts of statistical inference (both Bayesian and classical), and have been exposed to some classes of random processes. This is already enough to allow application of probabilistic methods to many real-world applications.

Further Study

For those wishing further study, there are several possible directions.
{"url":"https://ocw.mit.edu/courses/6-041sc-probabilistic-systems-analysis-and-applied-probability-fall-2013/pages/final-exam/","timestamp":"2024-11-02T15:06:43Z","content_type":"text/html","content_length":"98914","record_id":"<urn:uuid:21413ac2-a1ef-4f56-91fe-d7824c0d5111>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00421.warc.gz"}
Tensor networks and phase transitions in machine learning Speaker: Cris Moore (Santa Fe Institute) Date: 10/15/24 Abstract: Suppose we observe a matrix of data with a low-rank “signal” obscured by noise. The standard way to find the signal, at least approximately, is PCA (principal component analysis): just look at the eigenvectors of the matrix. For Gaussian noise, random matrix theory tells us exactly how well this works: that is, the accuracy we can achieve as a function of the signal-to-noise ratio. For tensors, such as three-index tables A_{ijk}, the situation is much more complex. Here there seems to be a “statistical-computational gap,” namely a regime where finding the signal is possible but exponentially hard. Physically, this corresponds to a “glass transition,” where the optimum becomes hidden behind an energy barrier. Mathematically, it means that we believe no polynomial-time algorithm exists, and that exhaustive search is necessary. I’ll give evidence for this exponential hardness by showing that no algorithm remotely similar to PCA can work. Along the way, I’ll give an introduction to tensor networks — a generalization of matrix products and traces that everyone should know about. This is joint work with Tim Kunisky (Yale) and Alex Wein (UC Davis).
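The PCA step the abstract refers to, extracting the leading eigenvector of a matrix, can be sketched with plain power iteration (a minimal illustration, not the spectral analysis from the talk):

```python
def top_eigenvector(A, iters=200):
    """Power iteration: repeatedly apply A and renormalize; the iterate
    converges to the leading eigenvector of a symmetric matrix A."""
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v


# A symmetric matrix whose leading eigenvector is (1, 1)/sqrt(2),
# with eigenvalue 3 (the "signal" direction PCA would report)
A = [[2.0, 1.0], [1.0, 2.0]]
v = top_eigenvector(A)
```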
{"url":"https://math.dartmouth.edu/~acms/seminar_pages/MooreF24.html","timestamp":"2024-11-09T07:11:31Z","content_type":"application/xhtml+xml","content_length":"21626","record_id":"<urn:uuid:6a949294-b6a3-4a6a-a079-2ee78bb590a1>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00758.warc.gz"}
Test-Retest Coefficient of Equivalence & Stability — testRetestCES

The testRetestCES function computes the test-retest Coefficient of Equivalence and Stability (Schmidt, Le & Ilies, 2003).

testRetestCES(
  dat = NULL,
  moments = NULL,
  testDat = NULL,
  retestDat = NULL,
  parallelTests = "means",
  sortItems = FALSE,
  convertToNumeric = TRUE,
  digits = 4
)

# S3 method for testRetestCES
print(x, digits = x$input$digits, ...)

Arguments

dat: A dataframe. For testRetestCES, this dataframe must contain the items in the scale at both measurement moments. If no dataframe is specified, a dialogue will be launched to allow the user to select an SPSS datafile. If only one dataframe is specified, either the items have to be ordered chronologically (i.e. first all items for the first measurement, then all items for the second measurement), or the vector 'moments' has to be used to indicate, for each item, to which measurement moment it belongs. The number of columns in this dataframe MUST be even! Note that instead of providing this dataframe, the items of each measurement moment can be provided separately in testDat and retestDat as well.

moments: Used to indicate to which measurement moment each item in 'dat' belongs; should be a vector with the same length as dat has columns, and with two possible values (e.g. 1 and 2).

testDat, retestDat: Dataframes with the items for each measurement moment: note that the items have to be in the same order (unless sortItems is TRUE).

parallelTests: A vector indicating which items belong to which parallel test; like the moments vector, this should have two possible values (e.g. 1 and 2). Alternatively, it can be a character value, 'means' or 'variances'; in this case, parallelSubscales will be used to create roughly parallel halves.

sortItems: If true, the columns (items) in each dataframe are ordered alphabetically before starting. This can be convenient to ensure that the order of the items at each measurement moment is the same.
convertToNumeric: When TRUE, the function will attempt to convert all vectors in the dataframes to numeric.

digits: Number of digits to print.

x: The object to print.

Value

An object with the input and several output variables. Most notably:
- Input specified when calling the function
- Intermediate values and objects computed to get to the final results
- The value of the test-retest Coefficient of Equivalence and Stability

Details

This function computes the test-retest Coefficient of Equivalence and Stability (CES) as described in Schmidt, Le & Ilies (2003). Note that this function only computes the test-retest CES for a scale that is administered twice and split into two parallel halves post-hoc (this procedure is explained on page 210, and the equations that are used, 16 and 17a, are explained on page 212). This function uses equations 16 and 17 on page 212 of Schmidt, Le & Ilies (2003): in other words, this function assumes that one scale is administered twice. If you'd like the computation for two different but parallel scales/measures to be implemented, please contact me.

References

Schmidt, F. L., Le, H., & Ilies, R. (2003). Beyond Alpha: An Empirical Examination of the Effects of Different Sources of Measurement Error on Reliability Estimates for Measures of Individual-differences Constructs. Psychological Methods, 8(2), 206-224. doi:10/dzmk7n

Examples

if (FALSE) {
### This will prompt the user to select an SPSS file

### Load data from simulated dataset testRetestSimData (which
### satisfies essential tau-equivalence).
### The first column is the true score, so it's excluded in this example.
exampleData <- testRetestSimData[, 2:ncol(testRetestSimData)];

### Compute test-retest alpha coefficient
#> Coefficient of Equivalence and Stability: 0.428
#> To help assess whether the subscales (automatically generated using means) are parallel, here are the means and variances:
#>   Mean subscale a1, time 1: 109.2926 (variance = 219.5932)
#>   Mean subscale a2, time 1: 167.7709 (variance = 556.0309)
#>   Mean subscale a1, time 2: 110.8976 (variance = 189.5549)
#>   Mean subscale a2, time 2: 170.8491 (variance = 613.3707)
}
{"url":"https://ufs.opens.science/reference/testRetestCES.html","timestamp":"2024-11-08T12:10:53Z","content_type":"text/html","content_length":"13968","record_id":"<urn:uuid:a394e593-c119-461b-a4c5-6076b94804c4>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00243.warc.gz"}
Use factor theorem to factorize the following polynomials completely: - WorkSheets Buddy

Use factor theorem to factorize the following polynomials completely:
(i) 4x^3 + 4x^2 – 9x – 9
(ii) x^3 – 19x – 30
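By the factor theorem, (x − r) divides p(x) exactly when p(r) = 0, and the rational root theorem limits the candidates r = p/q to p dividing the constant term and q dividing the leading coefficient. A minimal sketch of that search:

```python
from fractions import Fraction


def poly_eval(coeffs, x):
    """Evaluate a polynomial given as [a_n, ..., a_1, a_0] at x (Horner)."""
    result = Fraction(0)
    for c in coeffs:
        result = result * x + c
    return result


def rational_roots(coeffs):
    """Rational roots p/q of an integer polynomial with nonzero
    constant term: p divides a_0 and q divides a_n; by the factor
    theorem each root r found gives a factor (x - r)."""
    a_n, a_0 = coeffs[0], coeffs[-1]
    cands = {Fraction(p, q) * s
             for p in range(1, abs(a_0) + 1) if a_0 % p == 0
             for q in range(1, abs(a_n) + 1) if a_n % q == 0
             for s in (1, -1)}
    return sorted(r for r in cands if poly_eval(coeffs, r) == 0)
```

For (i) this finds the roots −3/2, −1 and 3/2, giving (x + 1)(2x − 3)(2x + 3); for (ii) it finds −3, −2 and 5, giving (x + 2)(x + 3)(x − 5).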
{"url":"https://www.worksheetsbuddy.com/use-factor-theorem-to-factorize-the-following-polynomials-completely/","timestamp":"2024-11-12T23:24:48Z","content_type":"text/html","content_length":"141794","record_id":"<urn:uuid:b428ba7b-bedf-4992-adb6-0c407268b572>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00028.warc.gz"}
Big Numbers

Today, let's talk about very large numbers. The University of Houston Mathematics Department presents this program about the machines that make our civilization run, and the people whose ingenuity created them.

Think of a big number. Perhaps you thought of a billion or a trillion. A kid may have shouted "gazillion!" A trillion is big. But much larger numbers appear in very real situations. For example, the number of ways in which a deck of cards can be arranged is 52 x 51 x 50 x ... and keep on multiplying all the way to 1. Once you are done multiplying these 52 numbers you will get a number 68 digits long. This number has 55 more digits than a trillion and is close to the number of atoms in our galaxy.

The number of different arrangements of a deck of cards is vast: If you shuffle a deck well, you can be certain that nobody has played quite the same game ever before. Many games lead to similarly large numbers of permutations: the number of positions in checkers has 20 digits. For chess it is a number over 40 digits long. Large numbers appear frequently whenever we count the possible ways of arranging objects.

You have probably heard of the Infinite Monkey Theorem: If you put some monkeys in a room and let them pound away on typewriters, they will eventually produce all the works of Shakespeare. Even if this were true, it would take an unimaginably long time.

In his story The Library of Babel, the Argentinian writer Jorge Luis Borges presents a similar idea. He described a strange universe of hexagonal cells lined by shelves of books. Each book is unique and consists of a different arrangement of 25 letters and symbols on 410 pages. All possible books that can be written with these 25 characters are somewhere in the library. Borges writes that therefore the library contains "the minutely detailed history of the future, the archangels' autobiographies ... the translation of every book in all languages, the interpolations of every book in all books."
The Argentinian writer Jorge Luis Borges [Wikipedia: photographer José María "Pepe" Fernández (1929-2006), Paris, 1969.]

From Borges' description we can compute the enormity of the Library of Babel. The total number of books in the library has nearly two million digits. There simply is no useful analogy for numbers this large: the number of stars in the sky, and even the number of grains of sand needed to fill the entire visible universe, do not even come close. The best I came up with is "This number is really, really big," with perhaps a few more repeats of "really."

If you have trouble imagining such enormity, you are not alone. Some mathematicians even say that doing so is not only pointless, but meaningless. These ultrafinitists deny the existence of numbers that are too big to be physically realized.

Think of all the different ways in which you can hang the shirts in your closet, or all the ways cars could be arranged in a parking lot you may be passing. The possibilities are vast. Ours is just one of all the possible realities. But all other possibilities have been described somewhere, in some book, in the vast Library of Babel.

This is Krešo Josić at the University of Houston, where we are interested in the way inventive minds work.

(Theme music)

You can check here for a detailed explanation of how many books are in the library, or go here to download Borges' number. The Infinite Monkey Theorem is discussed at https://en.wikipedia.org/wiki/Infinite_monkey_theorem. You can read about ultrafinitists at https://en.wikipedia.org/wiki/Ultrafinitism. For a peek into the interactions between people who take this view and others in the mathematics community, you can read the opinions of Doron Zeilberger: http://www.math.rutgers.edu/~zeilberg/OPINIONS.html

This episode first aired on May 22, 2013.
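The counts in the episode are easy to verify. A quick check in Python puts exact numbers on them: 52! turns out to have 68 digits (55 more than a trillion's 13), and the number of possible books runs to about 1.8 million digits. The page layout used below, 40 lines of 80 characters per page, is the format Borges gives in the story.

```python
import math

# Orderings of a standard 52-card deck: 52!
deck_digits = len(str(math.factorial(52)))

# Library of Babel: 25 symbols filling 410 pages of 40 lines x 80 characters
# (the page layout Borges specifies in the story).
chars_per_book = 410 * 40 * 80                            # 1,312,000 characters
book_count_digits = math.floor(chars_per_book * math.log10(25)) + 1

print(deck_digits, book_count_digits)  # 68 1834098
```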
{"url":"https://engines.egr.uh.edu/episode/2885","timestamp":"2024-11-13T04:42:12Z","content_type":"text/html","content_length":"32682","record_id":"<urn:uuid:8636e92b-f587-48a9-9607-204989f4d7bc>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00164.warc.gz"}
Publications about 'Noise amplification'

Back to MJ's Publications

1. S. Samuelson. Performance tradeoffs of accelerated first-order optimization algorithms. PhD thesis, University of Southern California, 2024. Keyword(s): Accelerated first-order algorithms, Control for optimization, Convergence rate, Convex optimization, Gradient descent, Gradient-flow dynamics, Heavy-ball method, Nesterov's accelerated method, Nonnormal dynamics, Noise amplification, Optimization, Transient growth. [bibtex-entry]

2. H. Mohammadi. Robustness of gradient methods for data-driven decision making. PhD thesis, University of Southern California, 2022. Keyword(s): Accelerated first-order algorithms, Control for optimization, Convergence rate, Convex optimization, Data-driven control, Gradient descent, Gradient-flow dynamics, Heavy-ball method, Integral quadratic constraints, Linear quadratic regulator, Model-free control, Nesterov's accelerated method, Nonconvex optimization, Nonnormal dynamics, Noise amplification, Optimization, Optimal control, Polyak-Lojasiewicz inequality, Random search method, Reinforcement learning, Sample complexity, Second-order moments, Transient growth. [bibtex-entry]

1. H. Mohammadi, M. Razaviyayn, and M. R. Jovanovic. Tradeoffs between convergence rate and noise amplification for momentum-based accelerated optimization algorithms. IEEE Trans. Automat. Control, 2024. Note: doi:10.1109/TAC.2024.3453656. Keyword(s): Accelerated first-order algorithms, Control for optimization, Convergence rate, Convex optimization, Gradient descent, Fundamental limitations, Heavy-ball method, Nesterov's accelerated method, Nonnormal dynamics, Noise amplification, Second-order moments. [bibtex-entry]

2. L. Ballotta, M. R. Jovanovic, and L. Schenato. Can decentralized control outperform centralized? The role of communication latency. IEEE Trans. Control Netw. Syst., 10(3):1629-1640, September 2023.
Keyword(s): Controller architecture, Fundamental limitations, Networks, Networks of dynamical systems, Noise amplification, Performance bounds, Topology design. [bibtex-entry]

3. H. Mohammadi, M. Razaviyayn, and M. R. Jovanovic. Robustness of accelerated first-order algorithms for strongly convex optimization problems. IEEE Trans. Automat. Control, 66(6):2480-2495, June 2021. Keyword(s): Accelerated first-order algorithms, Consensus networks, Control for optimization, Convex optimization, Integral quadratic constraints, Linear matrix inequalities, Noise amplification, Second-order moments, Semidefinite programming. [bibtex-entry]

Back to MJ's Publications

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.

Last modified: Sat Oct 5 22:00:41 2024. Author: mihailo. This document was translated from BibTeX by bibtex2html.
{"url":"https://viterbi-web.usc.edu/~mihailo/Keyword/NOISE-AMPLIFICATION.html","timestamp":"2024-11-11T23:41:11Z","content_type":"text/html","content_length":"21652","record_id":"<urn:uuid:841dc630-968c-4ec8-80a8-0853541461b7>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00786.warc.gz"}
Tabular And Bar Graph Data Interpretation Quiz-3
Team LST

Hello Aspirants! As we all know, Data Interpretation is a vital part of the quantitative aptitude section of every competitive exam. Data Interpretation is the process of making sense of a collection of data that has been processed. The data are presented in the form of charts (tabular, bar, pie, line, missing-data, caselet, and radar charts). In this article we provide different charts with some questions. These Data Interpretation questions are important for Bank, SSC, SEBI, NABARD, RBI, LIC, and other state exams. You can attempt these questions and boost your preparation for your examination.

In the banking exams, Data Interpretation questions are asked in the Prelims as well as the Mains exam. There are 3-4 Data Interpretation sets in the mains exam (Bank). If you want to score well in this section, you should practice as many Data Interpretation questions as possible. This quiz is also important for other banking exams such as SBI PO, IBPS PO, IBPS Clerk, SBI Clerk, IBPS RRB Officer, IBPS RRB Office Assistant, IBPS SO, SBI SO, and other competitive exams.

Tabular and Bar Graph Data Interpretation Quiz – 3

Directions (1-5): Answer the questions based on the information given below.

The bar graph below shows a survey of the students of 5 different colleges and their duration of sleeping time in a day.
Note 1: All the students of each college participated in the survey.
Note 2: Total number of students in any college = number of students who sleep (less than 6 hours + 6-8 hours + 8-10 hours + more than 10 hours) in a day.
1. Find the ratio of the total number of students in Hindu College to the total number of students in Ramjas College.
C. 157:143
D. 141:127
E. None of these

2. If the ratio of the total number of girls sleeping more than 10 hours in all the colleges together to the total number of boys sleeping more than 10 hours in all the colleges together is 192:149, then find the total number of girls from all the colleges together whose sleeping duration is more than 10 hours a day.
C. 794
D. 742
E. None of these

3. The total number of students in Shri Ram College is what percent of the total number of students in Hansraj College?
C. 82.68%
D. 87.36%
E. None of these

4. The total number of students from all the colleges together who sleep for 8-10 hours daily is what percent of the total number of students from all the colleges together?
C. 36.8%
D. 48.7%
E. None of these

5. Find the difference between the total number of students from all the colleges together who sleep for less than 6 hours and the total number of students from all the colleges together who sleep for 6-8 hours daily.
C. 2719
D. 2671
E. None of these

Directions (6-10): Answer the questions based on the information given below.

The table below shows the sales (in Rs. crores) of different companies; all the companies started in the year 2010.

│ Company │ Sales in 2010 │ Sales in (2010+2011) │ Sales in (2010+2011+2012) │ Sales in (2010+2011+2012+2013) │
│ A       │ 96            │ 170                  │ 228                       │ 330                            │
│ B       │ 78            │ 142                  │ 194                       │ 268                            │
│ C       │ 62            │ 134                  │ 202                       │ 294                            │
│ D       │ 76            │ 150                  │ 218                       │ 290                            │
│ E       │ 112           │ 188                  │ 260                       │ 344                            │

6. The total sales of company A and company E together in 2011 is what percent of the total sales of all the companies together in 2011?
C. 32.33%
D. 41.67%
E. None of these

7. If the sales of companies A, B and C in 2014 increase by 5%, 25% and 20% respectively over their sales in the previous year, find the total sales of companies A, B and C together in 2014.
A. Rs. 298.4 crores
B. Rs. 321 crores
C. Rs. 310 crores
D. Rs. 294 crores
E. None of these

8. Find the ratio of the total sales of all the companies together in 2011 to the total sales of all the companies together in 2012.
C. 11:8
D. 64:57
E. None of these

9. Find the difference between the total sales of all the companies together in 2010 and the total sales of all the companies together in 2013.
A. Rs. 13 crores
B. Rs. 21 crores
C. Rs. 7 crores
D. Rs. 18 crores
E. None of these

10. The total sales of companies A and D together over the given period is what percent of the total sales of all the companies together over the given period?
C. 48.2%
D. 45.7%
E. None of these
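Because the table gives cumulative sales, the per-year figures (and hence the answers) follow by differencing the columns. A quick sketch in Python working questions 6, 7 and 9 as examples:

```python
# Cumulative sales (Rs. crores) from the table:
# [2010, 2010+11, 2010+11+12, 2010+11+12+13]
cumulative = {
    "A": [96, 170, 228, 330],
    "B": [78, 142, 194, 268],
    "C": [62, 134, 202, 294],
    "D": [76, 150, 218, 290],
    "E": [112, 188, 260, 344],
}

# Per-year sales are the first differences of the cumulative totals.
yearly = {co: [v[0]] + [v[i] - v[i - 1] for i in range(1, 4)]
          for co, v in cumulative.items()}

# Q6: sales of A and E in 2011 as a percent of all companies in 2011
total_2011 = sum(v[1] for v in yearly.values())
q6 = 100 * (yearly["A"][1] + yearly["E"][1]) / total_2011

# Q7: A, B, C in 2014, up 5%, 25%, 20% on their 2013 sales
q7 = yearly["A"][3] * 1.05 + yearly["B"][3] * 1.25 + yearly["C"][3] * 1.20

# Q9: total 2010 sales vs total 2013 sales
q9 = abs(sum(v[0] for v in yearly.values()) - sum(v[3] for v in yearly.values()))

print(round(q6, 2), round(q7, 2), q9)  # 41.67 310.0 0
```

So Q6 is option D (41.67%), Q7 is option C (Rs. 310 crores), and Q9's difference is 0, which is not among the listed figures, i.e. option E.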
{"url":"https://letsstudytogether.co/data-interpretation-quiz-3/","timestamp":"2024-11-04T12:34:17Z","content_type":"text/html","content_length":"250123","record_id":"<urn:uuid:86f5d52d-7639-4a29-94d1-c215cb8354ec>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00254.warc.gz"}
Teaching Math with Picture Books

In the middle of doing some research, I found an article by Marilyn Burns about math and reading. I kept digging and found this list of great picture books for teaching math. Wow!

Math Picture Books

You might have noticed I've started discussing picture books here on MathFour.com. With this list by Marilyn Burns, I'm looking forward to doing a lot more. Here's the list of math in picture books that I have discussed so far:

Do you have any favorite math picture books? Share the titles in the comments. And don't forget to share this list on twitter!

This post may contain affiliate links. When you use them, you support us so we can continue to provide free content!

8 Responses to Teaching Math with Picture Books

1. Speaking of Marilyn Burns, we've been reading The Greedy Triangle. A book that's not math, but still has math in it and is fun is "Press Here" by Hervé Tullet. My son loved it and after a few readings made his own "book pages" with sticker dots that had some simple instructions I had to follow.
□ Putting those on my list now, Yelena! I have told myself that I can't buy new books until I publish info about the ones I have. Guess I'll be getting on my publishing stick! 🙂 Thanks for stopping by!
2. Zero the Hero has been a favorite with all 3 of my under-6 children
□ Thanks for the info, Carmen! Is it this one? http://www.drjean.org/html/monthly_act/act_2006/01_Jan/pg03d.html
3. No, it's actually this one: http://us.macmillan.com/zerothehero/JoanHolub
□ Very cool, Carmen. Thanks so much for sharing it!
4. I love 1,2,3 By The Sea. A great counting book. You can also get it on the iPad –
□ Thanks for the information, Janice. I'm going to have to con you out of a copy of this!
{"url":"http://mathfour.com/picture-books/teaching-math-with-picture-books","timestamp":"2024-11-08T02:20:47Z","content_type":"text/html","content_length":"49498","record_id":"<urn:uuid:884b36c7-5dac-4950-b7bf-9e8d498350af>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00174.warc.gz"}
Ask Hackaday: Is There A Legit Use For Operator Precedence?

Computing is really all about order. If you can take data, apply an operation to it, and get the same result every single time, then you have a stable and reliable computing system. So it makes total sense that there is Operator Precedence. This is also called Order of Operations, and it dictates which computations will be performed first and which will be performed last. To get the same results every time, you must perform addition, multiplication, power functions, bitwise math, and all other calculations in a codified order. The question I've had on my mind lately is, does this matter to us, or just to the compiler?

Which Would You Do?

As I was banging out some microcontroller code last weekend, I started looking at the number of parentheses I was using. See, I like total control over what this computer etched into a shard of glass is doing. So I don't depend on precedence, but this made me wonder if I'm doing it wrong. So I asked on Twitter which of the following lines of code people would use:

a |= 1 << 1 + c;
a |= 1 << (1 + c);

It is not surprising that everyone grabbed a torch and pitchfork in support of choosing parens at every opportunity. The consensus was that the next person reading the code will have a much easier time and will understand your intent. And that next person is more often than not you — how embarrassing if you can't work out your own intent. Do yourself a favor and use parentheses!

He's Unhappy with Precedence, and He Wrote the Language!

In C it's easy to understand how this is all built into the syntax. The equals sign assigns a value to a variable, so equals needs to have really low precedence, otherwise that assignment would happen before the operations are performed. And function calls must happen before any other operations so there is actually data available to operate upon. It works.
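As it happens, the two lines above compute the same value: in C, `+` binds tighter than `<<`. Python ranks those two operators the same way, so the grouping is easy to demonstrate (a sketch with made-up values for `a` and `c`):

```python
c = 2
a = 0
a |= 1 << 1 + c  # '+' binds tighter than '<<', so this is 1 << (1 + c)

assert a == (1 << (1 + c)) == 8   # the grouping the author intended
assert ((1 << 1) + c) == 4        # the grouping a worried reader might fear
```

The parenthesized form is still the one to write; the point is only that the compiler was never confused, just the readers.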
I rely on precedence in these two cases and don't (necessarily) place everything to the right of an equals sign in parentheses. But not every operator is this easy to rely upon. The funny thing is that these rules didn't spring into existence at the start of computer languages, but were developed alongside them. The footnotes of Wikipedia's order of operations article yield an interesting tidbit from Dennis Ritchie, creator of the C language, in his book The Development of the C Language:

Today, it seems that it would have been preferable to move the relative precedences of & and ==, and thereby simplify a common C idiom: to test a masked value against another value, one must write if ( (a & mask) == b ) ... where the inner parentheses are required but easily forgotten.

In addition to a good chuckle, this article also taught me a new term: infelicity.

Is There a Legit Use for Operator Precedence?

So the big question remains. Why do we teach operator precedence in computer science if popular opinion is almost universally against relying upon it? Is there more value than just base knowledge? The most legitimate use for trusting the compiler to follow the same invisible rules you have in your head is the International Obfuscated C Code Contest, which is currently open for entries. That contest's goal is to produce the hardest-to-read code and "To show the importance of programming style, in an ironic way." But I wonder if there are other interesting uses, like writing polyglot code or compiler-specific code. Let us know in the comments below.

111 thoughts on "Ask Hackaday: Is There A Legit Use For Operator Precedence?"

1. Language grammars have to define some kind of precedence. It also makes sense to explicitly specify that precedence in documentation, so that other compilers of the same language will generate the same output. A student on a CS track (as opposed to a software engineering track) should certainly know how to define a grammar and build a compiler.
Those of us from more of a software engineering background at least need to know about it, even if we haven't memorized all the precedence tables for our chosen language. If nothing else, we'll need to debug code that wasn't as explicit in its use of parens. Learning about precedence is here to stay, unless an RPN-based language takes over the world. I would personally like to see that, but I'm weird that way.
1. Couldn't you mutate Lisp to and from an RPN-based format? Or would that require all Lisp functions to have a fixed number of arguments? I am not a Lisp fan, but I believe its operators are prefix, like '+ a b' rather than infix, but prefix to postfix (aka RPN) should be feasible…
1. Sure. Lisp basically forces the issue by having parens everywhere. Or as I once heard it said, Lisp is like writing raw abstract syntax trees.
1. There's the old joke that LISP stands for Lots of Irritating Stupid Parentheses.
2. Language grammars that have infix operators… Some calculators use only postfix operators and don't need parentheses because of that.
1. So yes, RPN. Or something Lispy (which is just parens for everything).
1. And Forth.
2. Just to be clear here, Lisp doesn't use RPN; its notation is more like Polish Notation than Reverse Polish Notation.
3. Decades ago I wrote a compiler for the Amiga. It had no precedence, even in assignment. A + B -> C * 8 -> D, for example. I found it very easy to write for, but it required a little more thought to write the statements. PS. On the Amiga there was a character "->", so I was not using compound symbols.
4. The old 1980's symbolic math package MuMath, which was just a self-hosted layer called MuSimp on top of the underlying MuLisp, actually exposed and let you set the precedence for all of its operators. Each operator had a "property list" number for Left Binding Power ("LBP") and Right Binding Power ("RBP"), as well as special INFIX and PREFIX settings. Higher numbers bound first.
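Configurable binding powers like MuMath's LBP/RBP are essentially what a Pratt (top-down operator precedence) parser uses. A toy sketch, assuming a made-up binding-power table rather than MuMath's actual syntax, shows how changing the powers changes the result:

```python
import re

def evaluate(expr, lbp):
    """Evaluate expr using the binding powers in lbp (higher binds tighter)."""
    tokens = re.findall(r"\d+|[+\-*/()]", expr)
    pos = 0
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def advance():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def parse(min_bp):
        tok = advance()
        if tok == "(":
            left = parse(0)
            advance()  # consume the ')'
        else:
            left = int(tok)
        # Keep consuming operators that bind tighter than the current context.
        while peek() in ops and lbp[peek()] > min_bp:
            op = advance()
            left = ops[op](left, parse(lbp[op]))  # left-associative
        return left

    return parse(0)

usual = {"+": 1, "-": 1, "*": 2, "/": 2}    # multiplication binds first
flipped = {"+": 2, "-": 2, "*": 1, "/": 1}  # addition binds first

print(evaluate("2+3*5", usual), evaluate("2+3*5", flipped))  # 17 25
```

The compiler is equally happy either way; only the table decides which expression "2+3*5" denotes.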
You could examine the MuMath source, change them at will, or create new operators. (In practice, the common operators + - / * were already assigned, and changing them would turn the current system into goulash, so you would have to write an entire extra language layer on top of the current one. Not much room in 64K CP/M.) See sections 13-25 and 13-29 of the following:
5. Why do they "have" to? If no precedence were defined within the language, then an error would be thrown wherever you implied one without parentheses.
1. Agreed, and I'm curious as to why no language that I know of works like that. No time to pursue it right now, but since even primitive interpreters and compilers supported precedence, I wonder if requiring parens would actually make the grammar more complex.
1. Check out Smalltalk: only left-associative, no precedence.
1. I meant a language that had no precedence and forced you to use parens to specify the order of operations, rather than an implied left-to-right order. I know of Smalltalk (also MUMPS and APL, which others have mentioned).
C-family: a * x * x + b * x + c // multiplication has precedence over addition
Smalltalk: a * x * x + (b * x) + c "needs parens to make the multiplication happen first"
In the hypothetical language: ((a * (x * x)) + (b * x)) + c
As [Lee] suggested, not having enough parens would cause an error.
2. Interestingly, you didn't ask about a |= (1 << (1 + c)); Assignment operators are operators, too. Everyone just knows they have low precedence.
3. Precedence vs brackets. Fast to program vs fast to debug.
1. For any non-trivial program: speed to program == speed to debug
1. No, you are confusing "fast to type into a keyboard" with "speed to program".
Einstein said make it as simple as possible, but no simpler. It is getting into "too simple" when you cut out the brackets and parentheses to sacrifice elementary readability for typing speed (not even coding speed). This is the coding equivalent of being nastily passive-aggressive towards future debuggers. Just because you typed it quickly into the computer doesn't mean the computer is programmed, and it certainly doesn't mean the code is easily debuggable. If I sprinkle catnip onto my keyboard, then point a laser pointer at it, my cats will be the world's fastest typists, but I wouldn't want to debug their work.
1. I think you misunderstood Shannon's point. Shannon is saying that debugging IS part of programming, and that making debugging easier (by including explicit grouping) actually reduces the total time, even if it takes you a few extra milliseconds to type the grouping symbols. That's the reason for his 'non-trivial' caveat. Leaving out grouping symbols only saves total time in very trivial cases. In other words, he's saying the same thing you did.
1. I think you are right. My apologies, Shannon. I read your post wrong.
2. That's the point. Code relying on precedence does not scale. It does not scale across the number of lines, the number of programmers, the number of years to support, or the number of languages good code is ported to. All it provides is faster code-line entry.
3. Most of the time spent in writing a program is spent in reading reference documents and staring at the screen thinking about why it doesn't work – not actually writing code.
4. @frezik is correct about the necessity of precedence. There must be rules. That being said, after programming C for some 25 years, I put EVERYTHING in parentheses. Operator precedence may be unambiguous, but programmers' understanding of it is not. It's necessary to remember that you're not just communicating with a compiler, but also with human programmers (including your future self).
1.
There must be rules, yes. It isn't necessary that the rules take the form of operator precedence. As TZ pointed out, APL evaluates everything from right to left. (That has implications, such as 2 + 3 * 5 != 3 * 5 + 2.) Another possibility is that grouping is mandatory, and any line of code with more than one operator which doesn't include grouping is a syntax error. But I fully agree with you that just because precedence rules exist doesn't mean it's a good idea to depend on them rather than on explicit grouping.
1. Well, but most people also learn basic arithmetic in school, and languages that don't respect those rules are a royal pain in the butt to use, unless they make the evaluation order explicit by design (e.g. Lisp). The operator precedence rules in most programming languages are derived from there and are not an arbitrary choice of the language designer. The compiler doesn't really care which way the evaluation order goes (as long as it is defined unambiguously), but the human behind the keyboard certainly does. So I would caution against getting too creative with this, because fundamentally any computer code is mathematics, and people work with certain assumptions about how things work there (e.g. that multiplication goes before addition). Anything deviating too far from the commonly accepted rules only adds extra mental effort for the programmer. And re explicit grouping: yes and no. As long as only standard operators (i.e. no bitwise ops) are used, there is no reason to really do that. It only clutters the code with visual noise that makes the expression that much harder to understand. With bitwise operators, comparisons and the like, it depends: usually it is better to parenthesize to make the intentions explicit (or, even better, don't write long complex expressions but split them up).
1. Exactly.
In doing math without a computer, one must know the order of operations and how parentheses are used to make exceptions to it, for things like forcing some additions to be done before multiplication. They're the rules everyone is taught, and many learn, early in school. Creating a new way with a computer programming language just because you can is only going to make the language harder to learn, and will confuse anyone attempting to read the code if they're not familiar with the language's different ways. Good old BASIC is an ideal tool for teaching the concepts of algebra because it shows there are real and practical uses for algebra – and actively shows how the math works.
2. David Riesz: Well said! Parentheses are themselves a form of documentation.
3. Occam and Smalltalk (and many more) have no operator precedence rules, so it obviously isn't necessary.
5. APL is a programming language that doesn't have operator precedence, simply because that language has so many operators that no one would be able to remember their precedence. Evaluation is always right to left.
6. I think it goes back to mathematical notation. y = m * x + b. A*x*x + b*x + c. So if you see 2 + 3 * 5, it will mean do the multiply first. TI had algebraic calculators with precedence and parentheses; HP used RPN. You can do everything with RPN – and there's the Forth language: 5 3 * 2 +. Since mathematical notation has the idea of precedence, it naturally carried over into FORTRAN and C.
1. That's what I came to comment. Order of precedence precedes computers. Every polynomial equation implicitly sets the multiplications before the additions/subtractions. So obviously it has meaning to people. What made things difficult was when the notation of mathematics extended beyond the expression of combinations of numbers to the combination of numerical expressions with logical ones. I think C got this wrong.
I should be able to say "if a < b and b < c" without parentheses, with the "and" being done last, but that's not how C sees things. I can't count the number of times I've gotten burned by this. Hm. I just said "precedence precedes" and "C sees" in the same paragraph.
1. It is true that algebraic notation forced precedence rules (after some long debate). The same rules have much more recently been adopted in some places when teaching arithmetic. I was wondering about order precedence versus operator precedence, and whether either one is sufficient. If you include algebraic expressions, then you need to address cases where A*B does not equal B*A, both mathematically and numerically, as in the finite precision of computer calculations. On the other hand, this topic has been flogged to death by compiler-writing teams for half a century.
1. It has definitely been flogged to death. Still, I'll give it one more lash: when I'm writing an "if" clause, it always bothers me that I have to write it as "if (((a > 0) && (b > a)) || (a < -1))" instead of "if ((a > 0 && b > a) || a < -1)". This is because I believe the numeric operators should ALWAYS have precedence over the logical ones. Why? Because you never ever need to do "a && b" when a and b are numerical. You CAN, but there's never a good reason to. The only things that should have a lower precedence than logical operations are parentheses and assignment. Please continue the flagellation if you have a good contradictory example.
2. I'm not sure why this wouldn't be the case, since the relational operators <, >, <=, >= have higher precedence than &&. So, the && will be done last in C, providing that the first test is true. C doesn't process further than it needs to to get the result, so if you assume that both sides of the && will be evaluated, that will cause problems. So, "if (a < b && b < c)" would be evaluated the same as "if ((a < b) && (b < c))". If a < b is false, then b < c will never be evaluated.
2.
Usually when I see precedence understanding issues, it's because of a ternary jammed in somewhere.
var result = 3 + x == 0 ? 1 : 2 * 8;
Which is it:
A) 3 + ((x == 0) ? 1 : 2) * 8
B) (((3 + x) == 0) ? 1 : 2) * 8
C) 3 + ((x == 0) ? 1 : (2 * 8))
D) ((3 + x) == 0) ? 1 : (2 * 8)
IMO, always include parentheses.
1. I just never use the ternary operator. The code is far clearer with if..then, and runs just as fast, because they both optimize to the same code.
7. It starts with conventional math, though. Should a = b + c*d evaluate as a = (b + c)*d? I think most people would say no. So now you have operator precedence…
1. Most people would say 'No' because they learned the accepted order in math class. There's nothing sacred about that order. If you (and everyone else) had been taught from childhood that addition/subtraction had precedence over multiplication/division, math would still work just fine, and you'd find our current system to be counterintuitive and puzzling.
1. Just had this discussion today. Once you know algebraic notation, it seems natural and simple. But most people forget that they spent between six and eight years learning it, prior to actually learning algebra, which is another couple of years before getting to calculus, statistics, and other `higher' maths. Hindsight being 20-20, there are other notations that are more consistent, and just as easy to use; the notation we use grew to be what it is because it is useful, eliminating the need for lots of extra parens and conscious analysis of expression structure, despite the inconsistencies (left vs right associativity, multiple forms for some operations, etc). Compare ((a*b-c*d)^(1/2))/((a*c+d*d)^(1/2)) to \frac{\sqrt{ab-cd}}{\sqrt{ac+bd}}. Which is easier to grok, knowing the system? There are many other equivalent systems, some of which are more consistent and easier to learn, but the one we have works quite well.
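For the record, C's conditional operator binds looser than +, == and *, so the ternary quiz a few comments up comes out as option D. Python's conditional expression groups the same way, which makes this easy to check (a sketch, with the C `var result` line rewritten as a Python function):

```python
def result(x):
    # Same grouping as C's  3 + x == 0 ? 1 : 2 * 8,
    # i.e. option D:  ((3 + x) == 0) ? 1 : (2 * 8)
    return 1 if 3 + x == 0 else 2 * 8

assert result(-3) == 1    # (3 + -3) == 0 is true  -> 1
assert result(0) == 16    # (3 + 0) == 0 is false -> 2 * 8

# Option A's grouping would give a different value at x == 0:
assert 3 + (1 if 0 == 0 else 2) * 8 == 11
```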
The question becomes: should the languages we use for programming reflect, at least to a reasonable approximation, our prior structures?
1. "The question becomes..." The answer is simply yes; you gave a precise example in your first paragraph. Imagine if calculus and statistics followed completely different notations: it would slow down the learning process and create problems for people trying to remember which notation to use for which specialty. The thing about the system that is currently taught is that it is the system that is easiest to convey over time to developing minds. The vast majority of those other systems are actually based on the original system and solve problems that our adult minds can understand and comprehend, but if you were to teach those to developing minds you would most likely lose a lot of the kids along the way. Also, your example equation structures don't work for your example, as most children are taught the square-root symbol before they are taught that the square root is a fractional exponent. They are also taught that if the variables are declared then you do not need the multiplication symbols, but those variables should be explicitly declared; as in your second formula, AB could be one variable instead of A*B. Finally, and I know I am being pedantic, but those equations are definitely not the same: in the first one the denominator has the term a*c+d*d under the square root sign, and in the second it is ac+bd.
2. If you are young enough. In school I never had arithmetic order "precedence". In fact I did not know it was a thing until I started teaching math and physics at 50. I was surprised that conventional computer language ideas have percolated back down to grade school and changed the way arithmetic is taught. Also textbook-dependent in the 20th century.
1.
"In school I never had arithmetic order "precedence"." Are you sure about that? What did ax² + bx + c = 0 mean to you, then? For several centuries, this has meant ((a * x²) + (b * x) + c) = 0, very distinctly different from ((((a * x)²) + b) * x) + c = 0. You just learned that so long ago, you don't remember learning it. I'm not saying you're old, I'm just saying it becomes reflexive after a while. I learned it a number of decades ago, long enough that exposure to computer programming wasn't a pervasive thing, but I do remember learning it.
1. Your example is algebraic. I never had it in arithmetic.
2. Okay, well, it never comes up until you're writing equations, which are introduced in algebra, so yeah, it just doesn't come up in basic arithmetic. How could it be? Each operation is presented as a separate problem. Did you manage to get all the way through high school without taking algebra?
3. A-Aron, how many red tiles are there on the floor? Let's see, 3 + 7 + 2 x 9 + 7 x 3 + 12 + 1
4. Okay, Comedicles, that's a good example. So if you learned how to solve that, someone must have defined the order of precedence for you, even though of course they didn't use that term.
3. Multiplication IS addition. Division IS subtraction.
1. Really? I thought that subtraction was addition, and division was multiplication. I certainly don't see how e*pi can be viewed as addition.
2. Only if you first saw logarithms.
8. Because you need to be able to read code that someone else wrote who didn't use parentheses.
1. A proper IDE would allow you to highlight the confusing expression and click "parenthesize". The IDE would then insert parentheses as needed to show the order of evaluation.
9.
I trust a safe language should never do anything you haven’t specifically written, including strong typing and no hidden type casts. Programming speed is _IMHO_ far less important than reliability and maintainability. 10. Shame on you!!! Your poor algebra teacher is surely turning over in their grave (if they are deceased) <– condition is precedent upon action!!! LOL Precedent is natural to humans, so: you crawl before you walk etc. So if you want to change the natural order of operations you must have a way of indicating your new order of operations. Using parentheses to denote your new order of operations is extremely helpful during code debugging, since it is easier to work from the innermost operations defined by parentheses to the outermost result. just my $0.02 11. Remember that C was developed when screens only had a precious number of characters in width: 80 x 25. Every character counted. The norm was to eliminate all superfluous characters. So it was normal to use precedence instead of overusing parens. We have the luxury of multiple monitors and small fonts today, so too we have the luxury to be more verbose with our code. 1. Your timeline about the development of C is off a bit. C was developed on systems which used 45 baud teletypes, not screens. Elimination of superfluous characters was an even bigger deal 1. Probably 110 baud, but yeah, extra characters slowed things down. 2. C was developed on teletypes (printers with a keyboard). CRTs were way too expensive. 1. Finally, an explanation for why no one had televisions! 1. CRTs existed long before TVs 3. 64×16. 1K of RAM. 1. 72 x 1440 (assuming a 200′ roll of paper). 12. In MUMPS (https://en.wikipedia.org/wiki/MUMPS), there is no operator precedence, except for parentheses. Everything else is evaluated strictly from left to right. It takes a few days to wrap your head around that, or five minutes if you are used to using parentheses despite operator precedence. 1. There isn’t any in Forth, being RPN.
Or one can say the order of the operators produces the order of the operations. 1. Comedicles: Or, one can say that Forth’s developers decided that people ought to be able to think like stack computers. 1. Yes, which they unconsciously do anyway. You have to mentally stack operators in infix. If I say “please calculate 27 *” and stop to yawn, what are you going to do? Just like infix calculators. They do nothing when you enter the operator and wait till you hit “ENTER”, a key you don’t need with RPN. 1. “hit “ENTER”, a key you don’t need with RPN”. So, what key do you hit after the first number? RPN uses “enter”, infix uses “=” at the end. Both serve the same function. 2. Dang. I meant =. [4 enter 3 times] versus [4 x 3 =]. In the Forth case ‘enter’ is an action. It pushes 4 to TOS. [4 x (3 + 2) =] can be done a couple of ways in Forth, like [4 enter 3 enter 2 + x]. Or [3 enter 2 + 4 x], which is only 6 keys versus 8 for infix (and the way a good compiler does it). And the “advantages” grow with complexity, but it takes a lot of practice. There is a remarkable book from the peak times for HP calculators, and just before personal computers, by John A. Ball called “Algorithms for RPN Calculators” that has everything in it, with analysis to minimize keystrokes and even a section on the ideal calculator. I wish there had been room for derivation and description of the algorithms mathematically. I don’t think a lot of people bought it because of the timing. It is a heck of a resource https://www.amazon.com/Algorithms-RPN-calculators-John-Ball/dp/0471030708 13. Please add GPG for non-login commenting on the web: simply sign the text with GPG and put it in the comment. 14. Personally, I use parentheses where the order is not immediately clear, e.g. with pretty much all bitwise or bit-shift operations that are more complex than two variables / constants, because I don’t expect that one has a higher precedence than another; or when combining boolean And and Or conditions (e.g.
if ((a && b) || c) { doSomething(); }). For basic arithmetic they’re not needed, and nor with comparisons, even when there are multiple comparisons and/or multiple operations on one or both sides of the comparison. When things get long I’ll start adding them, or even better, adding line breaks or temporary variables. 15. It doesn’t matter where or if you use parentheses – the goal is READABILITY (well, in most cases!) 16. C was developed at Bell Labs, who could afford whatever equipment they wanted. In this case, it was on VT-100 CRTs which displayed more than 80 columns. As for precedence, I hope that all of you live long enough to use life support equipment where the programmer disdained precedence and made a mistake! 17. The confusion in coding comes from its single-line use. Maths has subscript and superscript. Your example, 1 << ( 1 + c ), is actually 1 subscript [base] * base superscript [1 + c], where base is 2. Obviously 1 + c has precedence, as it’s all superscript. 1. Yes yes yes. I do wish the languages had super and sub and Greek symbols. Even just arrays with subscripts would be great. 18. “Parenthesize Defensively”. Everybody who implements a language has their own One True Order of Operations, and it’s been that way since the beginning of computer languages, and probably all the way back to how you read the paintings on the wall. 1. “Parenthesize defensively” If only that were always possible. The other week I was trying to do something ever-so-slightly complicated in OpenSCAD, and discovered that it doesn’t DO parentheses in expressions! I had to use intermediate variables for things that really shouldn’t need them. 1. Really? I just tried and ((1+2)*3+1)*2; result 20 (1+2)*3+1*2; result 11; (1+2*3+1)*2; result 16 1+2*3+1*2; result 9 in assigning a variable, and as a dimension of a cube. with version 2015.03-3 1. I don’t want to try to duplicate the problem right now, but I think it was when I attempted to do arithmetic within a parameter list. 19.
I won’t use them for arithmetic like 2*i + j. But when logic operators get involved, always add parens. 20. I’m surprised nobody’s talking about Group Theory, magma and monoid. They are the fundamental structures used to define how algebraic operations should behave, and in this case: https:// My guess is that precedence is based on distributivity. Anyway, Group Theory is an interesting topic! 21. Doesn’t matter; what an operation does needs to be apparent to a programmer when they are reading the code. 22. Regarding ()’s and readability, is there a way (say in Linux World and unicode) to use a dot for multiplication instead of a frickin asterisk? Maybe some auto-substitution? Or a setting in an IDE? Think of the children, and the generations to come. 1. The dot represents AND in logic expressions. 1. And that is just wrong! Anyway, my KB doesn’t have a dot. 1. You could either still type ‘*’ and have the IDE display it as the proper character (U+22C5 DOT OPERATOR), or maybe type a combo like Alt-. Code stored in text files is probably not going away any time soon, but it doesn’t need to be presented to humans that way. That is also my answer to the great tabs versus spaces argument: the IDE should be able to show the user the code in their preferred manner, including things like showing ‘*’ as a dot when it represents the multiplication operator, while saving it in the manner understood by the compiler and other tools. If the user wants the language’s exponentiation operator hidden and the right-hand side displayed in superscript, why not? If someone is unhappy with Python’s lack of braces, the IDE could display braces where they would be in a language that had them. 23. Operator precedence is only half the problem. The other half is operator associativity. 1+2+3+4 is associative: it evaluates to 10 no matter which ‘+’ operation you evaluate first. 1-2-3-4 is not.
It can evaluate to -8 (((1-2)-3)-4) (left-associative), or -2 (1-(2-(3-4))) (right-associative). It can also equal 0 (1-2)-(3-4). A compiler or interpreter has to choose an order of operations when it evaluates expressions, and having 1-2-3-4 return different values in different contexts is embarrassing. Operator associativity and precedence are side effects of the effort to make the same sequence of characters evaluate the same way every time. In practical terms, the whole subject can be reduced to saying, “let’s spend decades evolving software that’s so hard to design, understand, modify, or debug that we have to use other software tools just to write the code. Then let’s elaborate that notation so much that the spec to generate a compiler becomes an impenetrable mudball of its own. As payoff for all that effort, programmers will be able to use a form of notation they’ll screw up on a regular basis and waste countless hours trying to fix. That’s *much* easier than getting programmers to use RPN — whose interpreter can be assigned as homework for first year CompSci students — because they consider that ‘unfamiliar and hard’.” 1. Associativity only comes up when your compiler is trying to optimize what you told it to do. There are plenty of cases where compilers optimize things based on the assumption that mathematical operations return exact results, which of course they can’t, as soon as you start using floating point arithmetic. 2. Every computer language springs to life when somebody decides that existing languages don’t really express his thought process, or what he imagines that process ought to be. There are two major approaches to translating human thought into computer procedures: 1) trying to force computers to work more like humans think, and 2) trying to force humans to think more like computers work. Forth is what you get with the latter, BASIC with the former, and C when the developer just can’t make up their mind. Does this cause holy wars? 
Yes, mike stone, yes it does. 1. The truth has never been so hilarious… and so painfully clear. 24. Smalltalk prioritizes unary message patterns over binary patterns, and keyword messages last. Parens are used to specify order of ops otherwise. There are no “operators” as such in Smalltalk; any method will do. Instead of asking whether or not we “need” precedence (which is dependent on grammar, and we do need grammar) you might ask: what is an operator, and do we need a special noun to describe something that’s just a function/method? In Smalltalk, the following evaluates to -3: 1 - 2 * 3. As long as we know that what we really want to write here is “1 - (2 * 3)” we’re fine. This will trip up newcomers, but ultimately that’s a matter of learning the language. My understanding is that Smalltalk’s designers were influenced by APL’s lack of complex precedence rules, and thought that it was better for a programmer to explicitly state intent than to rely on clever grammar rules to do the thinking for them. My own two cents is that I’d prefer to be w/o complex precedence rules, because it reduces the complexity of the grammar to be rid of them. Less complicated grammar means less cognitive load on the learner. The only challenge is ingrained assumptions about the ordering of terms in algebraic notation. 1. “you might ask: what is an operator,” In historical C, that’s simple: an operator is a primitive — it represents one or a few opcodes that do the corresponding operation. It’s completely different from a function call, which does just what it sounds like — pushes arguments on the stack and jumps to the function. In a systems language which is basically just a portable shorthand for assembler, that distinction is vital. There are places in systems programming where you don’t want to be making function calls at all, and places where you can, but need to be mindful of the performance hit.
(Of course, things like inline functions, whether user-specified or generated by compiler optimization, blurred this initially clear distinction.) “and do we need a special noun to describe something that’s just a function/method?” But if an operator really is just a function/method, that means you’re using a newer language where C’s distinction doesn’t make sense. It’s incredibly unfortunate that so many language designers unthinkingly copied C’s syntax without understanding why it was made that way. If you’re going to write a high-level language where everything is burdened with function calls or equivalent overhead, your syntax really should make sense with that, instead of keeping a now-arbitrary division of functions into “functions” with explicit argument lists and “operators” with implicit left- and right-hand arguments. 25. A pretty stupid and poorly thought out article. Yes we need this, and there are many ways to do it; it is all about design choices by whoever writes the language. A ripe field for argument and disagreement. I guess the idea of the article was to toss the ball in the air and watch the fun in the comments. 26. Watch out for which language, and if optimization is on or off. I had issues way back with a C compiler and an IEEE engineering package. Floating-point representation: A = B*C - B*C versus A = (B*C) - (B*C). One would expect the same result. The first case leaves one of the B*C results in the CPU and operates against the other that was thrown out into a register. If the CPU has extra precision that the register doesn’t, you may not get zero, but a tiny amount, but only for some values of B and C. In the second case, using the () can force each B*C result out into a register before performing the subtraction. Always a zero. Unless optimization is on; then it doesn’t force the last B*C result out into a register, and you may get a tiny result instead of zero. Isn’t variable behaviour fun!
Of course a truly smart compiler does A = 0.0 Several studies have shown that over the life of most code, significantly more will be spent maintaining it than on the original writing. So, $-wise, for total cost it’s worth the cost of some extra time for readability, both for original debugging and for the usually inevitable maintenance down the line. The problem is where to draw the line. 1. This isn’t a problem with languages or compilers; it’s a problem with numerical calculation in general. I use floating point arithmetic only when really necessary, and then only when I can guarantee that rounding errors aren’t going to cause trouble. Don’t ask me why. 1. Yes, it’s a general problem, but it’s a compiler problem too when the result can be different for code that looks like it should produce identical results. Might you have been checking for a zero instead of zero +/- a tolerance? 27. This makes me wonder what Microsoft’s programmers didn’t do for many years with Calc.exe, from its first appearance in Windows all the way through early versions of Windows 10. You could get the square root of an even number where the square root is also an even number, then poke + then = *and get an incorrect result* because while the *display* was correct the number calculated internally was slightly wrong. For example enter 4 then square root to get “2” then + then =. The result will NOT be 4 in a version of calc.exe with the bug. A version of Calc.exe with that bug fixed was included a while ago in some update to Windows 10. What was done to fix it? Are there any other long standing calc.exe bugs that have recently been fixed? Dunno if it was also updated in 8.x. Most likely has not been fixed for Windows 7 and I’m pretty certain they haven’t bothered to produce an updated version for Vista. Absolutely certain Microsoft will never be releasing a bugfixed calc for anything older – but it would be cool if MS would drop a “Bug Reduced Calc.exe Pack for Windows 1.0 through 7”. 1.
Your answer is in these two blog posts and the handful of things linked in them. TL;DR The calculator used to use IEEE floating point in it, but people complained so they rewrote the engine to do arbitrary precision arithmetic for basic operations. No one really noticed, and instead started complaining that it generated small errors in sqrt. The team finally extended the engine to handle perfect roots so that error is gone as of early 2018 1. Heck, if you ran it on a Pentium 75, the errors in the FPU would probably cancel out the errors in Microsoft’s calculator. 2. Maybe use Microsoft Mathematics™ instead. 28. The Forth fans have already spoken, but I’ll add in my heresy here: parentheses are a byproduct of the language / algebraic system used. They are not a necessary evil, they’re just evil. :) Here’s how it works in Forth. Get all the things you need to operate on together, then do the operation. And if you follow this rule, you will never need parentheses or operator precedence. Things simply go in the order they’re written in. 3 * (1 + 4) only needs parentheses b/c we’ve defined “*” as being a look-ahead operator: X * Y. Because of the look ahead, you need to make sure that all the “aheads” happen at the right times and you need to argue about whether the “+” happens before or after the “*”. Contrast with everything else you do in life. For instance cooking. Get your ingredients together, then cook. Or carpentry. Gather up your tools and raw materials, plan, and then start sawing. Or as I like to say: 1 4 + 3 * . If you want to add two numbers together, throw them both on the stack first, then add. Multiply? Get your multiplicands (?) together first, then multiply. This is also why LISP is parenthesis hell: everything is delayed in LISP. It’s like the mirror image of Forth in that respect. Final nail in the parenthesis / op-precedence coffin? The ALU/CPU doesn’t have any of that nonsense going on. 
Get your values set up in registers, and then call the operation. Forth and RPN just mimic this (logical? necessary?) temporal ordering, and remove a lot of hoop-jumping by doing so. Yes, I know you were taught math the counterintuitive way, where “+” could somehow look into the future. Sorry about that. It’s really pathological — there’s this system of writing down numerical operations, but it doesn’t go in the order (left-right) that you write things down. Instead, there’s this set of arbitrary conventions about the ordering, coupled with some extra symbols whose sole reason for existence is overriding that ordering when it’s not what you meant. Wake up, sheeple!! (I’m only half joking. It’s really worth the exercise to wrap your head seriously around a different worldview, if only just to recognize that the one you grew up with as “normal” is in fact truly bizarre.) 29. MS Excel has an interesting bug with precedence. Try =-2^2 and it will give you 4. It should give -4. I came across the bug when valuing options using the Black-Scholes-Merton formula. I can imagine that many students and bankers have lost marks and money due to this bug. 30. I usually just spam parentheses everywhere when I’m not sure what the precedence is. I know the order of the standard math operations, but get to bitwise logic and I don’t have a clue, so that part of the code is a parentheses salad. And different programming languages have different rules, so parentheses are good practice if you use a lot of languages. 31. As others have pointed out, operator precedence is language dependent. The examples given in the article, and in the pull-quote by Ritchie, are mainly about C-like languages (C, C++, Objective C, Java, C#, C--, etc), which inherited the operator structure of C. C has an excessive number of operators (48) and a large number of precedence levels (18). The levels of precedence are supposed to make it easier to write readable, compact code. And they do, to a degree.
But it’s hard to remember all those operators and their relative levels. Ritchie even admits they got some levels wrong (it is possible that there is not an ordering that makes sense all the time; the “natural” binds-tighter relationship could be non-transitive). As such, it’s sensible to defensively add parentheses just to be safe. Most of the other languages listed that people say don’t have these problems generally have far fewer operators and precedence levels. Without that complexity, the code doesn’t require defensive parenthesization to be readable. Order of operations is useful; operators are useful; but C goes overboard in quantity, causing problems. 32. I’m voting for using strict left-to-right precedence. That will mean, of course, that in a = b + c; the assignment will be done before the addition, and then you may get an unused code warning. How about random precedence? Every now and then you will get incomprehensible errors in calculations because you forgot to add one of the 20 sets of parentheses in a simple calculation. Believe me, operator precedence is a good thing. Part of the reason for the complex precedence rules in C is not for users, but for the compiler writers, to ensure code portability. 33. Regardless of all the heat this article brings in the comments, it mentioned one of the pages I think my browser has opened more than any other: the Wikipedia article about C and C++ operators. It shouldn’t be a gold article but a diamond or platinum one, and should be printed in every binder of every IT book. It will end all arguments in seconds! 34. I’m surprised nobody’s mentioned PostScript yet. It is a general-purpose, stack-oriented programming language using postfix notation. It doesn’t use parentheses. The order of operations will be obvious to those familiar with FORTH or RPN calculators. PostScript has some features for describing layout of text on a page, so it has become popular for this purpose.
It is rarely used outside of the context of typesetting these days, but it is capable of much more. If a language worth learning makes you think a bit differently after you’ve learned it, then I believe PostScript is worth learning. It avoids the parentheses issue. Parentheses and order of operations issues only happen with infix notation. 35. Interesting post and an interesting question posed. The only reason that I can think of to rely on operator precedence is to avoid the “tyranny of the parentheses”. We want concise code that is easy to read. That usually means that certain operations should be written as a single line of code: y = m*x + b; This is nice and easy to read, and looks almost exactly like the textbook equation. If we have no precedence beyond left to right evaluation, we would have to write it at least as: y = (m*x + b); But if there is no precedence at all, it needs to become: y = ((m*x) + b); Gets a little more opaque, but you get the idea. Those parentheses start to stack up, and unlike written math, most software languages have just one form of parentheses. You can’t use a mixture of round, square, and curly braces to improve readability! In a world of no precedence the only way to avoid parenthetical explosion in anything beyond a trivial operation is to do it in multiple lines: y = (m*x); y = (y + b); I don’t know about you, but that is a lot harder for me to read than the very first line. 1. It’s always possible to get away from the “tyranny of parentheses” by using intermediate variables, but then you have the tyranny of intermediate variables. Two things: 1) with strict left-to-right, there would never be any “how is the compiler going to do this” question, but it would be very easy to accidentally use the way we write equations by hand and get it wrong. Which I think is why the languages that define an order of precedence were designed that way.
2) most text editors these days have parentheses-matching highlighting of some sort, so even with dozens-deep sets of parentheses, you can verify you did what you intended. 1. Agreed on both points. Parentheses matching has saved my butt many a time, but even with it, sometimes it can be difficult to see at a glance what a statement actually does. 2. I think you are making things overly complicated. Adding redundant parentheses doesn’t change anything unless you have some unstated language definition in mind; redundant parentheses are simply stripped, so: y = (m*x + b) is equivalent to y = m*x + b y = ((m*x) + b) is equivalent to y = (m*x) + b y = (m*x); y = (y+b) is equivalent to y = m*x; y = y+b If parentheses are supported, operator precedence and evaluation order don’t matter for y = (m*x) + b, which will be treated as y’ = m*x; y = y’ + b internally. TL;DR y = (m*x) + b is enough. 1. In the post he specifically talks about assignment having the lowest precedence. If you don’t assume any operator precedence, you have to assume that you don’t know if assignment will trump some other operator. In that case, you would need parentheses to force the order of operations you intend. I realize that if you don’t assume that assignment is the lowest precedence, some statements start to have no effect, but I was trying to keep with the theme of the post. 36. Mathematically, order of precedence/operations is merely to allow the use of shorthand marks for a series of additions. Every operation breaks down to that. The use of heuristics helps to ensure that the results will be consistent and accurate regardless of who uses it. Applying this thinking to computer programming, all commands break down to a series of elementary instructions. Also, given that each computer can have a completely different instruction set, consistency and accuracy can suffer if a standard is not in use.
Seeing as how it’ll add too much already to the compiler/interpreter, parentheses are used to help reduce their size, since parentheses can easily be used to override precedence without taking up too much memory.
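Two recurring themes in the thread — the C-family precedence traps and the stack-based alternative — can be made concrete. A sketch in JavaScript (whose precedence for these operators matches C’s); the RPN evaluator is a toy illustrating the Forth/MUMPS/PostScript argument, not any particular language:

```javascript
// Trap from comments 9 and 31: `==` binds tighter than `&`, so
// `data & mask == 1` parses as `data & (mask == 1)`.
const data = 0b0110;
const mask = 0b0010;
const sloppy  = data & mask == 1;    // really data & (mask == 1): 6 & 0 => 0
const careful = (data & mask) !== 0; // the intended "is the mask bit set?" => true

// Trap from comment 17: `+` binds tighter than `<<`,
// so `1 << 1 + c` is `1 << (1 + c)`.
const c = 2;
const shifted = 1 << 1 + c;   // 1 << 3 => 8
const summed  = (1 << 1) + c; // 2 + 2  => 4

// The postfix alternative: strict left-to-right evaluation with a stack
// needs no precedence table and no parentheses at all.
function rpn(expr) {
  const stack = [];
  const ops = { "+": (a, b) => a + b, "-": (a, b) => a - b,
                "*": (a, b) => a * b, "/": (a, b) => a / b };
  for (const tok of expr.trim().split(/\s+/)) {
    if (tok in ops) {
      const b = stack.pop(), a = stack.pop(); // operands were gathered first
      stack.push(ops[tok](a, b));
    } else {
      stack.push(Number(tok));
    }
  }
  return stack.pop();
}

console.log(sloppy, careful, shifted, summed); // 0 true 8 4
console.log(rpn("1 4 + 3 *"));     // 15 -- comment 28's "3 * (1 + 4)"
console.log(rpn("1 2 - 3 - 4 -")); // -8 -- comment 23's left-associative reading
```

Note that the evaluator never consults a precedence table: the order in which operands and operators appear fully determines the order of evaluation, which is exactly the commenters’ point.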
In pursuit of a JavaScript OpenPGP Implementation Recently I’ve been studying the OpenPGP specification in hopes of developing a JavaScript implementation. I’m aware that someone has already done this, however I’m not a fan of using GPL-licensed code (I prefer more permissive MIT-style licensing). Furthermore, I have an academic curiosity towards learning and doing this myself, and thus I set out to learn all there is to know about PGP. Shortly into reading the spec, I realized that there is a lot of math and number theory involved in OpenPGP, and found that I’ve become rusty since dropping out of my math major (I left halfway through my undergrad cryptography course—d'oh!). Additionally, there is a lot of technical notation within the spec that is over and above anything I’ve encountered before. This isn’t the sort of stuff you can just search for on Google and quickly have the knowledge piped into your brain. There is much learning to be done, and not a lot of existing hand-holding online to help, and that’s why I’m writing this. Consider this an online version of my math notebook. If I can help you learn PGP in the process of helping myself, then great. Disclaimer: I am not an authority on the subject of cryptography. I’m doing my best, but the information contained herein may be inaccurate, incomplete, or simply wrong. I encourage you to fact-check and work through the examples on your own. Please let me know if you find any errors in my posts. Scalar and big-endian numbers I assume you’ve read up to section 3 in the OpenPGP specification. Despite it being an ugly .txt file, it’s a surprisingly good read. You have a basic conceptual understanding of what PGP encryption is and how it works. Great. Well, section 3.1 is where it really starts getting technical, so let’s start off with a quote: 3.1 Scalar Numbers Scalar numbers are unsigned and are always stored in big-endian format.
Using n[k] to refer to the kth octet being interpreted, the value of a two-octet scalar is ((n[0] << 8) + n[1]). The value of a four-octet scalar is ((n[0] << 24) + (n[1] << 16) + (n[2] << 8) + n[3]). Let’s pick this apart. An unsigned number means that it’s never negative, but what the hell is “big-endian format?” According to Wikipedia, a big-endian number has its most significant digits first. That’s easy—we use big-endian decimal numbers all the time in our daily lives. For example, if you find a sack containing $1,024 in a dark alley, you know that 1 represents the thousands digit, 0 represents the hundreds digit, 2 represents the tens, and 4 represents the 1’s. Simple enough. Now, we have to start thinking like a computer. An octet is a more specific way of saying a byte, or 8 binary bits. Be familiar with binary. It’s the most basic way a computer represents information and is a base-2 numeric notation (also in big-endian format) where each digit represents a power of two. For example, decimal 187 can be represented by the binary octet 10111011: summing the set bit values gives 128 + 32 + 16 + 8 + 2 + 1 = 187. Make sense? Getting back to the specification, we need to learn some notation. Specifically, how do we interpret: the value of a two-octet scalar is ((n[0] << 8) + n[1]) This is easiest to explain by way of example. Suppose you have a scalar number consisting of two octets 10111011 and 01000101 (69, lol). Respectively, these may be referred to by n[0] and n[1]. With n[0] << 8, we perform the operation of shifting n[0]’s binary representation over to the left by 8 binary places, thus multiplying n[0]’s value by 2^8. This can best be visualized by summing the shifted bit values of 10111011 << 8: 32768 + 8192 + 4096 + 2048 + 512 + 256 = 47872 (which is 187 * 2^8). Now by adding in our value for n[1], we get 47872 + 69 = 47941.
This can be represented in binary as (10111011 << 8) + 01000101 = 1011101101000101. In other words, we compute the value of a scalar number by taking the binary representations of each of its octets and smushing them together into one big long binary number. Whew. Multiprecision Integers 3.2. Multiprecision Integers Multiprecision integers (also called MPIs) are unsigned integers used to hold large integers such as the ones used in cryptographic calculations. An MPI consists of two pieces: a two-octet scalar that is the length of the MPI in bits followed by a string of octets that contain the actual integer. Section 3.2 gives a couple of examples, which I will explain, but first you’ll need a basic understanding of hexadecimal numbers. With binary, each digit (between 0 and 1) represents a power of two. With decimal, each digit (between 0 and 9) represents a power of 10. With hexadecimal, each digit (between 0 and …15?) represents a power of 16. Since we don’t have any numeric characters to represent numbers above 9, we start using letters of the alphabet! So we count 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F. lol Here’s why we care about hexadecimal: it easily converts to binary for computer applications, since 16 is a power of 2, and it gives us a much more efficient way for notating octets than writing out their full binary representation. For example, we can write 249 in binary as 11111001, but it's a hell of a lot more concise in hexadecimal: F9. Now, consider our first example using hexadecimal numbers in section 3.2: The string of octets [00 01 01] forms an MPI with the value 1. In this simple example, the first two octets 00 01 indicate the length of the multiprecision integer in bits, whereas the final 01 represents the actual integer. If we convert hexadecimal octets 00 01 01 to binary, we have 00000000 00000001 00000001.
So the first two octets tell us that the length of our MPI is 1 bit, and the final octet is the binary representation for the number 1 (which is indeed just 1 bit). The string [00 09 01 FF] forms an MPI with the value of 511. If we convert this to binary octets, we have 00000000 00001001 00000001 11111111. The first two octets, again, form a scalar number indicating the length of the MPI in bits. Converting 00000000 00001001 to decimal, we see that our MPI is 9 bits long. The final two octets 00000001 11111111 represent the integer itself. We can smush these octets together into one binary scalar number 0000000111111111, which has decimal representation 511. Notice that all significant digits occur in the first 9 bits (starting from the right), as indicated by our length scalar. With the notation for multiprecision integers, we can represent extremely large integers (up to 65,535 bits), far beyond the typical 32-bit limit that a computer imposes on a regular integer value.
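The worked examples above can be turned into a first bit of the JavaScript implementation this notebook is aiming at. A sketch (the function names and return shape are my own, not from the spec): a two-octet big-endian scalar reader, and an MPI reader built on it, following section 3.2. BigInt is used because real MPIs overflow JavaScript’s 32-bit bitwise operators.

```javascript
// Read a two-octet big-endian scalar: ((n[0] << 8) + n[1]).
function readScalar2(octets, offset = 0) {
  return (octets[offset] << 8) + octets[offset + 1];
}

// Read an MPI (spec section 3.2): a two-octet bit count,
// then ceil(bits / 8) octets of big-endian integer data.
function readMPI(octets, offset = 0) {
  const bits = readScalar2(octets, offset);
  const byteLen = Math.ceil(bits / 8);
  let value = 0n;
  for (let i = 0; i < byteLen; i++) {
    // "Smush" each octet onto the end of the running value.
    value = (value << 8n) | BigInt(octets[offset + 2 + i]);
  }
  return { bits, value, nextOffset: offset + 2 + byteLen };
}

console.log(readScalar2([0b10111011, 0b01000101]));   // 47941, as computed above
console.log(readMPI([0x00, 0x01, 0x01]).value);       // 1n
console.log(readMPI([0x00, 0x09, 0x01, 0xFF]).value); // 511n
```

A production reader would also validate that the declared bit length matches the position of the integer’s leading 1 bit; this sketch skips that check.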
1965 AHSME Problems/Problem 36 Given distinct straight lines $OA$ and $OB$. From a point in $OA$ a perpendicular is drawn to $OB$; from the foot of this perpendicular a line is drawn perpendicular to $OA$. From the foot of this second perpendicular a line is drawn perpendicular to $OB$; and so on indefinitely. The lengths of the first and second perpendiculars are $a$ and $b$, respectively. Then the sum of the lengths of the perpendiculars approaches a limit as the number of perpendiculars grows beyond all bounds. This limit is: $\textbf{(A)}\ \frac {b}{a - b} \qquad \textbf{(B) }\ \frac {a}{a - b} \qquad \textbf{(C) }\ \frac {ab}{a - b} \qquad \textbf{(D) }\ \frac{b^2}{a-b}\qquad \textbf{(E) }\ \frac{a^2}{a-b}$ Solution 1 $[asy] import geometry; point O=(0,0); point A=(10,5); point B=(10,0); point C; point D; line OA=line(O,A); line OB=line(O,B); // Lines OA and OB draw(OA); draw(OB); // Points O, A, and B dot(O); label("O",O,S); dot(A); label("A",A,NW); dot(B); label("B",B,S); // Segments AB, BC, and CD draw(A--B); pair[] x=intersectionpoints(perpendicular(B,OA),(O--A)); C=x[0]; dot(C); label("C", C, NW); draw(B--C); pair[] y=intersectionpoints(perpendicular(C,OB), (O--B)); D=y[0]; dot(D); label("D", D, S); draw(C--D); // Right Angle Markers markscalefactor=0.075; draw(rightanglemark(O,B,A)); draw (rightanglemark(B,C,O)); draw(rightanglemark(O,D,C)); // Alpha Labels markscalefactor=0.15; draw(anglemark(O,A,B)); draw(anglemark(C,B,O)); label("\alpha", (9.6,4.4)); // Length Labels label("a", midpoint(A--B), E); label("b", midpoint(B--C), E); label("c", midpoint(C--D), W); [/asy]$ For simplicity, let the first perpendicular from $\overleftrightarrow{OA}$ to $\overleftrightarrow{OB}$ be $\overline{AB}$, and let the second perpendicular have foot $C$ on $\overleftrightarrow{OA}$ . Further, let the perpendicular from $C$ to $\overleftrightarrow{OB}$ have foot $D$ and length $c$, as in the diagram. Also, let $\measuredangle OAB=\alpha$. 
From the problem, we have $AB=a$ and $BC=b$. By AA similarity, we have $\triangle OCB \sim \triangle OBA$, so $\measuredangle CBO=\alpha$ as well. In $\triangle ABC$, we see that $\sin\alpha=\frac{b}{a}$, and, in $\triangle CDB$, $\sin\alpha=\frac{c}{b}$. Equating these two expressions for $\sin\alpha$, we get that $\frac{b}{a}=\frac{c}{b}$, or, because $a,b,c>0$, $b=\sqrt{ac}$. Thus, $b$ is the geometric mean of $a$ and $c$. Note that if we remove the first perpendicular (i.e. the one with length $a$), we are left with a smaller version of the original problem, which will have the same equation for the limit (but this time expressed in terms of $b$ and $c$ rather than $a$ and $b$). Thus, if we let the length of the fourth perpendicular be $d$, then $c$ will equal the geometric mean of $b$ and $d$, and so on for the infinitude of perpendiculars. Thus, because the length of a given perpendicular (except the first one) is the geometric mean of the two adjacent perpendiculars, the lengths of the perpendiculars form a geometric sequence. Because the sequence's first two terms are $a$ and $b$, it has common ratio $\frac{b}{a}$. Because $b<a$, the common ratio is positive and less than $1$, so the sequence's infinite geometric series converges. This infinite sum is given by $\frac{a}{1-\frac{b}{a}}=\boxed{\frac{a^2}{a-b}}$, which is answer choice $\fbox{\textbf{(E)}}$. Solution 2 (Answer choices, intuition) Let $a$ and $b$ be measured with some units of length (say, meters). The limit of the sum of the lengths of the perpendiculars would, then, be measured in meters as well. Performing dimensional analysis on each of the answer choices, we can eliminate options (A) and (B), because they have units $\frac{\text{meters}}{\text{meters}}$, so they are dimensionless. Unfortunately, our other three options all have units of meters. Now, we have a $\frac{1}{3}$ chance of guessing the answer correctly, but we can go further.
Think of what happens as $\measuredangle BOA$ approaches $0^{\circ}$. Then, $a$ approaches $0$, and the perpendiculars have nearly no space to "bounce between" the two lines, and so they likely have zero total length. Likewise, if we think of what happens as $\measuredangle BOA$ approaches $90^{\circ}$, all of the perpendiculars except the first one (with length $a$) go to zero. From this intuition, one would think that $a$ would have a larger impact upon the total length of the perpendiculars. Out of the three answer choices left, this conjecture is only consistent with choice $\boxed{\textbf{(E) }\frac{a^2}{a-b}}$. To reinforce this decision, think about what happens as $\measuredangle BOA$ approaches $90^{\circ}$ again. We would expect the sum to approach $a$ and $b$ to approach $0$. Plugging in $b=0$ into the expression in choice (E), we get $\frac{a^2}{a}=a$, which is what we expected. On the other hand, options (C) and (D) equate to 0, which is clearly false. See Also The problems on this page are copyrighted by the Mathematical Association of America's American Mathematics Competitions.
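For the skeptical, the limit from Solution 1 can be checked numerically. This quick Python sketch (not part of the official solutions; names are mine) sums successive perpendicular lengths, each scaled by the common ratio $b/a$:

```python
import math

def perpendicular_sum(a, b, n_terms=200):
    """Sum the first n_terms perpendicular lengths.

    The first length is a, and each subsequent length is the
    previous one times the common ratio b/a (with 0 < b < a).
    """
    total, length = 0.0, a
    for _ in range(n_terms):
        total += length
        length *= b / a
    return total

a, b = 5.0, 3.0
# The partial sums converge to a^2 / (a - b), answer choice (E).
assert math.isclose(perpendicular_sum(a, b), a * a / (a - b))
```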
Game Theory, Conditional Preferences, and Social Influence Neoclassical noncooperative game theory is based on a simple, yet powerful synthesis of mathematical and logical concepts: unconditional and immutable preference orderings and individual rationality. Although this structure has proven useful for characterizing competitive multi-player behavior, its applicability to scenarios involving complex social relationships is problematic. In this paper we directly address this limitation by the introduction of a conditional preference structure that permits players to modulate their preference orderings as functions of the preferences of other players. Embedding this expanded preference structure in a formal and graphical framework provides a systematic approach for characterizing a complex society. The result is an influence network that allows conditional preferences to propagate through the community, resulting in an emergent social model which characterizes all of the social relationships that exist and which leads to solution concepts that account for both group and individual interests. The Ultimatum game is presented as an example of how social influence can be modeled with conditional preferences. Citation: Stirling WC, Felin T (2013) Game Theory, Conditional Preferences, and Social Influence. PLoS ONE 8(2): e56751. https://doi.org/10.1371/journal.pone.0056751 Editor: Attila Szolnoki, Hungarian Academy of Sciences, Hungary Received: November 26, 2012; Accepted: January 14, 2013; Published: February 25, 2013 Copyright: © 2013 Stirling, Felin. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Funding: No external sources of funding were used in this work. Competing interests: The authors have declared that no competing interests exist. 
The fundamental doctrine of game theory is that players make choices on the basis of preferences that take into account all factors that can influence their behavior. Classical game theory requires that preference orderings be categorical, meaning that they are individual, unconditional, and immutable. Arrow [1, p.51] put it succinctly: “It is assumed that each individual in the community has a definite ordering of all conceivable social states, in terms of their desirability to him. … It is simply assumed that the individual orders all social states by whatever standards he deems relevant.” With this model, each player comes to the game with a single preference ordering that, at least ostensibly, corresponds to its assessment of what is best for itself. Consequently, the natural concept of rational behavior is individual rationality: each player acts in a way that achieves its best possible outcome, regardless of the effect doing so has on other players. This doctrine is articulated by Harsanyi [2, p.13]: “Because all values and objectives in which the players are interested have been incorporated into their payoff functions, our formal analysis of any given game must be based on the assumption that each player has only one interest in the game—to maximize his own payoff.” One of the consequences of the categorical preference ordering structure is that the game is stripped of social context. Indeed, this is often viewed as a strength of game theory, which is designed to remove all irrelevant and redundant issues and reduce the problem to its bare-bones mathematical essence. This modeling assumption is compatible with the “hourglass” approach described by Slatkin [3]: A complex problem is reduced to a tractable mathematical model by eliminating all irrelevant issues and, once a solution is obtained, it is expanded back into the original context for interpretation. 
Friedman [4, p.13] asserts that the context has very little to do with the way the game is solved: “The economist has little to say about the formation of wants; this is the province of the psychologist. The economist's task is to trace the consequences of any given set of wants.” According to this point of view, each player must come to the game with a categorical preference ordering that completely defines its personal tastes and values in a way that simultaneously accounts for such dissimilar motives as egoism, altruism, benevolence, malevolence, and indifference to the welfare of others, and which is not susceptible to change as a result of social interaction. Furthermore, once the preferences are defined, the process used to define them has no bearing on the way the game should be played. This division of labor has been effective, particularly in economic settings where competition and market-driven forces dominate. But in more general social settings, expanding back into the context can lead to contradictions between observed and predicted behavior. Arrow [5, p.203] clearly delimits conditions for this division of labor approach to be valid. “Rationality in application is not merely a property of the individual. Its useful and powerful implications derive from the conjunction of individual rationality and other basic concepts of neoclassical theory—equilibrium, competition, and completeness of markets. … When these assumptions fail, the very concept of [individual] rationality becomes threatened, because perceptions of others and, in particular, their rationality become part of one's own rationality.” Thus, in contrast to Friedman's division of labor model, which separates the search for a rational solution from the context that generates the preferences, Arrow argues that the context can influence the rationality and, hence, the solution. 
When an individual's concerns truly extend beyond its own narrow interests, requiring it to express complex social interests within a framework that is explicitly designed to account for, and only for, individual interests is an artificial and unnecessary constraint. Although one may construct sophisticated and clever devices to cast social concerns as manifestations of individual interest, such attempts can lead to paradoxes such as having purely selfish reasons for acting unselfishly. At the end of the day, such mechanisms only allow the individual to simulate the interests of others; they do not allow general expressions of true social interest. Thus, although the dual premise of categorical utilities and individual rationality offers a convenient framework within which to model many decision making scenarios, it has its limitations. As Shubik [6, p.4] bluntly put it: "Economic man, operations research man and the game theory player were all gross simplifications. They were invented for conceptual simplicity and computational convenience in models loaded with implicit or explicit assumptions of symmetry, continuity, and fungibility in order to allow us (especially in a pre-computer world) to utilize the methods of calculus and analysis. Reality was placed on a bed of Procrustes to enable us to utilize the mathematical techniques available." One way to make the bed a better fit for its occupant is to acknowledge that context matters when both defining and using preference orderings and to respond to the argument advanced by Hausman [7, p.136] for "the need to supplement game theory with systematic inquiry into how agents confronting a strategic interaction construct the game they play." The issue of how to account for social relationships has of course been raised by many others [8]. For example, behavioral game theory introduces greater psychological realism into the structure of payoffs by focusing on such factors as fairness and reciprocity [9]–[15].
Furthermore, repeated and evolutionary game theories provide frameworks by which players may learn or evolve behavior, such as cooperation, that conforms to the social context [16]–[19]. Such models are used to demonstrate that players are not exclusively motivated by narrow self-interest, but also care about the payoffs and intentions of others [20]–[21]. However, for the most part, the payoffs associated with these approaches are explicitly categorical, and any sociality generated by these models remains a function of individual interests. Previous research on various forms of conditional preferences has included work that makes some distinctions between private and social preferences [22]–[25]. Conditional preferences are also used by multicriterion decision theory to characterize dependency relationships among different attributes [26]–[30]. Yet others have powerfully modified game theoretic assumptions by highlighting how the strategies of players are conditional on such factors as who focal players are surrounded by and the associated spatial distribution of strategies. For example, Szolnoki et al. use the ultimatum game to highlight the emergence of spatial patterns of empathy and fairness [31]–[32], along with highlighting how the imitation of emotions, rather than strategies, leads to higher social welfare [33]. And finally, yet others have highlighted how social interaction itself can be conditional on the reputation of agents [34]. Questions of aggregation have of course also been central in fields such as strategy and organization theory [35]. All of these approaches represent important advances to our understanding of human and social interaction, and modifications to the strong assumptions made by traditional approaches in economics that assume categorical preferences.
However, our approach differs in substance, syntax and application from the above work in that we seek to develop a formal approach to conditional preferences, where extant social relations and ties give rise to both individual and social preferences. Building on the work of Stirling [36], we initiate a systematic inquiry by moving the study of preference formulation further upstream, though not necessarily to the psychological and sociological headwaters of preference origination. More modestly, the goal is to provide a mathematical framework within which such issues can be systematically studied. In short, our explicit purpose and interest in this paper is to develop a formal model of how extant social relations, modeled as conditional preferences, play a role in games, social interaction, and aggregation. Conditional preferences We restrict attention to finite, strategic (normal form), noncooperative games. Let {X_1, …, X_n}, n ≥ 2, denote a set of players, and let A_i denote a finite space of feasible actions from which X_i may choose one element. A profile is an array a = (a_1, …, a_n) in the outcome space A = A_1 × ⋯ × A_n. Under classical game-theoretic assumptions, each X_i possesses a categorical utility u_{X_i} defined over A. By its construction, a categorical utility naturally leads to solution concepts that require each X_i to choose an action such that the resulting outcome is maximally preferable to it, regardless of the effect the outcome has on other players. When making decisions in a social environment, however, it is natural for an individual to take into consideration the opinions of others when forming her own opinions. In short, individuals may be influenced by the preferences of others for a number of reasons: they may like (or dislike, for that matter) the others involved, they may value others' opinions, or they may have an existing relationship with others (familial, friendship or professional).
Our approach is to incorporate these extended interests into the game by endowing each player with a family of conditional utilities that enable it to account for the social influence that the preferences of other players have on its preferences. Conditional utilities provide social linkages among players that enable simultaneous consideration of both individual and social interests. We show how graph theory can be used to characterize the way preference relationships propagate through a collective to generate an emergent social model that characterizes the interdependence relationships that exist and which leads to solution concepts that account for both group and individual interests. Our framework and formal model is general and thus it can readily be applied to a wide range of potential social contexts that feature extant social relations and influence. To illustrate how conditional preferences provide a natural way to account for social relations and influence, consider a hierarchical organization such as a manager-employee scenario. The manager can choose either action a_1 or action a_2, and the employee can choose either action b_1 or action b_2. Under classical theory, each must determine their categorical preference ordering over the outcome space {a_1, a_2} × {b_1, b_2}. The priorities of the employee, however, are likely to be influenced by the priorities of the manager. One way to proceed is for the employee to reason as follows: If the most preferred outcome for the manager were, say, (a_1, b_1), then the employee could define his ordering given that hypothesis. But if the manager were to most prefer some other outcome, the employee would define a different ordering. Continuing, the employee could form a set of four different preference orderings, each one conditioned on a different hypothesized preference ordering by the manager. This could be done without the employee knowing the manager's actual preference ordering.
The conditional preference orderings for the employee are the consequents of hypothetical propositions whose antecedents are assumptions regarding the preferences of the manager. There is an important difference in the interpretation of the manager's preference ordering and the employee's preference orderings. Whereas the manager categorically orders her preferences over the possible joint actions of the two players, the employee conditionally orders his preferences for joint action with respect to the preferences for joint action of his manager. Thus conditional preferences can provide a powerful approach to more formally modeling how extant social relations play a role in influencing the behavior of actors. This line of reasoning is similar to the type of reasoning employed by multivariate probability theory. The power of probability theory is succinctly expressed by Shafer (cited in [37, p.15]): "probability is not really about numbers; it is about the structure of reasoning." And one of the powerful reasoning structures that probability theory offers is a framework within which to form hypothetical propositions. In the probabilistic context, given a collective of two discrete random variables (X, Y), the conditional probability mass function p_{Y|X}(y|x) is the consequent of a hypothetical proposition regarding the probability that Y = y, given the antecedent that X = x. This reasoning structure is epistemological, both semantically and syntactically. The semantics deals with notions of knowledge, what to believe, and how to justify beliefs, and the syntax deals with the way beliefs are expressed and combined. For example, the chain rule, p_{XY}(x, y) = p_X(x) p_{Y|X}(y|x), governs the way beliefs in one domain, as expressed by X's marginal probability mass function, should be combined with conditional beliefs in another domain, as expressed by Y's conditional probability mass function, to govern the beliefs of the collective.
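The chain-rule syntax just described can be illustrated numerically. In this toy Python sketch (the numbers are my own invention, not from the paper), a marginal and a conditional mass function are synthesized into a joint one:

```python
# Marginal mass function p_X(x) for a binary random variable X.
pX = {0: 0.4, 1: 0.6}

# Conditional mass function p_{Y|X}(y | x): one distribution
# over Y for each hypothesized value of X.
pY_given_X = {0: {0: 0.9, 1: 0.1},
              1: {0: 0.2, 1: 0.8}}

# Chain rule: p_{XY}(x, y) = p_X(x) * p_{Y|X}(y | x).
pXY = {(x, y): pX[x] * pY_given_X[x][y]
       for x in pX for y in pY_given_X[x]}

# The synthesized joint beliefs form a valid mass function.
assert abs(sum(pXY.values()) - 1.0) < 1e-12
```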
This reasoning structure, however, is not limited to epistemological contexts; it may also be applied to contexts where the semantics deals with notions of effective and efficient action, and where the syntax deals with the way preferences for taking action are expressed and combined. Given a set of players {X_1, …, X_n} with outcome space A, let the parent set pa(X_i) be the p_i-element subset of players whose preferences influence X_i's preferences. Now consider the hypothetical proposition whose antecedent is the assumption that each parent considers, for whatever reason, a particular profile to be the outcome that should occur. We term such a profile a conjecture. Let α_i denote the joint conjecture of pa(X_i). The consequent of this hypothetical proposition is a conditional utility u_{X_i|pa(X_i)}(a | α_i) for each profile a. (This notation is analogous to the notation used for conditional probability. The argument on the left side of the conditioning symbol "|" denotes the profile corresponding to X_i, and the argument on the right side of the conditioning symbol denotes the joint conjecture of the agents who influence X_i.) If pa(X_i) is empty, then u_{X_i|pa(X_i)} = u_{X_i}, a categorical utility. Without loss of generality (via a positive affine transformation if necessary), we may assume that all utilities are non-negative and sum to unity. With this constraint, the utilities possess the syntax of a mass function. The collective constitutes a finite, normal form, noncooperative conditional game. Neoclassical game theory has not developed in a way that sanctions notions of a group-level preference ordering. As [38, p.124] has observed, "It may be meaningful, in a given setting, to say that group 'chooses' or 'decides' something. It is rather less likely to be meaningful to say that the group 'wants' or 'prefers' something." Although this sentiment may be appropriate when all utilities are categorical, this line of reasoning loses much of its force when social linkages exist among the players.
The existence of conditional utilities enables an important added dimension to decision making in a complex social environment: Once interest extends beyond the self, considerations of group-level interest become relevant and should not be suppressed. Returning to the manager-employee example, a natural question is: How should the categorical preferences of the manager and the conditional preferences of the employee be combined to form an emergent preference ordering for the group? One way to address these questions is to exploit the obvious syntactical similarity of conditional utilities and conditional probabilities by forming an analogy between probability theory and utility theory. If this were done, then mathematical operations such as marginalization, independence, and the chain rule could provide a powerful framework within which to characterize and analyze complex social systems. It is not sufficient, however, simply to make a syntactical correspondence between belief modeling and preference modeling. Beliefs are not preferences (unless we engage in wishful thinking), and considerations of what to believe are not the same as considerations of what to prefer. Thus it is not obvious that the syntax of probability theory will apply in a preference-oriented context. Any such correspondence must be rigorously justified. To address this issue, it is helpful first to take a close look at the way probability theory is developed. The traditional treatment is to view conditional probability as the ratio of a joint probability and a marginal probability; i.e., p_{Y|X}(y|x) = p_{XY}(x, y) / p_X(x), from which the chain rule follows trivially. The important feature of this development, however, is that the definition of conditional probability is dependent on the epistemological context; it is a re-normalizing of probability: the belief that Y = y is true given that X = x is true is the ratio of the belief that both are true and the belief that X = x is true.
To proceed along these lines to justify the chain rule in a preference-oriented context would either require a) constructing an analogue to a probability space, which would seem to be a rather tedious undertaking, or b) ignoring foundational theoretical concerns and relying on intuition and ad hoc reasoning. Fortunately, there is a way to arrive at the chain rule that does not rely upon the standard definition of conditional probability. This approach requires an important change in perspective. Rather than view the joint probability mass function as the primary component from which marginal and conditional probabilities can be derived, an alternative view is to consider the conditional and marginal probabilities as the primitive components from which the joint probability can be synthesized. The development of probability theory from this perspective has been provided by [39] and [40], and is entirely in keeping with the observation by [37] that dependence relationships are the fundamental building blocks of probabilistic knowledge. Motivated by the probability context, let us consider a general context scenario involving a collective of entities, each of which possesses some notion of ordering the alternatives available to it, and whose ordering notions can be influenced by other agents. A convenient and powerful way to express such influence is with graph theory. Consider the two-vertex graph illustrated in Figure 1, and let u_{X_1|X_2} and u_{X_2|X_1} be conditional ordering functions for X_1 given X_2 and for X_2 given X_1, respectively, over some finite domains A_1 and A_2. Also, let u_{X_1} and u_{X_2} be categorical ordering functions for X_1 and X_2. Suppose we wish to synthesize a joint ordering function u_{X_1X_2} from these components. The general form would be

u_{X_1X_2} = F(u_{X_1}, u_{X_2}, u_{X_1|X_2}, u_{X_2|X_1}).   (1)

This form permits the possibility of indirect self-influence, that is, X_1 influences X_2, which in turn influences X_1, and so forth, which could lead to an infinite regress.
We may eliminate such behavior by stipulating that influence flows must be uni-directional, that is, by imposing acyclicity. Then we may simplify the structure by eliminating one set of marginal and conditional orderings from the argument list. If this is done, however, consistency requires that the joint ordering be invariant to the way the problem is framed, that is, we require

F(u_{X_1}, u_{X_2|X_1}) = F(u_{X_2}, u_{X_1|X_2}).   (2)

Framing invariance is a strong condition to impose upon a collective. It means that the same joint ordering will obtain for the setting in which X_1 influences X_2 as for the setting in which X_2 influences X_1 in Figure 2. More generally, framing invariance means that if aggregation can be framed in more than one way using exactly the same information (although coded differently), then all framings will result in the same joint ordering. Framing invariance is always assumed in probabilistic contexts, since the joint probability mass function is invariant to the order in which the random variables are considered. Given a set of random variables {X_1, …, X_n}, let (X_{i_1}, …, X_{i_n}) be an arbitrary permutation of it. Then p_{X_1⋯X_n}(x_1, …, x_n) = p_{X_{i_1}⋯X_{i_n}}(x_{i_1}, …, x_{i_n}) must hold. In an arbitrary context, however, framing invariance cannot be automatically assumed to hold. Consider the manager-employee scenario. Under the original framing, the manager M possesses a categorical utility u_M and the employee E possesses a family of conditional utilities u_{E|M}. Framing invariance requires that a categorical utility u_E must exist for the employee and a family of conditional utilities u_{M|E} must exist for the manager such that

F(u_M, u_{E|M}) = F(u_E, u_{M|E})   (3)

for all outcomes. For this to hold, some concept of reciprocity or symmetry must exist between the two participants. Obviously, framing invariance would fail if the manager were so intransigent that she would not, under any circumstances, take the opinions of the employee under consideration, or if the employee were so incompetent that he could not form preferences over the outcome space. Invoking framing invariance does not require that the alternate framing actually be defined, only that it exist in principle.
The richness and variability of human behavior, however, make it impossible to impose this condition without justification. Nevertheless, framing invariance provides a reasonable framework within which to model many social relationships, especially for scenarios involving coordinated behavior or the need to compromise, and represents a significant generalization of the traditional categorical utility model. Obviously, the utilities of classical game theory are framing invariant, since then the conditional utilities coincide with the categorical utilities. Thus, framing invariance is a weaker condition than the categorical preference structure of neoclassical theory. In addition to acyclicity and framing invariance, we must impose one more condition on F. Referring to (2), suppose u_{X_1} were increased but u_{X_2|X_1} were held constant. Common sense dictates that the joint ordering should not decrease. A similar argument applies if u_{X_2|X_1} is increased and u_{X_1} is held constant. Thus, F must be non-decreasing (monotonic) in each argument in order to avoid counter-intuitive influence behavior. For collectives involving more than two entities, we must move beyond the simple graph defined in Figure 2. In general, a directed graph is a collection of vertices and directed edges that constitute the links between vertices. Each vertex X_i takes values over some finite domain space A_i. A directed acyclic graph (DAG) is a directed graph such that no sequence of edges returns on itself. If a vertex has no incoming edges (i.e., it has no parents), it is a root vertex. To complete the specification of a DAG, each root vertex must possess a categorical preference ordering over its own states. Figure 3 illustrates a three-vertex DAG with X_1 (a root vertex) influencing X_2, and with X_1 and X_2 both influencing X_3. The edges are denoted by the conditional ordering functions u_{X_2|X_1} and u_{X_3|X_1X_2}. The DAG structure is not dependent on any specific context; it is a general model of how influence propagates uni-directionally through any kind of collective.
(Graphical models of complex economic systems have recently appeared in the literature as a convenient and powerful means of representing social relationships (see [41]–[44]). Such models are often used to characterize the spread of infectious diseases and the propagation of information. However, none of these discussions involve the formal modeling of conditional preferences.) A key issue is how the individual preferences, as represented by categorical preference orderings for root vertices and conditional preference orderings for children vertices, can be combined to create a preference ordering for the group. The main contribution of this paper is the aggregation theorem, which is formally stated and proved in the Methods and Materials section. Essentially, the aggregation theorem establishes conditions that justify applying the chain rule syntax to preference aggregation semantics. We apply this result to the social influence context as follows. Let {X_1, …, X_n} be an n-member influence network such that each X_i possesses its own finite action space A_i and orders its preferences over the outcome space A = A_1 × ⋯ × A_n. Let X_i's parent set, denoted pa(X_i), be the p_i-element subset whose preferences influence X_i's preferences. If p_i = 0, then X_i is a root vertex and will possess a categorical utility over the outcome space. If p_i > 0, then X_i's conditional utility will be of the form u_{X_i|pa(X_i)}(a | α_i), where α_i ranges over the set of conjectures of pa(X_i). Applying the aggregation theorem,

U_{X_1⋯X_n} = ∏_{i=1}^{n} u_{X_i|pa(X_i)}.   (4)

The aggregation theorem establishes that acyclicity, monotonicity, and framing invariance are necessary and sufficient for the chain rule to apply. Thus, given these conditions, the syntax of probability theory applies to preference modeling as well as to the conventional application of the chain rule to belief modeling. However, simply imposing the chain rule on a set of conditional preference orderings without complying with these requirements would be problematic. The aggregated utility is a very complex function and interpreting it is equally complex.
Analogous to the way a joint probability mass function provides a complete description of the dependency relationships that exist among the random variables in terms of belief, the aggregated utility provides a complete description of the dependency relationships that exist among the players in terms of preference. Interpreting the aggregated utility, however, is not as straightforward as interpreting a joint probability mass function. To develop this concept, we introduce the notion of concordance and then provide interpretations of independence and marginalization.

Social influence can propagate through a group in complicated ways. may influence , who may in turn influence , and so on, thereby creating a cascade of social relationships that interconnect the players in ways that cannot be easily predicted. In this expanded context, it is not sufficient simply to create a payoff array to be subjected to standard solution concepts such as dominance and equilibrium. Instead, we must construct a social model that accounts for all of the interrelationships. One concept that applies in multiple contexts is the notion of concordance, and in the sequel we will term a concordant utility. Since it is a function of profiles, the concordant utility cannot be used directly to define a group-level ordering over the outcomes. Rather, it provides a representation of the social consistency of the group, in that it provides a measure of the degree of severity of controversy.

To illustrate, let us consider a two-agent group . Let and be such that is best for and next-best for , and is worst for and best for . It is reasonable to argue that if both were to conjecture , the degree of controversy would be fairly small, since both agents receive a reasonable reward. If both were to conjecture , however, the outcome would be worst for one and best for the other; hence the degree of controversy would be quite large. Accordingly, the condition would obtain.
The concordant utility permits the definition of an emergent notion of social consistency, namely, an aversion to controversy. The expression means that if the group were jointly to conjecture , the level of controversy for would be less than or equal to what it would be if the group were jointly to conjecture . “Consistency,” as considered here, can be positive, in the sense of cooperation for individuals with common interests, or negative, when conflict would not be controversial, as would be the case in military operations or athletic competitions. In general, concordance captures the context of the game as the conditional preferences propagate through the vertices of the network.

Let and be disjoint subgroups such that (5). These subgroups are independent if neither subgroup influences the other, in which case the concordant utility of the union is the product of the concordant utilities of the subgroups. That is, (6).

The concordant utility provides a complete representation of the way preferential influence, as modeled by ex ante conditional utilities, propagates through a collective of players. Marginalization extracts the ex post preferences of each player as a result of this propagation: (7), where the notation means that the sum is taken over all arguments except . Marginalization is the mechanism by which individual preferences emerge as a result of the social relationships that exist among individuals. Thus, even though an ex ante categorical ordering may not be given, marginalization provides an ex post unconditional ordering; that is, an ordering obtained after the social relationships among the agents have been taken into account. These ex post categorical utilities represent the players' enlightened self-interest after systematically taking into account the degrees to which they are influenced by the preferences of others as expressed by their ex ante conditional utilities.
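Marginalization as in (7) is just a sum over all arguments except one. A minimal sketch, assuming an invented concordant utility over three binary players (the numbers are illustrative, not from the paper):

```python
import itertools

actions = [0, 1]

# Hypothetical concordant utility over three binary players.
concordant = {k: 1 / 8 for k in itertools.product(actions, repeat=3)}
concordant[(0, 0, 0)] = 1 / 4   # shift weight toward one joint conjecture...
concordant[(1, 1, 1)] = 0.0     # ...and away from another (sum stays 1)

def marginal(u, i):
    """Ex post utility of player i: sum over all arguments except the i-th,
    as in equation (7)."""
    m = {}
    for profile, value in u.items():
        m[profile[i]] = m.get(profile[i], 0.0) + value
    return m

print(marginal(concordant, 0))
```

The resulting single-argument function is structurally indistinguishable from a categorical utility, which is why it can feed directly into classical solution concepts.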
Example 1 Let comprise a three-player collective such that possesses a categorical utility, possesses a conditional utility, and possesses a conditional utility (see Figure 3). Applying the chain rule yields the concordant utility (8). Since possesses an ex ante categorical utility, the ex post utility for will coincide with that categorical utility. The ex post utility of is given by (9), with a similar expression for the ex post utility for .

Social solution concepts

The ex post marginal utilities defined by (7) provide a preference ordering for individual players. Once obtained, the history of their creation ceases to be relevant. In fact, such a procedure is nothing more than an application of Friedman's division of labor. They are unconditional and are indistinguishable in structure from ex ante categorical utilities. Consequently, they may be used according to any classical solution concept, such as Nash equilibria. If this were the end of the story, then all of the above development would be nothing more than a prelude to classical game theory, and we would fall short of our goal to offer a true expansion of the theory. But there is more to be said. In contrast to classical game theory, which eschews notions of group-level preferences, the existence of explicitly defined social influence relationships opens the possibility of defining a group-level preference ordering that is more than just an aggregation of categorical preference orderings. Our approach is to construct another kind of marginal. Just as we may extract marginals from the concordant utility for each individual, we may also extract a marginal for the group. To proceed, we observe that since each player can control only its own actions, what is of interest is the utility of all players making conjectures over their own action spaces.

Definition 1 Consider the concordant utility . Let denote the th element of ; that is, is 's conjecture profile.
Next, form the action profile by taking the -th element of each 's conjecture profile, . Now let us sum the concordant utility over all elements of each except to form the group utility for , yielding (10).

The group utility provides a complete ex post description of the social relationships between the members of a multiplayer group. Unless its members are independent, this utility is not simply an aggregation of categorical utilities. Rather, it is an emergent notion of group preference. Although the group does not act as a single entity, or superplayer, the group utility nevertheless informs each member of the group regarding the effect of their collective actions on the society. Each member can extract its own single-player utility as a function of its own action by computing its own marginal welfare function.

Definition 2 The individual utility of is the th marginal of , that is, (11).

The existence of both group-level and individual-level preference orderings provides a framework within which to create solution concepts that simultaneously take into consideration the (emergent) interests of the group and the individuals. We describe one such concept that involves negotiation.

Definition 3 The maximum group welfare solution is (12) and the maximum individual welfare solution is (13). If for all , the action profile is a consensus choice.

In general, however, a consensus will not obtain, and negotiation may be required to reach a compromise solution. The existence of group and individual utilities provides a rational basis for meaningful negotiations; namely, that any compromise solution must at least provide each player with its security level—that is, the maximum benefit it can guarantee itself regardless of the decisions that others might make. The security level for is the maximin profile, defined as (14), where is the ex post utility given by (7). In addition to individual benefit, we must also consider benefit to the group.
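The maximin security level of (14) can be computed by brute force over action profiles. A small sketch with invented two-player utilities (the values are illustrative assumptions, not from the paper):

```python
# Hypothetical action space and ex post utility of player 1 over joint
# profiles (player 1's action first, player 2's second).
actions = ["H", "L"]
u1 = {("H", "H"): 0.5, ("H", "L"): 0.0,
      ("L", "H"): 1.0, ("L", "L"): 0.1}

# Maximin: player 1 picks its own action to maximize the worst case over
# the other player's possible actions.
security = max(min(u1[(a1, a2)] for a2 in actions) for a1 in actions)
best_action = max(actions, key=lambda a1: min(u1[(a1, a2)] for a2 in actions))
print(best_action, security)
```

Here the tempting action "H" can be driven to zero by the other player, so the guaranteed (security) value comes from "L" even though its best-case payoff is lower.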
Although a security level, per se, for the group cannot be defined in terms of a minimum guaranteed benefit (after all, the group itself does not actually make a choice), a possible rationale for a minimum acceptable group benefit is that it should never be less than the smallest benefit to the individuals. This approach is consistent with the principles of justice espoused by [45], who argues, essentially, that a society as a whole cannot be better off than its least advantaged member. Accordingly, let us define a security level for the group as , where we divide by the number of players since the utility for the group involves players.

Now define the group negotiation set (15), the individual negotiation sets (16), and the negotiation rectangle (17). The negotiation rectangle is the set of profiles such that each member's element provides it with at least its security level. Finally, we define the compromise set (18), which simultaneously provides each member of the group at least its security level, as well as meeting the group's security level. If , then no compromise is possible at the stated security levels. One way to overcome this impasse is to decrement the security level of the group iteratively by a small amount, thereby enlarging until . If after the maximum reduction in group security has been reached, then no compromise is possible, and the group may be considered dysfunctional. Another way to negotiate is for individual members to iteratively decrement their security levels. Once , any element of this set provides each member, as well as the group, with at least its security level. If contains multiple elements, then a tie must be broken. One possible tie-breaker is (19), which provides the maximum benefit to the group such that each of its members achieves at least its security level.
This development has assumed the full generality of conditioning; namely, that a conditional utility depends on the entire conjecture profiles of all of the parents, and that the conditional utility is a function of all elements of the action profile. This fully general model can be extremely complex, since each player is under obligation to define its preferences for every possible joint conjecture of its parents—a potentially intractable task. Although it is necessary for the theory to have the ability to accommodate maximal complexity, sociation provides a way to control complexity in keeping with the observation by [46, p.176] that “complexity is no argument against a theoretical approach if the complexity arises not out of the theory itself but out of the material which any theory ought to handle.” It can be the case that a player does not condition its preferences on the entire conjecture profiles of its parents. It can also be the case that a player's utility does not depend upon the entire action profile. To account for such situations, we introduce the notion of sociation. Suppose has parents , with conditional utility , where the joint conjecture comprises the conjectures of .

Definition 4 A conjecture subprofile for , denoted , is the subset of that influences 's conditional utility. We then have (20). is completely conjecture sociated if for and . It is completely conjecture dissociated if for and , in which case (21). Otherwise, the group is partially conjecture dissociated.

Definition 5 A utility subprofile, denoted , comprises the subset of actions by , , that affect 's utility. We then have (22), where denotes with the dissociated elements of its argument removed. is completely utility sociated if for . It is completely utility dissociated if for , in which case (23). Otherwise, the group is partially utility dissociated.

For a partially sociated group, the concordant utility assumes the form , where is with the dissociated arguments removed.
Definition 6 A group is completely dissociated if it is both completely conjecture dissociated and completely utility dissociated, in which case (24). For a completely dissociated group, the concordant utility becomes the group utility (25).

The ultimatum game

The Ultimatum game ([47]) has received a great deal of attention as an example of situations where experimental evidence contradicts the assumptions of classical game theory. Ultimatum is a two-player game defined as follows: , the proposer, has access to a fortune, , and offers , the responder, a portion ; the responder then chooses whether or not to accept the offer. If accepts, then the two players divide the fortune between themselves according to the agreed upon portions, but if declines the offer, neither player receives anything. The game loses little of its effect, and its analysis is much simpler, if we follow the lead of [48] and consider the so-called minigame, with only two alternatives for the proposer: and (high and low), with . This minigame analysis captures the essential features of the continuum game and permits us to see more clearly the relationships between the two players. With this restriction, the action sets for the two players are and for and , respectively. The payoff matrix for the Ultimatum minigame is illustrated in Table 1. This game has a dominant strategy for each player; namely, should play and should play . The response of many players, however, indicates that they typically are not utility maximizers. Thus, this game is an excellent example of a situation where social considerations appear to be significant. Analysts of the game have theorized that responders decline an offer they deem to be unfair because they are emotionally connected with the consequence. A term that seems to capture this emotion is indignation. Another feature that emerges from the play of this game is that the proposer may be motivated by considerations other than greed.
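The classical dominant-strategy prediction for the minigame can be checked mechanically. The sketch below uses an invented fortune and invented offer sizes, not the payoffs of Table 1:

```python
# Illustrative Ultimatum minigame: fortune F, high offer h, low offer l.
# These numbers are assumptions for the sketch, not the paper's values.
F, h, l = 10, 5, 1

def payoffs(offer, accepts):
    """(proposer, responder) payoffs: split the fortune if accepted,
    nothing for either player otherwise."""
    return (F - offer, offer) if accepts else (0, 0)

# A strictly payoff-maximizing responder accepts any positive offer, so the
# proposer's best reply is the low offer -- the classical prediction that
# laboratory play routinely contradicts.
best_offer = max((h, l), key=lambda o: payoffs(o, accepts=True)[0])
print(best_offer)
```

The point of the paper's indignation and intemperance indices is precisely to model why real responders reject, and real proposers avoid, the low offer that this calculation singles out.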
Even if the proposer is greedy, it may still make an equitable offer if it suspects that the responder would be prone to reject an inequitable one. A concept that expresses this emotion is intemperance. We denote these two emotional attributes by the intemperance index and the indignation index , and assume that and . The condition means that the proposer is extremely avaricious, but if , then the proposer may be viewed as having altruistic tendencies. The condition means that the responder is easily offended, while means that it is extremely tolerant. In a conventional game the payoffs define the players' utilities, but when social issues are involved, we need to relax the requirement for strict alignment of preferences and payoffs in order to apply the social parameters. There are many ways to frame such a game; one natural way is to endow the proposer with a categorical utility and endow the responder with a conditional utility. The general fully sociated game would require the players to define preferences over the entire product space , which would require to specify four values for its categorical utility and to specify sixteen values (four for each of 's four possible conjectures). The game may be simplified by adopting various levels of dissociation, and to keep this presentation as simple as possible, we will assume a condition of complete dissociation. Under this condition, each 's categorical utility is with respect to its actions only, and 's conditional utility is with respect to its actions only, conditioned on conjectures regarding 's actions only.
We thus make the following assignments: (26), (27), (28). Under the assumption of complete dissociation, the concordant utility reduces to the group utility (see (10)), with components of the form for (see (35)), yielding . The outcome that maximizes group welfare depends on the values of and as follows: (29), (30), (31). Upon simplification of these expressions, we may identify the values of and that maximize group welfare: (32), (33), (34). Notice that the outcome is never the maximum outcome for the group. The individual welfare functions are (see (11)): , from which it is easily seen that is best for if and is best for if . Figure 4 displays the regions of the - plane where the group and individual preferences are in agreement. The region labeled represents the values of where the group prefers the joint outcome to all other outcomes and, simultaneously, prefers to and prefers to . Similar interpretations apply to the regions labeled and . In all other portions of the - plane, group preferences are not consistent with individual preferences.

This example illustrates how social parameters may be explicitly embedded into the mathematical structure of the utilities. Without incorporating context into the model, deviations from behavior predicted by the payoffs would require the invocation of psychological and sociological attributes that, while not part of the mathematical model, are necessary to explain the deviations. Such attributes merely overlay the basic mathematical structure of a game and avoid or postpone a more profound solution, namely, the introduction of a model structure that explicitly accounts for complex social relationships and notions of rational behavior that extend beyond narrow self-interest and categorical preferences.

In this paper we have developed a formal model of conditional preferences, with application to various forms of social interaction such as game theory and preference aggregation [36].
We modify traditional models of social interaction that presume categorical individual preferences, preferences that are assumed to be fixed and invariant. We provide a model of how extant social relations influence and condition individual preferences and aggregate, social outcomes. We have thus sought to introduce increased realism to existing models of games and social interaction by showing how individual preferences interact in nontrivial ways in social settings and how collective outcomes aggregate and emerge as a result of this conditioning, social interaction and influence. Our formal approach is generalizable to many specific social contexts, beyond the stylized settings of game theory and social choice. That is, our approach is general in the sense that extant social relations and conditioning can feasibly be driven by many factors. Our model is applicable to any setting where agents are linked and some form of interaction or aggregation is needed. For example, the conditioning of individual preferences might have to do with such social or interactional factors as spatial relations, expertise, hierarchy, professional and managerial relationships, or friendship networks. Our model is generalizable to these settings and might thus also be applied to settings such as the design of artificial or expert systems, the aggregation of information, or coordination within, or design of, social systems. Since our approach is meant to be general, future work might look at the boundaries of our argument, that is, how social conditioning perhaps differs between various social contexts and how our general model of conditioning and social influence may require context-specific amendments. Furthermore, our model also takes extant relations as a given and thus has little to say about where social relations or structures come from, or how these structures evolve as individuals interact over time. 
Thus there is an opportunity to study the emergence and evolution of the social relations that condition individual and aggregate preferences and outcomes. Overall, many different solution concepts have been proposed since the inception of game theory, with the bulk of attention focusing on concepts that conform to the individual rationality assumption, including minimax theory, notions of equilibrium, coalition forming, and repeated games. For example, repeated and evolutionary game theories have provided frameworks within which to study how players learn and adapt to their social environments. The field of behavioral economics has sought to imbue games with greater psychological realism by introducing social parameters into the preference models. All of these threads, however, are ultimately connected to the fundamental mathematical structure of categorical utilities and the logical structure of individual rationality. This paper at least strains, if not breaks, those threads: categorical preference orderings are explicitly replaced by conditional preference orderings, and individual rationality is replaced by a notion of simultaneous group and individual accommodation. We have thus developed a formal model of social interaction, focused on capturing how individuals influence each other and how this influence propagates in and through social structures, both at the individual and aggregate levels.

Materials and Methods

Theorem 1 (The Aggregation Theorem). Let be the vertices of a DAG, where each takes values in some set . Let be the set of parents of . For any vector of states , let denote the subvector corresponding to . Also, let be a mass function defining the conditional ordering of the states of given . If , then is a root vertex and , a categorical ordering of the states of . The joint ordering function for the collective is of the form (35) if and only if acyclicity, monotonicity, and framing invariance hold.
We first prove this result for and then extend to the general case. Consider the three-vertex DAG illustrated in Figure 3 and let denote a mass function that orders the states of . Let denote a mass function that conditionally orders the states of given the state of , and let denote a mass function that conditionally orders the states of given the states of and . One way to frame this problem is to aggregate and first and to aggregate the result with , yielding (36), (37), (38). Another way to frame the problem is first to aggregate and conditioned on and then to aggregate the result with , yielding (39), (40), (41). By framing invariance, these two ways of aggregating must yield the same joint ordering: (42). Thus, the joint ordering satisfies the associativity equation (43), evidently first studied by [49]. By inspection, setting is a solution to this equation. Cox [39] (also see [40]) has shown that if is differentiable, then the general solution to the associativity equation is of the form for any monotonic continuous . Taking yields (44); hence, (45).

To deal with the case , consider the DAG illustrated in Figure 5, where is a dummy vertex with a singleton range space , with . Since has no influence on either or , and . Substituting these expressions into (41) yields (46).

To establish this result for , we apply the aggregation theorem to obtain (47). But (48). Successive applications result in (49), which is (35).

To prove the converse, monotonicity and acyclicity are obvious. To establish framing invariance, we first consider the case . Applying (35), (50). We may recognize the first two terms on the right-hand side of (50) as the joint ordering function for and , that is, (51). Thus, (52). Furthermore, we may recognize the last two terms on the right-hand side of (50) as the conditional joint ordering function for given , that is, (53). Thus, (54). Comparing (52) with (54) establishes framing invariance for . The general case for follows by similar manipulations of (35).
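The product-form solution to the associativity equation (43) can be sanity-checked numerically: with g(x, y) = x * y, the two framings g(g(x, y), z) and g(x, g(y, z)) agree for every choice of arguments. The grid of values below is arbitrary:

```python
import itertools

def g(x, y):
    """Candidate aggregation operator: the product form cited from Cox."""
    return x * y

# Check the associativity equation g(g(x, y), z) == g(x, g(y, z))
# over an arbitrary grid of mass-function values in [0, 1].
vals = [0.1, 0.3, 0.7, 1.0]
ok = all(abs(g(g(x, y), z) - g(x, g(y, z))) < 1e-12
         for x, y, z in itertools.product(vals, repeat=3))
print(ok)
```

This is only a spot check, of course; the theorem's content is that (up to a monotonic reparameterization) the product is the general differentiable solution, not merely one solution.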
Author Contributions Conceived and designed the experiments: WS TF. Performed the experiments: WS TF. Analyzed the data: WS TF. Contributed reagents/materials/analysis tools: WS TF. Wrote the paper: WS TF.
Verification, Validation, and Uncertainty Quantification

Notes on Chapter 8: Verification, Validation, and Uncertainty Quantification by George Em Karniadakis [1]. Karniadakis provides the motivation for the topic right off:

In time-dependent systems, uncertainty increases with time, hence rendering simulation results based on deterministic models erroneous. In engineering systems, uncertainties are present at the component, subsystem, and complete system levels; therefore, they are coupled and are governed by disparate spatial and temporal scales or correlations.

His definitions are based on those published by the Defense Modeling and Simulation Office (DMSO) and subsequently adopted by the AIAA and others. Verification is the process of determining that a model implementation accurately represents the developer's conceptual description of the model and the solution to the model. Hence, by verification we ensure that the algorithms have been implemented correctly and that the numerical solution approaches the exact solution of the particular mathematical model, typically a partial differential equation (PDE). The exact solution is rarely known for real systems, so “fabricated” solutions for simpler systems are typically employed in the verification process. Validation, on the other hand, is the process of determining the degree to which a model is an accurate representation of the real world from the perspective of the intended uses of the model. Hence, validation determines how accurate the results of a mathematical model are when compared to the physical phenomenon being simulated, so it involves comparison of simulation results with experimental data.
In other words, verification asks “Are the equations solved correctly?” whereas validation asks “Are the right equations solved?” Or as stated in Roache (1998) [2], “verification deals with mathematics; validation deals with physics.” He addresses the constant problem of validation succinctly:

Validation is not always feasible (e.g., in astronomy or in certain nanotechnology applications), and it is, in general, very costly because it requires data from many carefully conducted experiments.

Getting decision makers to pay for this experimentation or testing is especially problematic when they were initially sold on using modeling and simulation as a way to avoid testing. After this the chapter goes into an unnecessary digression on inductive reasoning. An unfortunate common thread that I’ve noticed in many of the V&V reports I’ve read is they seem to think Karl Popper had the last word on scientific induction! I think the V&V community would profit greatly by studying Jaynes’s theoretically sound pragmatism. They would quickly recognize that the ’problems’ they perceive in scientific induction are little more than misunderstandings of probability theory as logic.

The chapter gets back on track with the discussion of types of error in simulations:

Uncertainty quantification in simulating physical systems is a much more complex subject; it includes the aforementioned numerical uncertainty, but often its main component is due to physical uncertainty. Numerical uncertainty includes, in addition to spatial and temporal discretization errors, errors in solvers (e.g., incomplete iterations, loss of orthogonality), geometric discretization (e.g., linear segments), artificial boundary conditions (e.g., infinite domains), and others.
Physical uncertainty includes errors due to imprecise or unknown material properties (e.g., viscosity, permeability, modulus of elasticity, etc.), boundary and initial conditions, random geometric roughness, equations of state, constitutive laws, statistical potentials, and others. Numerical uncertainty is very important, and many scientific journals have established standard guidelines for how to document this type of uncertainty, especially in computational engineering (AIAA 1998 [3]).

The examples given for effects of ’uncertainty propagation’ are interesting. The first is a direct numerical simulation (DNS) of turbulent flow over a circular cylinder. In this resolved simulation, the high wave numbers (smallest scales) are accurately captured, but there is disagreement at the low wave numbers (largest scales). This somewhat counter-intuitive result occurs because the small scales are insensitive to experimental uncertainties about boundary and initial conditions, but the large scales of motion are not.

The section on methods for dealing with modelling uncertain inputs is sparse on details. Passing mention is made of Monte Carlo and quasi-Monte Carlo methods, sensitivity-based methods, and Bayesian approaches.

The section on ’Certification / Accreditation’ is interesting. Karniadakis recommends designing experiments for validation based on the specific use or application rather than based on a particular code. This point deserves some emphasis. It is an often voiced desire from decision makers to have a repository of validated codes that they can access to support their various and sundry efforts. This is an unrealistic desire. A code cannot be validated as such; only a particular use of a code can be validated.
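For readers wanting a concrete picture, plain Monte Carlo propagation of input uncertainty looks roughly like this. The "model" and the input distributions below are invented stand-ins for a real simulation, chosen only to show the mechanics:

```python
import random
import statistics

random.seed(0)

def model(viscosity, permeability):
    # Placeholder for an expensive physics simulation (illustration only).
    return viscosity / permeability

# Sample the uncertain physical inputs (here, normal random variables),
# push each sample through the model, and summarize the output spread.
samples = [model(random.gauss(1.0, 0.05), random.gauss(2.0, 0.1))
           for _ in range(20_000)]

print(f"output mean ~ {statistics.mean(samples):.3f}, "
      f"stdev ~ {statistics.stdev(samples):.3f}")
```

The appeal of the method is that the model is treated as a black box; the cost is that the sampling error shrinks only as the inverse square root of the number of model runs, which is what motivates the quasi-Monte Carlo and sensitivity-based alternatives mentioned above.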
In most decisions that engineering simulation supports, the use is novel (research and new product development), therefore the validated model will be developed concurrently with (in the case of product development) or as a result of (in the case of research) the broader effort in question. The suggested hierarchical validation framework is similar to the ’test driven development’ methodologies in software engineering and the ’knowledge driven product development’ championed in the GAO’s reports on government acquisition efforts. Small component (unit) tests followed by system integration tests and then full complex system tests. When the details of ’model validation’ are understood, it is clear that rather than replacing testing, simulation truly serves to organize test designs and optimize test efforts. The conclusions are explicit (emphasis mine): The NSF SBES report (Oden et al. 2006 [4]) stresses the need for new developments in V&V and UQ in order to increase the reliability and utility of the simulation methods at a profound level in the future. A report on European computational science (ESF 2007 [5]) concludes that “without validation, computational data are not credible, and hence, are useless.” The aforementioned National Research Council report (2008) on integrated computational materials engineering (ICME) states that, “Sensitivity studies, understanding of real world uncertainties and experimental validation are key to gaining acceptance for and value from ICME tools that are less than 100 percent accurate.” A clear recommendation was reached by a recent study on Applied Mathematics by the U.S. 
Department of Energy (Brown 2008 [6]) to “significantly advance the theory and tools for quantifying the effects of uncertainty and numerical simulation error on predictions using complex models and when fitting complex models to observations.” ] WTEC Panel Report on International Assessment of Research and Development in Simulation-Based Engineering and Science , 2009, [2] Roache, P.J. 1998. Verification and validation in computational science and engineering. Albuquerque,: Hermosa Publishers. [3] AIAA Guide for the Verification and Validation of Computational Fluid Dynamics Simulations, Reston, VA, AIAA. AIAA-G-077-1998. [4] Oden, J.T., T. Belytschko, T.J.R. Hughes, C. Johnson, D. Keyes, A. Laub, L. Petzold, D. Srolovitz, and S. Yip. 2006. Revolutionizing engineering science through simulation: A report of the National Science blue ribbon panel on simulation-based engineering science. Arlington: National Science Foundation. Available online [5] European Computational Science Forum of the European Science Foundation (ESF). 2007. The Forward Look Initiative. European computational science: The Lincei Initiative: From computers to scientific excellence. Information available online. [6] Brown, D.L. (chair). 2008. Applied mathematics at the U.S. Department of Energy: Past, present and a view to the future. May, 2008. Concepts of Model Verification and Validation has a glossary that defines most of the relevant terms. 19 comments: 1. A good account of the importance of experimental validation re:climate change 2. In an age of spreading pseudoscience and anti-rationalism, it behooves those of us who believe in the good of science and engineering to be above reproach whenever possible. Public confidence is further eroded with every error we make. 
Although many of society’s problems can be solved with a simple change of values, major issues such as radioactive waste disposal and environmental modeling require technological solutions that necessarily involve computational physics. As Robert Laughlin noted in this magazine, “there is a serious danger of this power [of simulations] being misused, either by accident or through deliberate deception.” Our intellectual and moral traditions will be served well by conscientious attention to verification of codes, verification of calculations, and validation, including the attention given to building new codes or modifying existing codes with specific features that enable these activities. --P.J. Roache, "Building PDE Codes to be Verifiable and Validatable", Computing in Science and Engineering, 2004 3. Polynomial regression with derivative information in nuclear reactor uncertainty quantification 4. Software Failure Probability Quantification for System Risk Assessment 5. Description of the Sandia Validation Metrics Project ABSTRACT: This report describes the underlying principles and goals of the Sandia ASCI Verification and Validation Program Validation Metrics Project. It also gives a technical description of two case studies, one in structural dynamics and the other in thermomoechanics, that serve to focus the technical work of the project in Fiscal Year 2001. 6. The reference list linked to the AIAA standard on VV&UQ, here's the ASME's similar standard ($45), an overview is available as well (free). 7. Dan Hughes has written a nice white-paper on Discussions of Application of Verification, Validation, and Quality Assurance Procedures to Climate Modeling Software. 8. NC State is putting on Verification and Validation for Nuclear Systems Analysis: Workshop II, May 24-28, 2010 9. 
Interesting-sounding postdoc position at NREL: The successful candidate will perform research on the use of modern statistical techniques and participate in work with NREL scientists to analyze and control uncertainty and sensitivity in large geospatial and temporal data sets, laboratory data, mathematical models and simulations related to renewable energy and energy efficiency research and deployment. Required: a strong background in Uncertainty Quantification and Sensitivity Analysis, plus experience in Validation and Verification and in applying statistical analysis to scientific, engineering, energy resource, and/or atmospheric science data sets.

10. NIST is putting on a conference in Boulder on Uncertainty Quantification, 1-4 Aug 2011: Computing has become an indispensable component of modern science and engineering research. As has been repeatedly observed and documented, processing speed measured in floating point operations per second has experienced exponential growth for several decades. These hardware efficiencies have been accompanied by innovations in mathematical algorithms, numerical software, and programming tools. The result is that, by any measure, the modern computer is many orders of magnitude more powerful than its early predecessors, capable of simulating physical problems of unprecedented scale and complexity. Given the success of scientific computation as a research tool, it is natural that scientists, engineers, and policy makers strive to harness this immense potential by using computational models for critical decision-making. Increasingly, computers are being used to supplement experiments, to prototype engineering systems, or to predict the safety and reliability of high-consequence systems. Such use inevitably leads one to question "How good are these simulations? Would you bet your life on them?" Unfortunately, most computational scientists today are ill equipped to address such important questions with the same scientific rigor that is routine in experimental science.

11.
"If error is corrected whenever it is recognized, the path of error is the path of truth." —Hans Reichenbach
From the banner on the PECOS site. They are also hiring: ...areas including Computational Engineering and Sciences, Turbulence Modeling and Optimal Experimental Design.

12. Interesting new journal: International Journal for Uncertainty Quantification. It might turn out to be pretty good; the associate editor is Dongbin Xiu, who has some pretty good recent papers on using polynomial chaos expansions to do UQ.

13. The googlebot found this funding summary document for some of the DOE's nuclear weapons stockpile stewardship stuff; this paragraph on the 'predictive capabilities framework' is interesting: The Predictive Capability Framework (PCF) is an integrated roadmap that reflects the responsive scientific capabilities needed to deliver a predictive capability to the nuclear security enterprise. Participants of the PCF include Defense Science, ASC, Engineering, DSW Research & Development, and Inertial Confinement Fusion Ignition and High Yield Campaign. The PCF identifies a list of long-term integrated goals and links the progress in the predictive capabilities to the progress in the five enabling capabilities, four of which (theory/model capabilities, code/algorithm capabilities, computational facilities, and Quantification of Margins and Uncertainties (QMU) and Verification & Validation (V&V) capabilities) are developed by the ASC program. With the pending completion of major new experimental facilities and entry into peta-scale high performance computing, the PCF represents a new phase of science-based stockpile stewardship – one better aligned to the challenges of an aging and changing stockpile.
Even though they can't do full-up testing anymore, experimental work is still a huge part of their understanding of the useful predictive capabilities of their (huge) simulations.

14. Updated ASME V&V guide published.

15.
Regulatory Models and the Environment: Practice, Pitfalls, and Prospects
Modeling is a difficult enterprise even outside of the potentially adversarial regulatory environment. The demands grow when the regulatory requirements for accountability, transparency, public accessibility, and technical rigor are added to the challenges. Moreover, models cannot be validated (declared true) but instead should be evaluated with regard to their suitability as tools to address a specific question. The committee concluded that these characteristics make evaluation of a regulatory model more complex than simply comparing measurement data with model results. Evaluation also must balance the need for a model to be accurate with the need for a model to be reproducible, transparent, and useful for the regulatory decision at hand. Meeting these needs requires model evaluation to be applied over the “life cycle” of a regulatory model with an approach that includes different forms of peer review, uncertainty analysis, and extrapolation methods than for non-regulatory models.
Their choice of terminology is unfortunate: "validated" generally doesn't mean "true"; it means understanding the degree to which a model is a suitable representation of the real world.

16. I mentioned in a previous comment that D. Xiu was the associate editor of a new Uncertainty Quantification journal; he has written a nice (freely accessible) review article on Fast Numerical Methods for Stochastic Computations, which is a good intro to the state of the art.

17. From NA digest: Postdoc Position in V&V/UQ at Sandia National Laboratories
The Computer Science Research Institute (CSRI) at Sandia National Laboratories in Albuquerque, NM, is seeking outstanding applicants for a postdoctoral position in any of the broad areas of Verification and Validation (V&V), Uncertainty Quantification (UQ), data assimilation, optimization, and inverse problems.
The CSRI brings university faculty, students, and others together to pursue collaborative research in a highly multidisciplinary, team-based environment. The CSRI is well known for producing the DAKOTA and Trilinos multi-function software packages. For more information, see http://www.cs.sandia.gov/CSRI/ and http://www.cs.sandia.gov/dakota/. To apply, go to http://www.sandia.gov, click on Employment, click on Career Opportunities, click on Search Job Postings, enter the job opening number (64709) in the Keywords field of the Basic Job Search, press Search, and scroll down to see the posting.

18. The new International Journal for Uncertainty Quantification (mentioned previously here and here) is now online and accepting submissions.

19. An unfortunate common thread that I’ve noticed in many of the V&V reports I’ve read is that they seem to think Karl Popper had the last word on scientific induction! Well, he didn't (hat tip Briggs).
Frequency Converter Basics

To realize high efficiency, good manageability, and energy savings in industrial induction motor applications, it is necessary to adopt controllable frequency converter systems. A modern frequency converter system is an AC motor fed by a static frequency converter. Up-to-date frequency converters work well for AC motor applications and are easy to install. One important issue, however, is caused by the non-sinusoidal output voltage. This factor leads to a number of undesirable problems: increased induction motor losses, noise and vibration, detrimental impact on the motor insulation system, and bearing failure are examples of frequency converter related issues. Increased motor losses mean a de-rating of the motor output power to prevent overheating. Laboratory measurements show that the temperature rise may be 40% higher with a frequency converter than with an ordinary power supply. Continuous research and enhancement of frequency converters have solved many of these issues. Unfortunately, it seems that solving one problem has accentuated another: reducing the motor and frequency converter losses tends to increase the detrimental impact on the insulation. The motor manufacturers are, of course, aware of this. New motor designs (inverter-resistant motors) are beginning to appear on the market. Better stator winding insulation and other structural improvements ensure induction motors that are better adapted for frequency converter applications. One of the most serious problems with the induction motor has been the difficulty of adapting it to speed adjustment. The synchronous speed of an AC motor is determined by the following equation.
n_s = 120 * f / p

where:
n_s = synchronous speed [rpm]
f = power grid frequency [Hz]
p = number of poles

The only way to adjust the speed, for a given pole number, is to change the frequency.

The basic principle

In theory, the basic idea is simple: the process of transforming the stable power-line frequency into a variable frequency is done in two steps:
1. The AC power source is rectified into a DC voltage.
2. The DC voltage is chopped up into an AC voltage of the desired frequency.
A frequency converter thus basically consists of three blocks: the rectifier, the DC link, and the inverter.

Different types of frequency converters

PWM Voltage Source Inverter (VSI)

PWM (Pulse Width Modulation) converters are widely applied in the frequency converter industry. They are available from several hundred watts up to megawatts. A PWM converter does not have to match the load exactly; it need only ensure that the load does not draw more current than the PWM converter is rated for. It is quite possible to run a 20 kW induction motor from a 100 kW PWM converter. This is a great benefit that makes operation easier for the application. Nowadays, PWM frequency converters use the Insulated Gate Bipolar Transistor (IGBT). Modern PWM frequency converters perform very well, and are not far behind designs using a sinusoidal power supply, at least not in the power range up to 100 kW or so.

Current Source Inverter (CSI)

The current source inverter is a rough and rather simple design compared with the PWM converter. It uses simple thyristors (SCRs) in the power circuits, which makes it much cheaper. It also has the benefit of being very reliable, and the large inductors in the DC link make the design short-circuit proof. It is bulkier than the PWM converter. Earlier, the current source inverter was the best choice for big loads. A disadvantage of the current source inverter is the need to match it to the load: the frequency converter has to be designed for the particular induction motor used.
In fact, the motor itself is part of the inverter circuit. The current source inverter supplies the induction motor with a square-shaped current. At low speeds, the motor produces a cogging torque. This type of frequency converter also generates more noise on the supply source than the PWM converter, so filtering is necessary. Heavy voltage transients in the output voltage are an additional disadvantage of the current source inverter; in the worst cases the transients can reach nearly twice the nominal voltage. There is also a risk that the winding insulation will wear out prematurely if this frequency converter is used. This effect is most serious when the load does not match the frequency converter properly, which can happen when running at part load. This kind of frequency converter is steadily losing popularity.

Flux Vector Control (FVC)

A flux vector control converter is a more sophisticated type of frequency converter, used in applications with extreme control demands. In paper mills, for example, it is necessary to control speed and stretching forces very precisely. An FVC frequency converter always has some kind of feedback loop. This kind of frequency converter is generally of minor interest in pump applications: it is expensive, and its benefits cannot be taken advantage of there.

Effect on the motor

An induction motor works best when supplied with a pure sinusoidal voltage source, which is mostly the case when it is connected to a robust utility power source. When an induction motor is connected to a frequency converter, it is supplied with a non-sinusoidal voltage, more like a chopped square voltage. If we supply a 3-phase induction motor with a symmetrical 3-phase square voltage, all the harmonics that are multiples of three, as well as the even-numbered ones, are eliminated by symmetry. Still left are the orders 5 and 7, 11 and 13, 17 and 19, 23 and 25, and so on.
For each pair of harmonics, the lower order is reverse rotating and the higher order is forward rotating. The speed of the induction motor is determined by the fundamental (order 1) because of its strong dominance. Now what happens to the harmonics? From the point of view of the harmonics, the motor appears to have its rotor blocked, which means that the slip is approximately 1 for the harmonics. They provide no useful work; the result is mostly rotor losses and extra heating. In our application in particular, this is a serious outcome. With modern technology, however, it is possible to eliminate much of the harmonic content in the motor current, thereby reducing the extra losses.

Frequency converters before

The earliest frequency converters often used a simple square voltage to supply the induction motor. They caused heating problems, and the motors ran with a characteristic noise caused by torque ripple. Much better performance was achieved simply by eliminating the fifth and the seventh harmonics, which was done through some extra switching of the voltage signal.

Frequency converters today

Nowadays, the technique is more sophisticated and most of the disadvantages are history. The development of fast power semiconductors and the microprocessor has made it possible to tailor the switching pattern in such a way that most of the harmful harmonics are eliminated. Switching frequencies up to 20 kHz are available for frequency converters in the medium-power range (up to some tens of kW). The motor current with this type of frequency converter is nearly sine-shaped. At a high switching frequency, motor losses are kept low, but losses in the frequency converter increase. The total losses become higher at excessively high switching frequencies.
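The harmonic bookkeeping described above (even orders and multiples of three cancel by symmetry; in each remaining pair the lower order rotates against the fundamental and the higher order with it) can be sketched in a few lines of Python. The `n % 6` rotation rule is an inference from the pairing in the text (5/7, 11/13, ...), not something the article states explicitly.

```python
def surviving_harmonics(max_order):
    """Harmonic orders present in a symmetric 3-phase square voltage.

    Even orders and multiples of three cancel by symmetry, leaving
    the pairs 5/7, 11/13, 17/19, 23/25, and so on.
    """
    return [n for n in range(2, max_order + 1)
            if n % 2 != 0 and n % 3 != 0]

def rotation(n):
    """Direction of the rotating field for harmonic order n.

    In each surviving pair, the lower order (6k - 1) rotates against
    the fundamental and the higher order (6k + 1) rotates with it.
    """
    return "reverse" if n % 6 == 5 else "forward"

orders = surviving_harmonics(25)
print(orders)                                   # [5, 7, 11, 13, 17, 19, 23, 25]
print([(n, rotation(n)) for n in orders[:4]])   # 5 reverse, 7 forward, ...
```

This matches the article's list of surviving orders; with slip near 1 for all of them, each pair contributes only rotor losses and heat.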
Some basic motor theory

The torque production in an induction motor may be expressed as:

T = V * τ * B [Nm]

where:
V = active rotor volume [m³]
τ = current per meter of stator bore circumference
B = flux density in the air gap

B is proportional to E / ω = E / (2 * π * f), where ω is the angular frequency of the stator voltage and E is the induced stator voltage.

To obtain the best performance at various speeds, it becomes necessary to maintain an appropriate magnetization level in the motor at each speed. A range of torque characteristics is shown in the following figure. For a constant-torque load, the V/f ratio must be constant. For a square-torque load, a constant V/f ratio results in excessively high magnetization at lower speeds. This generates unnecessarily high iron losses and resistive (I²R) losses. It is better to use a square V/f ratio; the iron losses and I²R losses are then reduced to a level more acceptable for the actual load torque. If we look at the figure, we find that the voltage has reached its maximum and cannot be increased above the 50 Hz base frequency. The range above the base frequency is called the field-weakening range. A consequence of this is that it is no longer possible to maintain the necessary torque without increasing the current. This results in heating problems of the same kind as running at undervoltage from a sinusoidal power grid. The frequency converter’s rated current will likely be exceeded.

Running in the field-weakening range

Sometimes there is a temptation to run the pump at frequencies above the commercial power grid frequency in order to reach a duty point that would otherwise be impossible. Doing so calls for extra awareness. The shaft power for a pump increases with the cube of speed: an over speed of 10% requires 33% more output power. Roughly speaking, we can expect the temperature rise to increase by about 75%. There is, nonetheless, a limit to what we can squeeze out of the motor at over speed.
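The over-speed arithmetic above (shaft power grows with the cube of speed, so 10% over speed needs roughly 33% more power) and the article's derating rule for extra converter losses (temperature rise assumed to vary with the square of shaft power) are easy to verify numerically. A minimal check, with function names chosen here for illustration:

```python
def overspeed_power_increase(overspeed_fraction):
    """Relative shaft-power increase for a pump run above rated speed.

    Pump shaft power scales with the cube of speed (affinity laws),
    so 10% over speed costs (1.1**3 - 1) = 33.1% more power.
    """
    return (1 + overspeed_fraction) ** 3 - 1

def derated_power(rated_kw, extra_loss_fraction):
    """Shaft power that keeps the original temperature rise when the
    converter adds extra losses, using the article's conservative
    assumption that temperature rise varies with the square of
    shaft power: P_reduced = sqrt(1 / (1 + extra_losses)) * P_rated.
    """
    return (1 / (1 + extra_loss_fraction)) ** 0.5 * rated_kw

print(round(overspeed_power_increase(0.10) * 100))   # 33 (% more power)
print(round(derated_power(300, 0.30)))               # 263 (kW)
```

The second result reproduces the article's 300 kW pump example: with 30% extra losses, the shaft power must come down to about 263 kW to hold the 80°C temperature rise.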
The maximum torque of the motor drops as a function of 1/f in the field-weakening range. Obviously the motor will stall if the frequency converter cannot supply it with a voltage that corresponds to what the torque demands. In many cases, the motor is already run at maximum capacity from a sinusoidal power grid and any extra heating cannot be tolerated. If such a motor is powered from a frequency converter of some kind, it most probably must be run at lower output power in order to avoid overheating. It is not unusual for a frequency converter for big pumps above 300 kW to add extra motor losses of 25-30%. In the upper power range, only a few frequency converters have a high switching frequency; 500 to 1000 Hz is usual for the former generation of frequency converters. To compensate for the extra losses, it is necessary to reduce the output power. We recommend a general derating of 10-15% for large pumps. Since the frequency converter pollutes the supply grid with harmonics, an input filter is sometimes prescribed by the power company. This filter decreases the available voltage by typically 5-10%, so the motor runs at 90-95% of nominal voltage. The consequence is additional heating, and derating might be necessary. Assume that the output power for the actual pump motor is 300 kW at 50 Hz and the temperature rise is 80°C on a sinusoidal power grid. Extra losses of 30% will result in a motor that is 30% warmer. A conservative assumption is that the temperature rise varies with the square of shaft power. In order not to exceed 80°C, we have to reduce the shaft power to:

P_reduced = √(1/1.3) * 300 kW ≈ 263 kW

The reduction can be achieved either by reducing the impeller diameter or by slowing down.

Frequency converter losses

When the total efficiency of a frequency converter system is determined, the internal losses of the frequency converters must be included.
These frequency converter losses are not constant and not easy to determine. They consist of a constant part and a load-dependent part. Constant losses: cooling losses (cooling fan), losses in the electronic circuits, and so on. Load-dependent losses: switching losses and lead losses in the power semiconductors. The following figure shows the frequency converter efficiency as a function of frequency at a cubic load for units rated at 45, 90, and 260 kW. The curves are representative of frequency converters in the power range of 50-300 kW, with a switching frequency of about 3 kHz and second-generation IGBTs.

Effects on motor insulation

The output voltages from modern frequency converters have a very short voltage rise time; dU/dt = 5000 V/μs is a common value. Such steep voltage slopes cause undue stress in the insulation materials of the motor winding. With short rise times, the voltage in the stator winding is not uniformly distributed. With a sinusoidal power supply, the turn-to-turn voltage in a motor winding is normally equally distributed. With a frequency converter, on the other hand, up to 80% of the voltage can drop across the first and second turns. Since the insulation between the wires constitutes a weak point, this may prove hazardous for the motor. A short rise time also causes voltage reflection in the motor cable. In the worst case, this phenomenon doubles the voltage across the motor terminals: a motor fed from a 690-volt frequency converter might be exposed to up to 1900 volts between phases. The voltage amplitude depends on the length of the motor cable and the rise time. With very short rise times, full reflection occurs in a cable 10 to 20 meters in length. To ensure function and ample motor lifetime, it is absolutely necessary that the winding be adapted for use with a frequency converter. Motors for voltages above 500 volts must have some form of reinforced insulation.
The stator winding must be impregnated with a resin that ensures an insulation free of bubbles or cavities. Glow discharges often start around cavities, and this phenomenon eventually destroys the insulation. There are ways to protect a motor: over and above a reinforced insulation system, it might be necessary to insert a filter between the frequency converter and the motor. Such filters are available from most well-known frequency converter suppliers. A filter typically slows the voltage rise time from dU/dt = 5000 V/μs to 500-600 V/μs.

Bearing failure

Breakdown of rotating machinery can often be traced to bearing failure. In addition to excessive heating, insufficient lubrication, or metal fatigue, electric current through the bearings may be the cause behind many mysterious bearing breakdowns, especially with large motors. This phenomenon is generally caused by asymmetry in the magnetic circuit, which induces a small voltage in the stator structure, or by a zero-sequence current. If the potential between the stator structure and the shaft becomes high enough, a discharge takes place through the bearing. Small electric discharges between the rolling elements and the bearing raceway eventually damage the bearing. The use of frequency converters increases the probability of this type of bearing failure: the switching technique of a modern frequency converter causes a zero-sequence current that, under certain circumstances, finds its way through the bearings. The easiest cure is to put an obstacle in the current's path; the usual method is to use a bearing with an insulating coating on the outer ring. Using a frequency converter is not problem-free. Many questions must be considered during the design work. Will it be necessary, for example, to limit the available shaft power to prevent excessive heating? It may prove necessary to run at lower output power to avoid this issue.
Will the induction motor insulation withstand the effects of the inverter? Is filtering necessary? Modern, efficient inverters have a detrimental impact on the insulation because of their high switching frequency and short voltage rise time. What maximum cable length can be used without producing full voltage reflection? The voltage amplitude depends on both the cable length and the rise time; with very short rise times, full reflection occurs in cables 10 to 20 meters long. Might it be necessary to use insulated bearings to prevent a zero-sequence current from finding its way into the bearings? Only when all these questions are settled can we make the right decisions about the use of a frequency converter.
• fix clang15 warnings
• fix latex escape note
• fix version info from new.md note
• reset license from GPL-2 to the original GPL (>=2)
• fixed typo in C code causing i386 error in rmutil::int()
• added Author information of TOMS614/INTHP
• fixed void / SEXP registration issue (LTO / valgrind additional issues)
• fixed 0-sized array warning
• Fixed call_R Issue https://github.com/swihart/rmutil/issues/11
• Updated links to https: where appropriate
• Fixed Issue https://github.com/swihart/rmutil/issues/10 – updated BetaBinomial help documentation to relate m and s to the standard/traditional alpha and beta parameterization. Thanks to @sjenkins20 on github for the suggestion!
• Fixed Issue https://github.com/swihart/rmutil/issues/9 – Condition has length > 1 and only the first element will be used warning in int. Thank you @ellessenne on github for chasing this one
• In response to K. Hornik email regarding the stringsAsFactors=FALSE default in upcoming R versions, I did the following 3 edits:
  • add stringsAsFactors=TRUE to the v <- data.frame() call in the example of tvctomat.Rd
  • add stringsAsFactors=TRUE to the data.frame() call in the function tvctomat() (around line 1160 in objectrm.r): oldtvcov$tvcov <- data.frame(oldtvcov$tvcov, tvcv, stringsAsFactors = TRUE)
  • turn tvcv <- as.data.frame(as.vector(t(as.matrix(tvcov)))) into tvcv <- data.frame(as.character(as.vector(t(as.matrix(tvcov)))), stringsAsFactors=TRUE) in the function tvctomat() (around line 937 of objectrm.r)
• Issue https://github.com/swihart/rmutil/issues/8.
• Implemented better fixes for https://github.com/swihart/rmutil/issues/5 (Thanks @hennerw!) where values below the support will give 0 and values above support give 1 for pbetabinom and dbetabinom, as the last fix in 1.1.2 could give erroneous results. Let me know if other functions need this functionality – I’d be happy to update.
• Removed local generic weights as per personal communication from K Hornik
• Implemented https://github.com/swihart/rmutil/issues/5 (Thanks @hennerw!) where values below the support will give 0 and values above support give 1 for pbetabinom and dbetabinom. Let me know if other functions need this functionality – I’d be happy to update.
• fixed an _R_CHECK_LENGTH_1_CONDITION_=true error
• fixed _R_S3_METHOD_LOOKUP_BASEENV_AFTER_GLOBALENV_=true errors.
• changed Example in DataMethods.Rd to glm(y~x+z, data=as.data.frame(reps))

Major changes

Passed CRAN checks and is back on CRAN. Please see github page https://github.com/swihart/rmutil to see the changes required to pass CRAN checks. I’ll try to document the changes henceforth here and at https://github.com/swihart/rmutil/issues.

30.11.10 (growth) elliptic: twins option added for model using covfn with covariance matrix diagonal constant
28.11.10 elliptic: added an error check when covfn used
15.2.10 (rmutil) changed s<0 to s <=0 in qlaplace & rlaplace (thanks to Peter Ehlers)
18.11.09 (repeated) removed a redundant line in gausscop.r that now produced an error (thanks to Patrick Lindsey)
7.4.09 removed extra } in biv.binom.Rd (thanks to Christopher Marcum)
20.10.08 discrete q functions: changed trunc to round (thanks to Frederic Gosselin)
3.7.08 (gnlm) fit.dist: corrected check for negative values with Laplace, Cauchy, and Student t plus error in counts (f -> ni) for Laplace (thanks to Michael Anyadike-Danes)
24.10.07 fnenvir: changed way “=” is handled in gsub() because of error since R2.5.0
8.10.07 (event, gnlm, growth, repeated) changed typsiz to typsize in nlm() throughout
11.7.07 romberg.c: added missing R.h (thanks to Olivia Lau)
8.2.07 print out name of response variable in elliptic, bnlr, gnlr, gnlr3, gnlmm, gnlmm3, and fmr (thanks to Patrick Lindsey)
qsimplex: corrected search interval updates
27.9.06 qhjorth, qinvgauss, qginvgauss, qboxcox: changed lower limit of search from 0.001 to .Machine$double.xmin
(thanks to Franco Mendolia)
8.12.05 (rmutil, repeated, event) minor modifications to Fortran for F95 compatibility (thanks to Jan de Leeuw)
22.11.05 finterp, objectrm: added na.action=NULL in calls to model.frame (default is na.omit !!!!!) (thanks to Patrick Lindsey)
30.9.05 elliptic: corrected calculation of number of parameters for builtin logistic function (thanks to Tom Van Dooren)
1.8.05 qbetabinom: changed trunc() to round() (thanks to Elias Krainski)
mprofile, iprofile: added check that times are available (thanks to Patrick Lindsey)
6.7.05 ksurvb, ksurvg, kcountb: added break; after default: in C code to satisfy some compilers (thanks to Patrick Lindsey)
30.6.05 finterp: correction for change in functioning of match()
gnlm family: added coef.gnlm() and vcov.gnlm() (thanks to Bendix Carstensen)
25.4.05 (stable) rstable: eliminate production of NAs (thanks to Zhu Wang)
26.1.05 finterp: fixed a bug when >= or <= is used in a formula (thanks to Eliot McIntire)
1.11.04 gnlmm3: generalized nonlinear mixed models for three-parameter distributions
28.9.04 catmiss: removed codes() from example in help (thanks to Kjetil Brinchmann)
21.9.04 finterp: fixed if test to remove occasional warning (thanks to Ken Knoblauch)
17.9.04 gnlmix: removed erroneous printing that distribution is censored for binomial (thanks to Ken Knoblauch)
28.7.04 gnlmix, hnlmix: fixed printing of results when nonlinear function contains a linear part (thanks to Ken Knoblauch)
2.7.04 tvctomat: fixed warning message on removing tm (thanks to Patrick Lindsey)
1.6.04 glmm: changed print.summary.glmm to work under R1.9 (thanks to Spencer Graves)
5.4.04 fnenvir: fixed obscure error when linear used in a function (thanks to Ken Knoblauch)
help: corrected truncation of usage formula for certain functions (thanks to Patrick Lindsey)
9.1.04 fitdist: fixed typo that stopped geometric distribution from working
6.1.04 ordglm: changed tapply to capply because of change to former (thanks to Andrew Criswell)
9.12.03 corgram: start abscissa at 0 for PACF
fnenvir: fixed grep for checking redundant parameters
bnlr, fmr, gnlmm, gnlr, gnlr3, int, nlr, nordr, read.list, stable.mode: fixed if() on vector
14.11.03 readdna, survkit, glmm, gnlmm, objectrm, readrm: removed obsolete codes() function (thanks to Ken Knoblauch)
carma: give error when ccov=~1 used (thanks to Patrick Lindsey)
21.8.03 elliptic: corrected print function when the dispersion function depends on the location function (thanks to Gabrielle Kelly)
31.7.03 hnlmix: corrected options for dnbinom (thanks to Jagat Sheth)
30.6.03 dftorep: corrected check for ordered times to allow for two levels of nesting
25.5.03 ordglm: added a data argument (thanks to Kosuke Imai)
13.5.03 ordglm: corrected order of printing of standard errors (thanks to Kosuke Imai)
25.4.03 gnlr, gnlr3, gnlmm, fmr, nordr: changed test for environment because the value returned by parent.frame() has a class in R1.7
22.4.03 cphidden: a function to locate a changepoint in continuous time using a two-state hidden Markov model
9.4.03 biv.binom: corrected degrees of freedom printed (thanks to Goran Brostrom)
12.2.03 restovec: fixed handling of delta when the response is a list
16.1.03 kalsurv: fixed typo in print.kalsurv (thanks to Anthony Gichangi)
4.12.02 int: changed eps
2.12.02 fit.dist: added Laplace distribution
1.12.02 glmm: added error message if (codes of) nesting variable not consecutively numbered (thanks to Renaud Lancelot)
27.11.02 fit.dist: changed Weibull parametrisation so that mu refers to y and not to y^alpha
22.11.02 fit.dist: added Cauchy and Student t distributions; use (log) density functions instead of writing formulae
18.11.02 fit.dist: added beta-binomial distribution
16.11.02 fit.dist: corrected error in calculation of log likelihood when censor=T
14.11.02 fit.dist: corrected error in calculation of fitted values for zeta distribution
31.10.02 int2: added default limits (thanks to Patrick Lindsey)
8.9.02 (repeated) gar:
corrected recursive fitted values when binomial (thanks to Patrick Lindsey) 4.9.02 gausscop: exponential distribution now works (thanks to Patrick Lindsey) 30.8.02 restovec: modified checks for nesting in lists and allow nesting to be supplied separately when a list is given (thanks to Patrick Lindsey) gausscop: for positive-valued distributions, returned predicted values without transforming by log link function (thanks to Patrick Lindsey) 18.7.02 ehr: addition checks of data supplied to this suite of functions rs3: fixed typo marg.hom: added checks on data 16.7.02 chidden.r, hidden.r: corrected negative binomial check so that 0 responses are allowed (thanks to Ben Cooper) 10.7.02 modified man pages for changed arguments to rgamma function rmutil: created dist.h file 11.6.02 hnlmix: corrected AIC for penalty constraint (was too large by one) changed calculation of multiplicative random effects 23.5.02 rmutil: added [pdqr]twosidedpower distribution added log option to all density (d) functions gar, gnlr, gnlmix, gnlmm, hnlmix: added two-sided power distribution gnlr: user-supplied likelihood function works again (thanks to Martin Liermann) finterp, fnenvir: added option to allow any response to be a covariate 9.5.02 hnlmix: recursive fitted values available ordglm: fixed error that PearsRes not defined when individual data are supplied (thanks to Kamal Desai) 6.5.02 gnlmix, hnlmix: added inverse gamma mixture distribution gnlmix: handles censored data gnlmm: finds nesting variable when repeated environment is specified 5.5.02 finterp: modified so that as.factor(times) works 30.4.02 hnlmix: nonlinear random effects models using a modified Lee and Nelder h-likelihood gnlr: modified check on location parameters for Levy distribution added check that double Poisson, multiplicative Poisson, gamma count, and logarithmic data are not censored version 0.9 28.4.02 gnlmix: corrected typo in negative binomial distribution 23.4.02 carma, chidden, elliptic, hidden, kalseries: 
give error if censored data supplied (thanks to Troels Ring) 22.4.02 elliptic: when two levels of nesting, calculate correctly first recursive fitted value in each cluster (was plotted correctly using iprofile) plus corresponding simplification of plot.iprofile (thanks to Troels Ring) 17.4.02 (all libraries) gnlmix: corrected typo in inverse Gauss mixing distribution print model methods: added option not to print correlations 15.3.02 restovec, tcctomat, tvctomat: added optional description slot to response, tccov, and tvcov objects 13.3.02 glmm: convert repeated object to dataframe if supplied as data tcctomat, tvctomat: corrected to detect contrast options when dataframe=F 12.3.02 tvctomat: corrected problem for list of factors when dataframe=F (thanks to Patrick Lindsey) finterp.default: give error if members of dataframe named using $ (thanks to Christof Bigler) 28.2.02 chidden, hidden: added check for correct number of initial estimates when list of functions supplied (thanks to Patrick Lindsey) 22.2.02 corgram: added option for PACF 19.2.02 fmr: modified some discrete distributions to avoid overflow with large counts 17.2.02 elliptic: added as.double for y in call to C code because of change in read.table 12.2.02 finterp: give error if offset used in W&R formula 31.1.02 %^%: power of a matrix elliptic: corrected problem when common parameters in mean and variance functions 20.1.02 plot.repeated: added selection of profiles to be plotted by using ccov 14.1.02 gar: added absolute value arch (names of others changed: additive -> square, multiplicative -> exponential) volatility method for extracting values of nonconstant dispersion parameter Makefiles: removed . 
for SHLIB_EXT for R1.4.0 dist.c, kcountb.c, romberg.c, stable.c: changed malloc to R_alloc 10.1.02 (dhpqr)ggamma, fmr, gausscop, gnlmix, gnlmm, gnlr, gnlr3, hgamma: changed argument of (dpqr)gamma for compatibility with R1.4.0 modified help to work with R1.4.0 18.12.01 contr.mean: provides correct labels on mean constraints (corrects contr.sum) 4.12.01 chidden, hidden: corrected printing out family parameter with AR when there is not one 28.11.01 qstable: corrected bug when tail<1 and skew=1 (thanks to Alec Stephenson) 23.11.01 corgram: handles NAs in the series 19.11.01 cprocess: fixed error in checking for list of events (thanks to Troels Ring) stablereg: changed alpha to allow parameter to be (0,2) instead of (1,2) 18.11.01 chidden: added time-discretized Poisson process 17.11.01 chidden, hidden: added Student t distribution changed Cauchy shape parameter to scale instead of scale^2 15.11.01 gar: added Student t distribution added ARCH models elliptic: when AR, take log of determinant returned by dpodi (thanks to Gabrielle Kelly) 13.11.01 elliptic: when series of independent observations, calculate covariance determinant as sum of log variances instead of log of product (thanks to Gabrielle Kelly) 8.11.01 cmcre: corrected problems when a covariate is used (thanks to Anthony Gichangi) 6.11.01 print.response: do not print mean if nominal (but not binary) or ordinal (thanks to Patrick Lindsey) 25.10.01 hidden.r: corrected check for fixed zeros in transition matrix relaxed check for rows of transition matrix summing to one chidden.r: relaxed check for rows of transition matrix summing to zero (all thanks to Patrick Lindsey) 24.10.01 restovec: weights can be logical 14.10.01 gar: fixed output printing when shape is a function of location parameter use dnbinom function changed negative binomial shape parameter to be same as in gnlr 10.10.01 carma, chidden, gar, hidden, kalcount, kalseries: check for two levels of nesting when serial dependence fitted 9.10.01 
kalseries: corrected error when torder used with tvcov 8.10.01 hidden, chidden: added observed AR(1) gnlr, gnlmm, gnlmix: changed parametrization of the shape parameter for the beta distribution (thanks to Goran Arnoldsson) binnest: duplicate variables in Fortran call model functions using envir: check that response specified is one in envir when only one present 3.10.01 plevy, qlevy: use pnorm and qnorm instead of integrating 26.9.01 elliptic: added second form of asymmetric multivariate Laplace distribution with constant asymmetry parameter 25.9.01 elliptic: added asymmetric multivariate Laplace distribution 24.9.01 carma.r: removed unnecessary check that envir is a repeated object (thanks to Troels Ring) 11.9.01 fit.dist: added checks that grouped frequency data are supplied 10.9.01 kalsurv: corrected output errors when environment is supplied gar: use log option in dbinom, dpois kalcount: set first recursive prediction in series to marginal prediction 6.9.01 gar: added loglog link for binomial data (corrected cloglog which was, in fact, loglog) 20.8.01 gnlmix: set undefined sh3 to NULL for one parameter distributions 1.8.01 chidden, gar, gnlr3, hidden: added skew Laplace distribution 27.7.01 corgram: improved presentation of correlogram 25.7.01 d,h,p,q,rskewlaplace: probability functions for the skew Laplace distribution 24.7.01 autointensity.r: plots autointensity function of a point process 12.7.01 plot.repeated: fixed error of unknown type when plotted time-varying covariate (thanks to Patrick Lindsey) carma: clearer error message when incorrect environment supplied 10.7.01 carma: will handle data objects with (one of) multivariate responses chidden, hidden: handle Jacobian correctly with (one of) multivariate responses 6.7.01 cprocess.r: recognizes data objects for events and not just for times 5.7.01 f2c.h included in library for toms614.c (missing in R1.3.0) 27.6.01 iprofile, mprofile: corrected links to other libraries for html help plot.cum.pergram: 
corrected confidence interval pergram: changed calculation of length for odd-lengthed series 22.6.01 gar.r: check that times are supplied and, if not, create if possible 19.6.01 fmr.r, gnlmm.r, gnlr.r, gnlr3.r: linear can be ~1 if mu not supplied 14.6.01 marg.hom.r: modified to handle factor variables with non-numeric levels 8.6.01 ordglm.r: corrected fitted values when weighted observations (thanks to Troels Ring) 31.5.01 elliptic: changed check on initial variance function estimates 16.5.01 print.response, print.tvcov, print.repeated: added option to print range of numbers of observations per individual instead of vector of numbers (thanks to Markus Jantti) dmultpois, etc: added additional check on dependence parameter 9.5.01 gar.r: corrected printout for generalized gamma parameter 26.4.01 changed F and T to FALSE and TRUE throughout read.rep: removed col.names option because of changes in read.table 22.4.01 glmm: corrected typo when dataframe used with no offset 20.4.01 finterp: detects functions when given as arguments of other functions 19.4.01 finterp: formulae can be written on several lines, as several instructions (e.g. 
to assign temporary variables) 11.4.01 dburr, pburr, qburr, hidden, chidden, gnlr3: changed parametrization of Burr distribution (thanks to Patrick Lindsey) 28.3.01 chidden, hidden: corrected vector length problem in check for ordered intercepts in ordinal models (thanks to Niko Speybroeck) several p and d functions: changed check to y>=0 (thanks to Patrick Lindsey) 22.3.01 glmm: works again with weights and/or offset (thanks to Luc Duchateau) gnlmix: changed to log dispersion for mixing distribution 21.3.01 int.c: corrected memory allocation problem GExtVal: changed man page to agree with functions (both thanks to Patrick Lindsey) 20.3.01 use log option in d and p functions for h functions 14.3.01 chidden, hidden: added further checks on ordering of intercepts for ordinal data 13.3.01 gnlmix: changed dispersion parameter for normal mixing distribution from standard deviation to variance delta: returns a vector instead of a matrix if only for one variable 11.3.01 gnlmix: correction to work with unbalanced data 9.3.01 gar, gnlr3: added power variance function Poisson distribution 8.3.01 covariates: added expand option dpvfpois, ppvfpois, qpvfpois, rpvfpois: functions for the overdispersed power variance function Poisson distribution kalcount: corrected for power variance function 7.3.01 plot.response: corrected indexing problem 1.3.01 kalcount, kalseries, kalsurv: removed constraints on family parameter 27.2.01 chidden, fmr, gar, gausscop, gnlmix, gnlmm, gnlr, gnlr3, hidden, kalseries, kalsurv, nlr: relaxed type checks for continuous and duration data 26.2.01 kalcount, kalseries, kalsurv: added two-parameter power variance family mixture including gamma and inverse Gauss mixtures (family) for serial dependence 23.2.01 response: if response is univariate, returns a vector instead of a matrix covariates: if only one covariate requested, return as a vector chidden, hidden: improved checks for ordered intercepts with ordinal response improved calculation of ordinal 
probabilities 22.2.01 plot(iprofile()): works for models from kalsurv (thanks to Jacob Bowers) 19.2.01 chidden.r, hidden.r: corrected error in calculating individual profiles when tvmu used (thanks to Jacob Bowers) ordinal data can be used with multinomial option (thanks to Patrick Lindsey) work with ordinal data with a list of different formulae (thanks to Niko Speybroeck) 31.1.01 glmm.r: works if response is a one-column matrix instead of a vector (thanks to Luc Duchateau) restovec: corrected manual so that arguments section appears (thanks to Patrick Lindsey) 30.1.01 finterp, fnenvir: further correction to handle decimal numbers (including scientific notation) correctly finterp: replaced gsub by all.vars 25.1.01 name of response can be a character string when environment is supplied (thanks to Patrick Lindsey) hidden, chidden: added description of pintercept to man page delta: works properly when name is supplied plot functions: use 6 different line types instead of 4 gausscop: corrected mean for gamma margin check that only one initial estimate when no dispersion function 18.1.01 transform.response: works when units is NULL hidden, chidden: reversed order of categories for proportional odds and continuation ratio replaced dqrcf with dqrsl nordr: bug fix to work with data objects 8.1.01 cutil.c, romberg.c, toms614.c: changed include for call_R (thanks to Dennis Murphy) 7.1.01 model fitting functions check for correct type of response dftorep, read.rep: modified to handle new “types” of responses dftorep: now handles two column binomial response ehr.r: rewrote to conform to other functions 4.1.01 restovec: option to add responses to an old response object and types of responses returned in list instead of as a class resptype: new method to return types of response variable(s) finterp.repeated: check that binomial and censored responses are not used as covariates 21.12.00 gar.r: corrected error in printing three-parameter distributions 19.12.00 finterp, fnenvir: 
methods for dataframes gnlm functions: environment can be a dataframe 18.12.00 changed check for existence of response when environment supplied bnlr, fmr, gnlmm, gnlr, gnlr3, nordr: fixed calculation of n for null function 17.12.00 various changes because of new R1.2.0 handling of formulae finterp: check for + or - before ( (change to R1.2.0) elliptic: removed check on tvcov, so can accept times and individuals 15.12.00 bnlr, fmr, gnlr, gnlr3, gnlmm, nordr: nonlinear formula need not contain covariates changes to cutil.c and coxre for R1.2.0 14.12.00 restovec, tcctomat, tvctomat: added slot for units of measurement dftorep, read.rep: id can be a factor variable carma, elliptic, gar, gausscop, kalcount, kalseries, kalsurv: test if name of response exists when envir contains multivariate responses stable: corrected man pages 3.12.00 gnlmix: generalized nonlinear models with one random parameter having arbitrary mixing distribution 30.11.00 fitdist.r: calculate log probabilities and likelihoods to avoid underflow (thanks to Goran Brostrom) int2: vectorized two-dimensional Romberg integration 28.11.00 added d, p, q, and r functions for Consul generalized Poisson distribution added PACKAGE option to .C and .Fortran calls bnlr.r: added cloglog and loglog links changed class of gnlr-type functions from gnlr to gnlm gar.c: corrected calculation of censored Burr, Pareto, power exponential distributions (thanks to Patrick Lindsey) 24.11.00 q and r functions: improved calculations and checks 23.11.00 bnlr, fmr, gnlmm, gnlr, gnlr3, nordr: nonlinear formulae can have a linear part qgweibull, qggamma, qgextval: corrected arguments to functions and docs (thanks to Patrick Lindsey) 21.11.00 fmobj: find objects referred to in a formula elliptic.r, fmr.r, gar.r, gausscop.r, gnlmm.r, gnlr.r, gnlr3.r: models with common parameters in several regression functions can be specified using formulae, not just functions elliptic.r, gar.r, gausscop.r, gnlmm.r, gnlr.r: models with shape as a 
function of location can be specified using formulae, not just functions 20.11.00 finterp.r: formulae can have common parameters and/or depend on functions 16.11.00 hidden, chidden: added recursive predicted values added q and r functions for distributions in rmutil 14.11.00 kalseries: corrected error in inverse Gaussian distribution (thanks to Patrick Lindsey) bnlr.r: added stable and mixture links gnlr, gnlmm, gar: added beta and simplex distributions rmutil: added psimplex and dsimplex 9.11.00 improved checking for multivariate response and choosing one response when several present in a data object 6.11.00 fmr.r, printrm.r: corrected so that works with common parameters (thanks to Laura Thompson) 29.10.00 gnlr3.r: corrected typo in normal and inverse Gauss distributions 19.10.00 gausscop: multivariate Gaussian copula with arbitrary marginals elliptic.r: several typing errors corrected —————————————————————————— version 0.8 —————————————————————————— 17.10.00 carma, elliptic, kalseries: handles NULL delta correctly with multivariate response in repeated object restovec: gives names correctly to multivariate 3-dim array covariates.repeated: calculates number of observations correctly when multivariate response 15.10.00 qstable: corrected bug due to change in uniroot (thanks to Gabrielle Kelly) dstable, pstable, qstable, rstable: added checks that parameter values are correct 13.9.00 (growth, repeated, rmutil) restovec: check for NAs in ordinal responses (thanks to Patrick Lindsey) elliptic, kalseries: check that torder is not larger than number of time points (thanks to Patrick Lindsey) elliptic: corrected undefined n (thanks to Patrick Lindsey) 12.9.00 kalseries: constant shape parameter for (log) logistic, Cauchy, and Laplace distributions previously had square root transform kalseries: added inverse Gauss distribution 7.9.00 restovec: corrected error (change in R) when censor is all ones (thanks to Troels Ring) 17.8.00 removed provide () rmutil: removed det() 
14.8.00
  rmna: corrected typo in man page
17.7.00
  nordr.r: corrected minor bugs for weights and data object handling (thanks to Patrick Lindsey)
5.7.00
  as.data.frame.x: added options from default (thanks to Patrick Lindsey)
  rmna: removes NAs in weights (thanks to Patrick Lindsey)
  restovec: handle correctly option, times=T (thanks to Patrick Lindsey)
  covariates.repeated: handle correctly non-repeated observations (thanks to Patrick Lindsey)
21.6.00
  plotrm.r: plot.residuals corrected so ccov works (thanks to Patrick Lindsey)
14.6.00
  carma.r: correction for ccov as one-column matrix (thanks to Patrick Lindsey)
7.6.00
  elliptic.f: fixed crash with more than one covariate in tvcov (thanks to Bruno Genicot)
1.6.00
  elliptic.r: corrected check to allow varfn="identity" or "square"
30.5.00
  bnlr.r: binomial regression with various links
22.5.00
  fnenvir.r: can handle functions without parameters (thanks to Troels Ring)
11.5.00
  fit.dist: corrected exact fit for negative binomial and added default options for main and xlab
6.4.00
  runge.kutta, lin.diff.eqn: functions to solve differential equations
5.4.00
  gar.r: handles censored data correctly when a data object contains more than one response
29.3.00
  runge.kutta.r: solution of differential equations
20.3.00
  nlr: corrected undefined mu1
17.3.00
  print.response: check for NAs in times
15.3.00
  glmm: obtain nest vector from dataframe if supplied
14.3.00
  nordr, ordglm: clearer error message if the response is not a numeric vector with integral values starting at 0 (thanks to Troels Ring)
15.2.00
  ordglm: corrected bug when more than three categories
12.2.00 (repeated, event)
  kalcount, kalseries, kalsurv: autoregression with frailty dependence
9.2.00
  kcountb.c, kserieb.c, ksurvb.c, ksurvg.c: changed -log(1-pfn()) to -pfn(,,,0,1) and removed inthaz.c
8.2.00
  all libraries: corrected C code for R0.99
  kalcount: corrected error in recursive predicted values for gamma intensity
1.2.00
  restovec: corrected handling of weights when response is a list
  kalsurv.r: corrected plotting of profiles for logged distributions
  cutil.c: changed Fortran.h to Rconfig.h and moved to rmutil
  cgamma.c: replaced by cutil.c
  inthaz.c: changed finite() to R_FINITE()
27.1.00
  gar: three-parameter distributions work with constant dispersion parameter
  kalcount, kalseries, kalsurv: if mu contains time-varying covariates, initial estimates must be in ptvc
24.1.00
  finterp, fnenvir: changed name of .fn to avoid conflicts
  most model functions: check that supplied function was not produced by finterp already
20.1.00
  as.data.frame: puts binomial and censored responses as two-column matrices
  gnlr, fmr, gnlmm, gar: binary response need only be one column for binomial
17.1.00
  finterp, fnenvir: handle decimal numbers correctly
  most model functions: print out regression function correctly when envir supplied
16.1.00
  gar: added Consul generalized Poisson distribution
  transform: check for nonpositive and infinite values in Jacobian
  carma, elliptic, gar, kalseries: sqrt transformation checks for zero response values
14.1.00
  most model functions: check for NAs in data objects (because of lvna)
  gnlr, gnlr3, fmr, gnlmm: possible to fit a model without optimizing any parameters in the regressions if they are functions
  restovec, dftorep, read.rep: add additional validity checks for responses
  times.default: replaces times.repeated
11.1.00
  hidden, chidden: handle multinomial count data
  nlr: modified to handle data objects correctly
  most model functions: changed way of detecting multivariate responses
  finterp: correct error for length of response when multivariate
10.1.00
  gettvc: works correctly for first observation when ties=FALSE (thanks to Patrick Lindsey)
  finterp: can find response variables of formula in repeated objects
  for most model functions, one of multiple responses in a repeated data object can be selected for the model
9.1.00
  restovec: handles multivariate matrices, arrays, and lists
  dftorep: transform a dataframe to a repeated data object
  read.rep: read a rectangular data set from a file and create a repeated data object directly
7.1.00
  logitord.f: reformatted to remove tabs for HP compilers (thanks to Osman Buyukisk)
  restovec: responses can have more than one type class
------------------------------------------------------------------------------
version 0.7
------------------------------------------------------------------------------
3.1.2000
  residuals.elliptic: corrected error in calculation of raw residuals
31.12.99
  objectrm.r: can select certain individuals with methods, covariates, delta, nesting, times, weights
  transform: handles NAs in response correctly
28.12.99
  restovec: added name of response variable to list returned
  objectrm.r: added as.data.frame and as.matrix methods for data objects
  wr: works with my data objects
  nordr: changed sign of coefficients for continuation ratio and adjacent categories models so comparable with proportional odds
27.12.99
  finterp: with W&R notation, design matrix no longer returned as attribute
  when ~1 and .envir supplied, returns a function yielding a vector of correct length
26.12.99
  fit.dist: corrected exact fit of negative binomial
  gnlr, gnlr3, fmr, gnlmm: improved speed
  nordr, ordglm: ordinal categories numbered from 0 instead of 1
  hidden, chidden: multinomial categories numbered from 0 instead of 1
  handles ordinal models, thanks to Patrick Lindsey
  gettvc: now handles NAs in response variable
25.12.99
  improved documentation for methods to access data objects and functions
  int: call C instead of Fortran for TOMS614
23.12.99
  restovec: added additional checks that correct data are supplied (thanks to Troels Ring)
  mprofile.carma: corrected bug when no covariates
  carma: corrected bug when delta is a scalar
  carma, elliptic: added checks for incorrect formulae
21.12.99
  hidden, chidden: improved printout and corrected error in checking number of parameters in lists of formulae
20.12.99
  lvna: creates a repeated object leaving NAs in
  hidden, chidden: interactions between time-constant and time-varying covariates allowed
17.12.99
  hidden, chidden: improved printout
16.12.99
  tvctomat: handles lists of factor variables correctly
  restovec: value returned has class of type of response as well as "response"
  added checks
  hidden, chidden: can also use formulae if multinomial
12.12.99
  hidden, chidden: can use formulae if not multinomial
7.12.99
  cmcre: corrected memory leak
6.12.99
  cmcre: continuous-time two-state Markov process with random effects
5.12.99
  coxre: corrected several errors
1.12.99
  stable: fixed plot arguments in help examples
29.11.99
  finterp: fixed bug when multiple ('s or ^ before ( when detecting function names
  nobs: use method instead of direct access in all functions
  provide default method
  covind: provide default method
25.11.99
  collapse: changed name to capply because of conflict in nlme
23.11.99
  profile: changed to mprofile because of conflict in R0.90
22.11.99
  finterp: properly distinguishes unknown parameters from functions
  finterp and fnenvir: when no variables found, changed stop to warning
  nobs: corrected for independent observations when length is one
18.11.99
  stablereg: corrected bug when some parameters are not optimized
  check for NAs in the hessian
17.11.99
  plot.repeated, plot.response: added special call for ordinal responses
  corrected plot for independent observations (thanks to Patrick Lindsey)
14.11.99
  removed unneeded aliases in man pages
  added aliases to plot.profile and plot.iprofile
11.11.99
  added check for Inf (as well as NAs) in hessian to all functions using nlm
  kalseries.r: added error message if times not available for Markov dependence
  changed rep(1,n) to rep(1,nind) when mu function returns scalar
  stable.r: moved call to C code into likelihood function for speed
  int.r: limits can be specified as -Inf, Inf
4.11.99
  kalcount.r, kalseries.r, kalsurv.r: with time-varying covariates in a function or formula, initial estimates can be in preg or ptvc
  changed length(resp$response$y) to n for speed
31.10.99
  gar.r: fixed undefined npt3 for autoregression parameter
  finterp.r: fixed bug for : in W&R formulae
  kalcount.r, kalseries.r, kalsurv.r: added error message when time-varying covariates
22.10.99
  covind: changed so that it works with carma, elliptic, gar, hidden, kalcount, kalseries, and kalsurv objects
18.10.99
  gar.r: corrected printing of parameter values for three-parameter distributions
  gar.c: corrected calculation of lambda in three-parameter distributions
17.10.99
  gar.r: corrected fitted values (due to changes on 12.10.99)
14.10.99
  gar.r: corrected undefined variable, tm (due to changes on 12.10.99)
12.10.99
  stable: stablereg with nonlinear regression replaces stableglm
  finterp, fnenvir: check for factor variables instead of not being a numerical or logical vector
  gar: allow autoregression parameter to depend on covariates
  dist.c, kcountb.c, kserieb.c, ksurvb.c, ksurvg.c, stable.c: added #include "Rconfig.h"
4.10.99
  ordglm.r: added deviance and corrected for zeros in table
  nordr.r: corrected typo
  potthoff.r: corrected erroneous calculation of standard errors (thanks to Tim Auton)
1.10.99
  finterp, fnenvir: fixed conflict of names by beginning all names with a dot (thanks to Patrick Lindsey)
  elliptic.r: changed option and title from elliptic to power exponential
30.9.99
  ordglm.r: generalized linear ordinal regression
------------------------------------------------------------------------------
version 0.6
------------------------------------------------------------------------------
21.9.99
  pkpd.r: changed mu2.1o1cfp to ensure ked>0
20.9.99
  resid.f: correction to work with MS-Windows
7.9.99
  binnest.r, survkit.r: changed NULLs for Fortran to work with R0.65
6.9.99
  ehr.r, kalsurv.r, fmr.r, gnlr.r, gnlr3.r, nlr.r, nordr.r, elliptic.r, gar.r, gnlmm.r, kalcount.r, kalseries.r: changed attributes to work with R0.65
  finterp, fnenvir: variables can be logical as well as numeric
3.9.99
  Makefiles: moved $(FLIBS) to end of line
14.8.99
  print.gnlr: corrected errors in printing fmr, gnlr3, and gnlmm output
  fnenvir.tvcov: corrected error for undefined ex1a (-> ex2a)
  Pareto, gnlmm, hstudent, kalcount, kalseries, pkpd, read.list, read.surv, tvctomat: corrected examples
18.7.99
  hidden.r, chidden.r: corrected one error message
  added printout of degrees of freedom
14.7.99
  binnest.f: modified comments to compile with standard Fortran (thanks to Martin Maechler)
------------------------------------------------------------------------------
version 0.5
------------------------------------------------------------------------------
29.6.99
  plot.response: remove NAs when calculating default ylim
28.6.99
  gnlr.r, gnlr3.r, fmr.r, nordr.r, gnlmm.r: check if user gives a nonlinear formula in linear argument and correctly handle it
27.6.99
  finterp: corrected error message when non-numeric vector supplied
  restovec: corrected printing of total times when negative times present
  added transform methods for response, tccov, and tvcov objects
24.6.99
  gar.r: corrected error in printing shape functions
22.6.99
  binnest.f: modified to compile with g77
8.6.99
  binnest: binary random effects model with two levels of nesting
7.6.99
  restovec: added an additional check for nest variable in lists
6.6.99
  logitord.f: corrected bug in calculation of Hessian (and s.e.)
1.6.99
  elliptic: added multivariate Student t distribution
11.5.99
  finterp.r: functions allowed in W&R formulae
  carma.r, elliptic.r, kalseries.r, kalcount.r, kalsurv.r: allow factor variables
  finterp: can locate and use indices for individuals and nesting as factor covariates
10.5.99
  tcctomat.r, tvctomat.r: allow factor variables
  finterp, fnenvir: changed to check for factor variables
6.5.99
  elliptic.r: allow variance to be a function of the mean function
4.5.99
  gar.c: changed normal distribution shape parameter from sd to variance
3.5.99
  profile and iprofile: fixed to plot correctly with nesting
1.5.99
  tcctomat, tvctomat: allow dataframes
28.4.99
  tvctomat: time-varying covariates can be factors
  elliptic.r, gnlr.r, gnlr3.r, fmr.r, gnlmm.r, gar.r: location and shape functions can have common parameters
26.4.99
  restovec: weights allowed for lists
  finterp, fnenvir: can find the times when envir is a repeated object
  gar.r: allow shape to be a function of the location function
23.4.99
  gnlr.r, gnlr3.r, fmr.r, nordr.r, nlr.r, elliptic.r, gnlmm.r, gar.r, kalseries.r, kalcount.r, kalsurv.r, ehr.r: do not require envir if response has class, repeated
  corrected bugs in restovec and plot.response (Lorenz Gygax)
22.4.99
  generalized plot.residuals
  tvctomat: allow calculation of more than one interaction with time-constant covariates at a time
  finterp and fnenvir: allow variables in environment to have same name as a function
21.4.99
  correction of 18.1.99 by Brian Ripley was wrong: dist.c put back in gnlm
20.4.99
  ksurvb.c: corrected bug when time-varying covariates
  elliptic.r: added option to ccov and tvcov to give covariate names when response has class, repeated
  carma.r: added option to ccov to give covariate names when response has class, repeated
19.4.99
  changed plot.profile to profile and plot.iprofile to iprofile
18.4.99
  elliptic.r: added recursive predicted values when AR(1) and/or random effect
16.4.99
  gnlr.r, gnlr3.r, fmr.r, nordr.r, gnlmm.r, ehr.r: changed order of parameters when function with linear part
15.4.99
  ehr: corrected two errors when lambda function with linear part
  nordr.r: n changed to nrows
13.4.99
  carma.r: corrected predicted values when response is transformed
  gar.r, kalseries.r: changed handling of transformed responses
------------------------------------------------------------------------------
version 0.4
------------------------------------------------------------------------------
12.4.99
  added dependency on rmutil to DESCRIPTION
11.4.99
  elliptic.f: corrected handling of dose for PKPD model when time-varying covariates are present
6.4.99
  elliptic.r, gnlmm.r, gar.r, kalseries.r, kalcount.r, kalsurv.r, ehr.r, nordr.r, nlr.r: modified to use fnenvir
5.4.99
  gnlr.r, gnlr3.r, fmr.r: modified to use fnenvir
4.4.99
  fnenvir: checks functions for covariates and parameters and modifies them to read from data objects
1.4.99
  elliptic.r: modified to use model formulae with unknowns
  finterp.r: added data objects as environment
  tvctomat, tcctomat: can combine two data objects
31.3.99
  gar.r: modified to use model formulae with unknowns
30.3.99
  rmna: check if a covariate only has one value after NA removal
  fixed examples docs so that they work
29.3.99
  kalcount.r, kalsurv.r, ehr.r: modified to use model formulae with unknowns
28.3.99
  gnlmm.r, kalseries: modified to use model formulae with unknowns
  restovec: added coordinates to response class for spatial data
26.3.99
  gnlr.r, gnlr3.r, fmr.r, nordr.r, nlr.r: modified to use model formulae with unknowns
24.3.99
  changed language check to inherits formula in all functions
  added methods for extracting elements from data objects
  finterp.r: transforms model formulae with unknowns into functions
22.3.99
  restovec: times no longer required for clustered data
  type attribute added
  carma.r, elliptic.r, kalseries.r, kalcount.r: check to see if times available
15.3.99
  rmaov.r: wrote documentation
  pkpd.r: added two new models and corrected one other
13.3.99
  restovec: allow ties in times
23.2.99 gar.c: corrected Laplace cdf and allowed negative values 11.2.99 ehr: corrected for ties kalsurv.r: prints out "birth process" when applicable instead of renewal process logitord.r: removed DUP=F from Fortran call 8.2.99 km.r: fixed bug in plot.dist.km when several groups are plotted (Gareth Ridall) 7.2.99 improved handling of variable names in tcctomat, tvctomat, and functions calling them rmaov.r: split-plot aov from Ralf Goertz 6.2.99 glmm.r: accepts transformed data if dataframe supplied 5.2.99 km.r: fixed bug for double lines with censored observations (Gareth Ridall) ehr.r: modified handling of language 4.2.99 km.r: added print.km to remove attributes restovec: accepts all response data, not just repeated measurements tvctomat: added calculation of interactions 2.2.99 restovec: added adding several column matrices in lists with censoring kalsurv.r: added delta option 1.2.99 glmm.r: binary response with binomial handled correctly 30.1.99 plot.iprofile.carma: corrected nind argument restovec, carma, elliptic, kalcount, kalseries: added how to handle `times' for clustered data to docs 28.1.99 bivbinom.r, marghom.r: minor corrections rs.r: improved printout 26.1.99 readrm.r: corrected lty specification in plot.response added option to plot points 24.1.99 gnlr.r, gnlr3.r, fmr.r, gnlmm.r: y can have classes, response or repeated added DUP=F to all .C and .Fortran calls pbirth.r: binomial data distributions 22.1.99 readrm.r: added ... 
for graphics options in plot.response and plot.repeated

21.1.99
  rmna: added checks that ccov and tvcov have correct class

19.1.99
  dist.c: changed static romberg to romberg2 and added static interp
  carma.r, chidden.r, elliptic.r, gar.r, hidden.r, kalcount.r, kalseries.r, kalsurv.r: allow response to have class, repeated
  restovec: allow delta to be a dataframe

18.1.99
  corrections by Brian Ripley
  gnlm: removed redundant dist.c
  enclosed library.dynam in .First.lib
  potthoff.r: added matrix dimension checks
  util.r: removed orth function
  potthoff.r: replaced orth by contr.poly

17.1.99
  carma.r, chidden.r, elliptic.r, gar.r, hidden.r, kalcount.r, kalseries.r, kalsurv.r: copy response vector more efficiently
  restovec: added total time for survival data
  coxre.r: reorganized for efficiency, eliminating data.frame
  cprocess.r: times can have class, response

16.1.99
  gnlr.r, gnlr3.r, fmr.r, gnlmm.r: removed -delta/2 in right censoring calculation
  dist.r, gnlr3.r, gar.c, hidden.f: changed parametrization of Burr to agree with kalsurv.r
  elliptic.r: use var(y) to get initial estimate of variance

------------------------------------------------------------------------------
version 0.3
------------------------------------------------------------------------------

14.1.99
  kalsurv.r: corrected printing of number of subjects and observations

2.1.99
  cprocess.r: allow event counts with unequal times
  added mode="double" to is.vector

29.12.98
  corrected minor bugs in fmr

28.12.98
  corrected abs bug for Laplace in kalman C functions

27.12.98
  restovec: corrected binary totals when given as a vector
  gar: added Levy, Pareto, generalized inverse Gauss, and power exponential distributions
  hidden and chidden: added various overdispersed and continuous distributions

22.12.98
  hidden and chidden: added filter calculation and plots

21.12.98
  moved Student t from gnlr to gnlr3
  renamed beta as Pareto in kalcount, kalseries, and kalsurv
  corrected various minor errors in fmr and gnlr3

20.12.98
  dist.r, gnlr.r,
fmr.r: added gamma count and Pareto distributions

18.12.98
  chidden: continuous-time hidden Markov chain models

7.12.98
  dist.r, gnlr.r, fmr.r: added Levy distribution
  removed .so from Makefiles and library.dynam

6.12.98
  util.r: added spectral decomposition to mexp

5.12.98
  rmutil: added several p and d functions
  gnlr3.r: added censored generalized inverse Gaussian and power exponential distributions

2.12.98
  int.r: vectorized Romberg integration

1.12.98
  int.r: added option for Romberg integration

30.11.98
  updated libraries with Brian Ripley's corrections

25.11.98
  hidden: allow values in the transition matrix to be fixed at 0 or 1

24.11.98
  hidden: added independence model

23.11.98
  inthaz.c: changed header include
  bessel: removed function
  gnlr3.r: changed to internal bessel function

14.11.98
  hidden: added multinomial distribution

12.11.98
  hidden.f: corrected Poisson and binomial calculations

5.11.98
  carmasub.f and survkit.f: changes for compatibility with g77

------------------------------------------------------------------------------
version 0.2
------------------------------------------------------------------------------

2.11.98
  ehr.r: corrected printing coefficients with linear and other parameters

1.11.98
  km.r: corrected NaNs in log

29.10.98
  carma.r: corrected printout of mean time
  km.r: corrected ylab for cdf

26.10.98
  rmna: handles NAs in time-constant covariates properly
  carma.r and elliptic.r: accept ccov of class, tccov
  cprocess.r: added plots from counts of events

19.10.98
  changed to inherits() throughout
  rationalized printing of gnlr, gnlr3, fmr, gnlmm and moved to rmutil
  added delta option to carma and elliptic

18.10.98
  carma.r and elliptic.r: added handling of delta when y has class, response

17.10.98
  gar.r: added cloglog link

12.10.98
  gnlmm.r: corrected handling of delta when y has class, response

11.10.98
  replaced tapply() with collapse() in bivbinom, catmiss, glmm, gnlmm, ehr, coxre

10.10.98
  ehr.r: check for singular covariance matrix; print names of
variables for coefficients when language

8.10.98
  kcountb.c: corrected dplogis call
  gnlmm.r: corrected calls to ddp, dmp, ddb, and dmb
  coxre.r: removed as.is=T in data.frame
  corrected printing shape parameters when language used in gnlr, gnlr3, fmr, gnlmm

7.10.98
  rs.r: put in check that all covariates are positive
  gnlmm.r: set censor to F for binomial data
  dist.c: changed ddp, dmp, ddb, and dmb to log and introduced weights

6.10.98
  kseries.c: corrected error in serial update
  kalseries.r: corrected printing error when there is an interaction
  kalsurv: added serial update
  inthaz.c: put back ihlogis (disappeared with nmath)
  renamed wr.r as util.r
  moved det and %**% from repeated and growth to rmutil/R/util.r

5.10.98
  corrected check in carma, elliptic, gar, and kalseries for nonpositive transformed values

4.10.98
  glmm.r: corrected two errors

1.10.98
  extended residual plots to all of class recursive
  kalcount, kalseries, kalsurv: return mean profiles in z$pred
  plot.profile: accepts z$pred as well as a mean function
  nbkal.r: corrections
  corrected and updated a lot of docs

30.9.98
  moved kalsurv to event library
  renamed rmtools as rmutil
  inthaz.c: corrected error from change to nmath

29.8.98
  kalsurv.r: added recursive fitted values
  kalseries.r: added recursive fitted values
  updated plot.residuals for recursive class

27.9.98
  corrected docs for plot.profile, plot.iprofile
  added covind.default
  plot.iprofile: corrected default settings

24.9.98
  gettvc.r: allow an option for ties
  bessel.r: only calculate one kind of function

20.9.98
  gettvc.r: allow NAs in time-varying covariate; corrected for ties between response and covariate
  tvctomat: allow tvcov to already have class, "tvcov"
  added as.double in all Fortran and C calls

18.9.98
  plotrm.r: corrected bug in plot.iprofile due to new covind()

16.9.98
  pkpd.r: corrected mu2.0o2c and added mu2.0o2cfp

15.9.98
  replaced Bessel, Gauss-Hermite, and integration routines
  bessel.r: added docs

14.9.98
  moved wr to rmtools and added docs
  added
covind function to rmtools

12.9.98
  kalserie.r: added delta option
  tcctomat.Rd: corrected alias
  created new library, rmtools

11.9.98
  dist.r: added beta binomial
  dist.c: simplified calls to overdispersion functions
  autocor.r: corrected pergram
  kalserie.r: corrected error in printing parameters with torder>0
  kserieb.c: corrected error when mu function used

10.9.98
  readlist.r: corrected binomial totals for lists in restovec
  fmr.r: removed unnecessary code
  gar.r: added overdispersed binomial data
  dist.r: allow dispersion to be a scalar when mean is a vector
  created documentation for p and d functions

9.9.98
  nordr.r: corrected weights for adjacent categories model; test for p>1 in proportional odds
  gar.r: added checks on times and mu arguments; added binomial data
  corrected docs for elliptic, gar, kalcount, kalseries, kalsurv, nbkal for z$index
  clarified docs for rmna, restovec, tcctomat, and tvctomat

8.9.98
  removed backslash at end of Makefiles for event, gnlm, growth
  moved integer declarations to the beginning in carmasub.f, elliptic.f, gettvc.f, survkit.f so that g77 should work

5.9.98
  gar.r: corrected predictions for transformed responses

------------------------------------------------------------------------------
version 0.1
Continuous mapping
From Encyclopedia of Mathematics

A mapping of one topological space into another (cf. Continuous function). There are many equivalent definitions of continuity. Thus, for the continuity of a mapping it suffices that:

a) the inverse image of every open set is open;

b) the inverse image of every closed set is closed.

The concept of a continuous function, which was correctly stated already by B. Bolzano and A.L. Cauchy, played an important role in the mathematics of the 19th century. Weierstrass' function, which is nowhere differentiable, "Cantor's staircase" and Peano's curve pointed to the need to consider more special cases of continuity. The necessity of selecting special classes of mappings became even more urgent when continuous mappings of more general objects (topological spaces) were considered. One could mention the following important types of continuous mappings: topological mappings or homeomorphisms, perfect mappings, closed mappings, open mappings, and quotient mappings (cf. Homeomorphism; Closed mapping; Open mapping; Perfect mapping; Quotient mapping).

One of the aims of topology is the classification of both spaces and mappings. Its essence consists of the following: three fundamental and closely connected problems are selected. 1) In what case can every space of a certain fixed class

These general statements comprise, in particular, the following question: what topological properties are preserved by mappings of one kind or another under transition from a space to its image or inverse image?

1) Every (cf. Essential mapping) onto an

The first specific problems of this kind were solved at the beginning of the 20th century. Such are, for example, the representation of an arbitrary compactum as a continuous image of the Cantor perfect set (Aleksandrov's theorem); the characterization of metric spaces with a countable base as open continuous images of subspaces of the space of irrational numbers (Hausdorff's theorem); the description of locally connected continua as continuous images of an interval.
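The equivalent definitions a) and b) referred to above are the standard textbook characterisations; stated explicitly in symbols:

```latex
\textbf{Definition.} A mapping $f\colon X \to Y$ of one topological space into
another is \emph{continuous} if and only if either of the following
(equivalent) conditions holds:
\begin{enumerate}
  \item[a)] the inverse image $f^{-1}(U)$ of every open set $U \subseteq Y$
            is open in $X$;
  \item[b)] the inverse image $f^{-1}(F)$ of every closed set $F \subseteq Y$
            is closed in $X$.
\end{enumerate}
```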
The solution of these problems not only made it possible to answer questions about interrelationships between previously known spaces, but also led to the emergence of interesting new classes of spaces. Such are, for example, dyadic compacta, paracompact

The concept of a real-valued continuous function, that is, a continuous mapping of a topological space into the real numbers, is connected with results such as the Urysohn lemma, the Brouwer–Urysohn theorem on the extension of continuous functions from closed subsets of normal spaces, A.N. Tikhonov's definition of completely-regular spaces (cf. Completely-regular space), and the Stone–Weierstrass theorem. These and other investigations led to the creation of a theory of rings of continuous functions, the methods of which turned out to be quite fruitful in general topology.

A substantial part of dimension theory is the study of the behaviour of dimensional characterizations of spaces under transition to an image or inverse image by mappings of one class or another. Here an important role is played by

One of the characteristics of dimension is the possibility of extending a continuous mapping from a closed subset to an

One of the best studied classes of continuous mappings is that of the perfect irreducible mappings (cf. Perfect irreducible mapping). The theorem on the absolute of a regular space stimulated an entire series of investigations in this domain. In particular, the concept of an absolute has been extended to the class of all Hausdorff spaces.

Closely connected with the concept of a continuous mapping turned out to be that of a

The selection of uniformly-continuous functions from the class of all numerical functions of one or several variables became one of the starting points of research leading to the notion of a uniform space.

Continuous mappings of one type or another lie at the basis of the theory of retracts, splines and homology theory. A major role in modern mathematics is played by various aspects of the theory of multi-valued mappings (cf. Multi-valued mapping).
Questions related to continuous mappings of Euclidean spaces are interesting by the wealth of ideas they contain.

References

[1] P.S. Aleksandrov, "Einführung in die Mengenlehre und die allgemeine Topologie", Deutsch. Verlag Wissenschaft. (1984) (Translated from Russian)
[2] N. Bourbaki, "Elements of mathematics. General topology", Addison-Wesley (1966) (Translated from French)
[3] J.L. Kelley, "General topology", Springer (1975)
[4] A.V. Arkhangel'skii, V.I. Ponomarev, "Fundamentals of general topology: problems and exercises", Reidel (1984) (Translated from Russian)
[5] R.A. Aleksandryan, E.A. Mirzakhanyan, "General topology", Moscow (1979) (In Russian)

Comments

A mapping is called

The Brouwer–Urysohn theorem mentioned above is generally known as the Tietze–Urysohn theorem. H. Tietze proved it for metric spaces and P.S. Urysohn for normal spaces. The statement that locally connected continua are continuous images of the unit interval is commonly known as the Hahn–Mazurkiewicz theorem. More about rings of continuous functions can be found in [a1].

References

[a1] L. Gillman, M. Jerison, "Rings of continuous functions", Van Nostrand-Reinhold (1960)

How to Cite This Entry:
Continuous mapping. V.V. Fedorchuk (originator), Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Continuous_mapping&oldid=16175

This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
Grand Street Wealth Management

With contract openers being exchanged in the near future, everyone seems to be talking about what we Delta pilots should be getting in our next contract. Everyone has their areas they would like to see improved. Pay, scope and retirement seem to be at the top of the list. Today we will look more in-depth at the Delta Pilots Defined Contribution (DC) plan to see how it stacks up against the old Defined Benefit (DB) plan it replaced.

A Little History

Without getting into the weeds, the old DB pension plan paid a percentage of your final earnings for the rest of your life. When most of today's pilots at Delta were hired, they expected to receive 60% of their final average earnings during retirement. Newly hired pilots now enter a different retirement system: a Defined Contribution (DC) plan (it is not technically a DC plan, but it acts almost the same way). It pays 15% of the pilot's earnings into their 401K. Since these plans are so different it is hard to make a direct comparison, but we will work through an example to see how they stack up.

Our Example

We will assume our pilot was hired young enough to have a 30-year career. During this career he will spend 8 years as an A320 First Officer, 7 years as an A330 First Officer, 8 years as an A320 Captain and 7 years as an A330 Captain. We will also assume he gets paid 80 hours a month and profit sharing is 10% per year. We will do this whole example in "real" terms, meaning we will work in today's dollars and assume inflation will have the same effect on all scenarios. This means that the hourly rates are increasing with inflation each year. We will also assume our pilot gets a real return of 5.6% per year; this is the historical real return of a 60/40 portfolio of the S&P 500/5-year US Treasury Notes (an 8.71% nominal return). Depending on who you are and where you sit, these can look like very optimistic assumptions or very pessimistic.
Obviously I could have used any range of assumptions, but we will go with these and see how they work. Our pilot in this example would retire off the A330, and his final average earnings (FAE) would be $269,575.68 (again, this is in today's dollars; the nominal FAE with 3% inflation for 30 years would be $654,330.93). Under the old DB plan he would be entitled to receive 60% of his FAE, or $161,745, for the rest of his life. There were other options available to him (a 50% survivor benefit, for example), but we will keep it simple with the single life annuity.

The cost of a single life annuity starting at age 65 with an annual payout of $161,745 for the rest of his life is $2,468,498 (check out immediateannuities.com). In other words, we would have to save $2,468,498 to buy the same annuity the old DB pension would have provided. Unfortunately, the 15% contribution in the new DC plan isn't going to cut it. Using the real return mentioned above of 5.6%, he would have saved only $1,917,760. This would buy him an annuity providing only 47% of FAE. To get to the 60% mark, Delta would have to increase the contribution to 19% of earnings!

But this isn't the whole story. In our example we received a constant real return of 5.6% per year. In the real world there is nothing constant about investing in risky assets. As I mentioned above, the 60/40 portfolio of S&P 500/5-year Treasury Notes had a real return of 5.6%, but it also had a standard deviation of 11.53%. If we take the volatility of the returns into account, the picture looks even worse.

I ran a Monte Carlo simulation using the return and standard deviation of our portfolio with Delta contributing 19% into the 401K. Basically, instead of assigning a constant rate of return every year, it was a random number generated using the statistics of the portfolio.
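The deterministic part of the comparison can be sketched in a few lines. This is an illustration only: the flat pay path below is a placeholder (the article uses a four-seat pay progression, so its dollar figures will not match), while the 5.6% real return, 15% contribution and the $2,468,498 annuity cost are taken from the article.

```python
def fv_of_contributions(annual_pay, rate, r):
    """Future value at retirement of contributing `rate` of each year's pay,
    with the balance compounding at real return r (end-of-year contributions)."""
    total = 0.0
    for pay in annual_pay:
        total = total * (1 + r) + rate * pay
    return total

# Placeholder flat pay path -- NOT the article's seat progression.
pay = [200_000] * 30
target = 2_468_498  # cost of the 60%-of-FAE annuity, from the article

nest_egg = fv_of_contributions(pay, 0.15, 0.056)

# The future value is exactly linear in the contribution rate, so the
# rate needed to hit the annuity target can be read off from one run.
required_rate = 0.15 * target / nest_egg
```

Because of that linearity, doubling the contribution rate doubles the nest egg; the article's 19% figure comes out of the same kind of back-solve, just with the real pay progression.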
If you remember the bell curve from statistics, most of the returns fell under the fat part of the curve, but some years the returns were really good and others were really bad. After assigning a random return for each year, I can calculate whether there was enough money saved to buy the annuity which provides the 60% FAE. I then run through the calculation 1,000 times to see what percentage of the time I have enough money to buy that 60% FAE annuity.

With Delta contributing 19%, he actually had only a 58% chance of having enough money to buy the 60% FAE annuity. If he wanted to be more confident (say 90%) that he would be able to buy that 60% FAE annuity, Delta would have to contribute 28% of his earnings to the 401K!

So as you can see, even with a pretty good career and a decent rate of return, we are still pretty far from having as valuable a retirement situation as we used to. This difference now has to be made up by saving more in our 401Ks ourselves, which of course decreases our take-home pay. To get to that 90% confidence level above, he would have to contribute 13% of his income to his 401K right from day 1, in addition to the 15% Delta provides.

I can hear all the non-pilots who read this blog really crying for us right now, since this is a better situation than most people are in these days. Be that as it may, I hope increasing retirement funding is a priority in the next contract.
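The Monte Carlo experiment described above can be sketched as follows. The return statistics (5.6% mean, 11.53% standard deviation) and the 19% contribution come from the article; the flat pay path and the normally distributed annual returns are my placeholder assumptions, so the resulting probability only illustrates the method rather than reproducing the 58% figure.

```python
import random

def simulate_once(annual_pay, rate, mean_r, sd_r, rng):
    """One career: each year draw a random real return, then add the contribution."""
    balance = 0.0
    for salary in annual_pay:
        balance = balance * (1 + rng.gauss(mean_r, sd_r)) + rate * salary
    return balance

def success_probability(annual_pay, rate, target,
                        mean_r=0.056, sd_r=0.1153, trials=1_000, seed=0):
    """Fraction of simulated careers that end with enough to buy the annuity."""
    rng = random.Random(seed)
    hits = sum(
        simulate_once(annual_pay, rate, mean_r, sd_r, rng) >= target
        for _ in range(trials)
    )
    return hits / trials

pay = [200_000] * 30          # placeholder flat pay path, not the seat progression
p = success_probability(pay, 0.19, 2_468_498)
```

Raising the contribution rate shifts the whole distribution of outcomes upward, which is why the article's required rate jumps from 19% (for a 58% chance) to 28% (for a 90% chance).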
4.1 And 4.2 Quiz Version A

1. Find the area of a triangle that has a base of 12 and a height of 6.
   A.  B.  C.  D.  E. Cannot be determined
   Correct Answer: B. 36
   To find the area of a triangle, we use the formula: Area = (base * height) / 2. In this case, the base is 12 and the height is 6. Plugging these values into the formula, we get: Area = (12 * 6) / 2 = 72 / 2 = 36. Therefore, the area of the triangle is 36.

2. Find the area of a triangle with a base of 4 and a height of 3.
   Correct Answer: 6
   The area of a triangle is calculated by multiplying the base and the height of the triangle and then dividing the result by 2. In this case, the base is 4 and the height is 3. So, the area can be calculated as (4 * 3) / 2 = 6.

3. Find the area of the triangle below:
   Correct Answer

4. Find the area of a rectangle with a base of 6 and a height of 8.
   Correct Answer: 48
   The area of a rectangle is calculated by multiplying its base by its height. In this case, the base is given as 6 and the height is given as 8. Therefore, the area of the rectangle can be found by multiplying 6 by 8, which equals 48.

5. Find the area of the rectangle below. Leave the units out of your answer.
   Correct Answer: 168 square centimeters

6. Find the hypotenuse of the triangle below.
   Correct Answer

7. Find the hypotenuse of the triangle below.
   Correct Answer

8. Find the missing leg of the right triangle below.
   Correct Answer

9. Find the missing leg of the triangle below.
   Correct Answer

10. Two legs of a right triangle measure 5cm and 6cm. What is the length of the hypotenuse to the nearest tenth of a centimeter?
    A. 11 cm  B. 3.3 cm  C. 7.8 cm  D. 15 cm  E. 30 cm
    Correct Answer: C. 7.8 cm
    In a right triangle, the length of the hypotenuse can be found using the Pythagorean theorem, which states that the square of the hypotenuse is equal to the sum of the squares of the other two sides.
In this case, the squares of the legs are 25 and 36. Adding these together gives a sum of 61. Taking the square root of 61 gives a value of approximately 7.8 cm, which is the length of the hypotenuse to the nearest tenth of a centimeter.

11. Find the length of the diagonal below.
    Correct Answer: 25 cm (also accepted: 25, 25cm, 25 centimeters)
    The given answer options "25, 25 cm, 25cm, 25 centimeters" suggest that the length of the diagonal is 25 cm, which is the correct answer.

12. Find the perimeter of the triangle below. Note: you will need to use the Pythagorean Theorem first to find the missing side.
    Correct Answer: 30
    The perimeter of a triangle is the sum of the lengths of all three sides. Since the answer is given as 30, it means that the sum of the lengths of all three sides of the triangle is 30.

13. Find the area of the triangle below. Note: first you must find the base of the triangle using the Pythagorean Theorem. Do not use the hypotenuse when you are finding the area. Remember: Area of a triangle = b*h/2
    Correct Answer: 120
    The question asks to find the area of a triangle, and instructs to first find the base using the Pythagorean Theorem. Once the base is found, the area is the base multiplied by the height, divided by 2. Since the answer is 120, the product of the base and the height must be 240, so the area is 240 / 2 = 120.

Pythagorean Theorem and Area Quiz

Quiz created Mar 06, 2010; quiz edited May 01, 2023 by ProProfs Editorial Team.
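The two formulas the quiz exercises, the triangle-area formula and the Pythagorean theorem, can be checked directly against the worked answers above:

```python
import math

def triangle_area(base, height):
    """Area of a triangle: (base * height) / 2."""
    return base * height / 2

def hypotenuse(a, b):
    """Length of the hypotenuse of a right triangle with legs a and b."""
    return math.sqrt(a ** 2 + b ** 2)

assert triangle_area(12, 6) == 36           # question 1
assert triangle_area(4, 3) == 6             # question 2
assert round(hypotenuse(5, 6), 1) == 7.8    # question 10: sqrt(61) ~ 7.81
```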
Hard Interview Question (Brute Force) – Real Python

Hard Interview Question (Brute Force)

In this lesson, you'll use what you learned in the previous sections to solve a hard real world interview question. Hard means something you might find in a more difficult technical phone screen or an onsite interview. Here's the question:

class Link:
    def __init__(self, val, next=None):
        self.val = val
        self.next = next

    def __str__(self):
        if not self.next:
            return f"Link({self.val})"
        return f"Link({self.val}, {self.next})"


def merge_k_linked_lists(linked_lists):
    """
    Merge k sorted linked lists into one sorted linked list.

    >>> print(merge_k_linked_lists([
    ...     Link(1, Link(2)),
    ...     Link(3, Link(4))
    ... ]))
    Link(1, Link(2, Link(3, Link(4))))
    >>> print(merge_k_linked_lists([
    ...     Link(1, Link(2)),
    ...     Link(2, Link(4)),
    ...     Link(3, Link(3)),
    ... ]))
    Link(1, Link(2, Link(2, Link(3, Link(3, Link(4))))))
    """

In this first lesson, you'll go through a brute force solution. The entire solution will be available at the end of the course. Try the problem yourself and then watch the video for the solution. If you want to learn more about linked lists, check out Linked Lists in Python: An Introduction.

My solution before watching video:

import heapq

class Link:
    def __init__(self, val, next=None):
        self.val = val
        self.next = next

    def __str__(self):
        if not self.next:
            return f"Link({self.val})"
        return f"Link({self.val}, {self.next})"

def merge_k_linked_lists(linked_lists):
    """
    >>> print(merge_k_linked_lists([Link(1, Link(2)), Link(3, Link(4))]))
    Link(1, Link(2, Link(3, Link(4))))
    >>> print(merge_k_linked_lists([Link(1, Link(2)), Link(2, Link(4)), Link(3, Link(3))]))
    Link(1, Link(2, Link(2, Link(3, Link(3, Link(4))))))
    """
    links = []
    for link in linked_lists:
        while link:
            heapq.heappush(links, link.val)
            link = link.next
    res = Link(heapq.heappop(links))
    v = res
    while links:
        v.next = Link(heapq.heappop(links))
        v = v.next
    return res

I guess complexity would be: log(N) for the priority queue and 2N for list traversal.
So it would be Nlog(N)

@belushkin thank you for sharing! Yes that is the case where N is the number of elements (in my example I use k and n as different variables). Note for everyone, if you want to learn more about linked lists, here is a Wikipedia article

My solution takes advantage of the fact that the Link class does not contain any special information. If the Link class contains any special information, this is not useful.

def merge_k_linked_lists(linked_lists):
    for linked_list in linked_lists:
    for val in all_values:
    if not prev_link:
    return prev_link

Here is my solution. It finds the linked list that has the current link with the smallest value, and then adds that link on to the end of the new list and removes it from its original list. This should make efficient use of the fact that all the linked lists are themselves sorted.

def merge_k_linked_lists(linked_lists):
    first_link_in_new_list = None
    while len(linked_lists) > 0:
        # Find the smallest link by value of all the links currently in linked_lists
        smallest_link = min(linked_lists, key=lambda x: x.val)
        smallest_link_index = linked_lists.index(smallest_link)
        if first_link_in_new_list is None:
            # If this is the first iteration, set up new linked list
            first_link_in_new_list = smallest_link
            last_link_in_new_list = smallest_link
        else:
            last_link_in_new_list.next = smallest_link
            last_link_in_new_list = last_link_in_new_list.next
        # Update the linked list that had the smallest value,
        # remove if there are no more values
        if linked_lists[smallest_link_index].next is None:
            del linked_lists[smallest_link_index]
        else:
            linked_lists[smallest_link_index] = linked_lists[smallest_link_index].next
    return first_link_in_new_list

I'm trying to understand your solution, but there's a part I'm really struggling with. I've commented the code to explain what I'm not understanding. I've had a look at the resources provided, but I'm still not getting what's happening with pointer = result. Can anyone explain, or link to a good explanation of the theory?
sorted_val = sorted(values)

I know the next section is supposed to start creating a linked list by iterating through the sorted values but am unclear how.

# I know this is initialising the final linked list
result = Link(0)
# What is happening here? I think this is the key point I'm struggling with:
# we're only updating pointer in the for loop, but that's changing result.
# How?
pointer = result
for val in sorted_val:
    print(f'result is {result}, pointer is {pointer}')
    pointer.next = Link(val)
    pointer = pointer.next
# I understand the .next removes the first "0" that we added when creating the linked list
return result.next

I was going to ask this question on stackoverflow but there have been quite a few similar questions - unfortunately, none of them are very clear to me.

@Elise Ratcliffe The solution presented in the video boils down to the following three high-level steps:

1. Go through each individual linked list and unpack the stored values onto a flat list
2. Sort the list of unpacked values
3. Build a new linked list from the sorted values

It seems to me that you're struggling to understand the last point, so I'll try to break it down for you. Let's use one of the examples from the video to try and visualize what's going on:

- Linked lists (input):
- Unpacked values: 1, 2, 2, 4, 3, 3
- Sorted values: 1, 2, 2, 3, 3, 4
- Merged linked list (the expected output): ① → ② → ② → ③ → ③ → ④

So, now the question is, how do you wrap the sorted list of numbers into a corresponding linked list? Both are sequences, but every element of the linked list (called a node) must also point to the next element until there are no more elements. You can start by creating the first node (head) of your list:

head = Node(0)

(I'm using slightly different variable and class names here to make my point across easier. Also, note that using .next as an attribute name shadows the built-in next() function.
In cases like this, it's customary to append a single underscore character to the variable name to avoid name conflicts, for example, .next_.)

head is going to be the first element of your resulting linked list. It can have any value at all, such as zero, because you're going to discard that element later anyway. Your linked list looks like this at the moment: ⓪

Next, you go through the sorted values, then create the associated node element for each of the values, and hook the current node up to the previous node. To keep track of the previous node, you'll need another helper variable. Let's call it previous because we'll use that variable to refer to the last node:

head = Node(0)
previous = head
for value in sorted(values):
    current = Node(value)
    previous.next_ = current

Initially, the previous node is set to the head of your resulting linked list. This will ensure that the head will now point to the current node with the desired value, for example: ⓪ → ①. You must also remember to update the current value of the previous node so that the subsequent loop runs will move to the next node in your linked list:

head = Node(0)
previous = head
for value in sorted(values):
    current = Node(value)
    previous.next_ = current
    previous = current

Otherwise, you'd always keep overwriting the .next_ attribute of the head, ending up with something like ⓪ → ④ as a result. You can rewrite the code snippet above without using the current helper variable, which is similar to the solution in the video:

head = Node(0)
previous = head
for value in sorted(values):
    previous.next_ = Node(value)
    previous = previous.next_

When the loop finishes, you'll have created the following linked list: ⓪ → ① → ② → ② → ③ → ③ → ④.
Finally, you can use head.next_ to discard the extra node that you created in the beginning:

head = Node(0)
previous = head
for value in sorted(values):
    previous.next_ = Node(value)
    previous = previous.next_
return head.next_

So, your linked list will not contain the initial head but only the nodes with the desired values: ① → ② → ② → ③ → ③ → ④.

Here's the complete solution with an example:

from dataclasses import dataclass

@dataclass
class Node:
    value: int
    next_: "Node" = None

def merge(linked_lists):
    head = Node(0)
    previous = head
    for value in sorted(unpack_values(linked_lists)):
        previous.next_ = Node(value)
        previous = previous.next_
    return head.next_

def unpack_values(linked_lists):
    for head in linked_lists:
        while head:
            yield head.value
            head = head.next_

if __name__ == '__main__':
    print(merge([
        Node(1, Node(2)),
        Node(2, Node(4)),
        Node(3, Node(3)),
    ]))

My solution before watching video 😬 ..

def merge_k_linked_lists(linked_lists):
    nodes = []
    for linked_list in linked_lists:
        node = linked_list
        while node:
            nodes.append(node)
            node = node.next
    # Sort the nodes based on their value and re-link in order.
    nodes.sort(key=lambda node: node.val)
    for i in range(len(nodes)):
        nodes[i].next = nodes[i+1] if i < len(nodes)-1 else None
    return nodes[0]

My solution:
What is a Bipartite Graph? - Only Code

A Bipartite Graph is a graph whose vertices can be divided into two sets such that no two vertices within the same set are adjacent. In other words, it is a graph that can be coloured with only two colours such that no two adjacent vertices have the same colour. Bipartite graphs are widely used in various applications such as matching problems, scheduling, and resource allocation. They are also used in various fields of science and engineering, such as genetics, computer science, and operations research.

Problem statement

Given a graph G, check whether it is a Bipartite graph or not.

isBipartite(G)
1. Initialise a boolean array visited (to mark nodes as visited) and an integer array colour (to mark the colour of nodes).
2. Perform DFS (Depth-First Search) on graph G.
3. Iterate through all the adjacent vertices of the current node.
4. If an adjacent vertex v has been visited before and has the same colour as the current node, return false. This means that the graph is not Bipartite (explanation below).
5. If all the adjacent vertices have been visited and assigned different colours, return true. This means that the graph is Bipartite.

Why does visiting a node that has been visited before and has the same colour as the current node mean the graph is not Bipartite? Because if a vertex has already been visited and has the same colour as the current vertex, there is an odd-length cycle in the graph, and hence the graph cannot be Bipartite. In general, if we encounter an odd-length cycle during the depth-first search, we can be sure that the graph is not Bipartite. This is because in a Bipartite graph all cycles have an even length, so an odd-length cycle means the graph cannot be coloured with two colours without some pair of adjacent vertices sharing a colour.

Code Implementation

// main.cpp
// Bipartite Graph
// Created by Himanshu on 25/02/23.
```cpp
#include <iostream>
#include <vector>
#include <cstring>
#define N 6
using namespace std;

vector<int> graphFirst[N+1];
vector<int> graphSecond[N+1];
int color[N+1];
bool visited[N+1];

void initializeGraphs() {
    // The original edge lists were lost in extraction; these are
    // illustrative edges consistent with the printed output below.
    // graphFirst contains an odd cycle (1-2-3-1): not bipartite.
    graphFirst[1].push_back(2); graphFirst[2].push_back(1);
    graphFirst[2].push_back(3); graphFirst[3].push_back(2);
    graphFirst[3].push_back(1); graphFirst[1].push_back(3);
    // graphSecond is an even cycle (1-2-3-4-1): bipartite.
    graphSecond[1].push_back(2); graphSecond[2].push_back(1);
    graphSecond[2].push_back(3); graphSecond[3].push_back(2);
    graphSecond[3].push_back(4); graphSecond[4].push_back(3);
    graphSecond[4].push_back(1); graphSecond[1].push_back(4);
}

bool dfs(vector<int> graph[], int node, int c) {
    visited[node] = true;
    color[node] = c;
    for (int i = 0; i < (int)graph[node].size(); i++) {
        int v = graph[node][i];
        if (!visited[v]) {
            // ~c flips between 0 and -1, giving two alternating colours.
            if (!dfs(graph, v, ~c)) {
                return false;
            }
        } else if (color[node] == color[v]) {
            return false;
        }
    }
    return true;
}

bool isGraphBipartite(vector<int> graph[]) {
    bool flag = true;
    for (int i = 1; i <= N; i++) {
        if (!visited[i]) {
            if (!dfs(graph, i, 0)) {
                flag = false;
            }
        }
    }
    return flag;
}

int main() {
    initializeGraphs();
    if (isGraphBipartite(graphFirst)) {
        cout << "graphFirst is Bipartite" << endl;
    } else {
        cout << "graphfirst is not Bipartite" << endl;
    }
    memset(visited, false, sizeof visited);
    memset(color, 0, sizeof color);  // note: must clear sizeof color bytes, not sizeof visited
    if (isGraphBipartite(graphSecond)) {
        cout << "graphSecond is Bipartite" << endl;
    } else {
        cout << "graphSecond is not Bipartite" << endl;
    }
    return 0;
}
```

Output:

```
graphfirst is not Bipartite
graphSecond is Bipartite
```

Time complexity: O(V + E), where V is the number of vertices and E is the number of edges in the graph. Traversing Graph (G) using DFS (Depth-First Search), we visit each vertex only once, and we visit each edge at most twice (once for each of its endpoints). Therefore, the total time complexity of the algorithm is proportional to the sum of the number of vertices and edges in the graph.
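The same two-colouring idea is compact in other languages as well; here is a minimal sketch in Python using iterative BFS instead of the recursive DFS above (the adjacency-dict representation and function name are illustrative, not from the article):

```python
from collections import deque

def is_bipartite(adj):
    """Return True if the undirected graph (dict: node -> neighbour list) is bipartite."""
    colour = {}
    for start in adj:
        if start in colour:
            continue  # this component was already coloured
        colour[start] = 0
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for v in adj[node]:
                if v not in colour:
                    colour[v] = 1 - colour[node]  # give the opposite colour
                    queue.append(v)
                elif colour[v] == colour[node]:
                    return False  # odd-length cycle found
    return True
```

BFS avoids deep recursion on large graphs, but the invariant is identical: neighbours must always receive opposite colours.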
CAMB: Multiple Source Windows Lead to Different Cl's

I have been trying to fix up my code and I noticed that when you have multiple windows when calculating the Cl's, the values of the Cl's will be different. Is there any reason why this may occur? I have posted the code below; I am not sure if something is wrong with the setup. The Cl that I get for a bin, using the first window function, will depend on the bin in the other window function.

```python
pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.122, mnu=0.06, omk=0, tau=0.06)
pars.InitPower.set_params(As=2e-9, ns=0.965, r=0)
pars.set_for_lmax(700, lens_potential_accuracy=0)
pars.Want_CMB = False
pars.SourceTerms.counts_density = True
pars.SourceTerms.counts_redshift = True
pars.SourceTerms.counts_lensing = False
pars.SourceTerms.counts_velocity = False
pars.SourceTerms.counts_radial = False
pars.SourceTerms.counts_timedelay = False
pars.SourceTerms.counts_ISW = False
pars.SourceTerms.counts_potential = False

rs = np.arange(0, 3, 0.0001)
WW1 = top_hat(rs, z1-sep, z1+sep)
WW2 = top_hat(rs, z2-sep, z2+sep)
pars.SourceWindows = [SplinedSourceWindow(bias=1.5, z=rs, W=WW1),
                      SplinedSourceWindow(bias=1.5, z=rs, W=WW2)]
```

The tophat function:

```python
def top_hat(x, a, b):
    """Top-hat function that is 1 between a and b (inclusive) and 0 elsewhere.

    - x: Input array or scalar value.
    - a: Start of the interval where the function is 1.
    - b: End of the interval where the function is 1.

    Returns an array or scalar value with 1 inside the interval [a, b] and 0 outside.
    """
    return np.where((x >= a) & (x <= b), 1, 0)
```

Re: CAMB: Multiple Source Windows Lead to Different Cl's

It uses the smallest step size needed by any window, so adding another window can affect the precision of the first. (Note that a top hat is not supported - windows should be differentiable.)

Re: CAMB: Multiple Source Windows Lead to Different Cl's

That makes sense. I'm assuming it affects the accuracy of the "W1xW2" result as well?
I'll look into a better function than my top hat function, thank you!
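A differentiable replacement for the hard-edged top hat can be built from smooth step functions; a minimal sketch (the tanh form and the `width` edge-softness parameter are illustrative choices, not a CAMB requirement):

```python
import numpy as np

def smooth_top_hat(x, a, b, width=0.01):
    # Product of two tanh-based smooth steps: rises near a, falls near b.
    # As width -> 0 this approaches the hard top hat while staying
    # differentiable everywhere, which is what the window integration needs.
    return 0.25 * (1 + np.tanh((x - a) / width)) * (1 + np.tanh((b - x) / width))
```

With `width` much smaller than `b - a`, the function is essentially 1 inside the interval and 0 outside, but with smooth edges.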
Engineering Physics Major (B.S.) The goal of the UNH BSEP program is to produce broadly-trained engineers who can provide solutions to today’s challenging problems in support of a technologically evolving society. The core of the program is based on interdisciplinary training, complemented with a deeper understanding of the physical principles needed to support careers in engineering, engineering research or, perhaps, further training in systems engineering. The program balances depth and breadth in skill development; flexibility and functionality are what drive the program in the sense that 1) the particular focus is based on the student’s interests, and 2) the breadth of the course selection is guided by the post-graduation goals of the student (e.g., employment versus graduate school). Degree Requirements Minimum Credit Requirement: 128 credits Minimum Residency Requirement: 32 credits must be taken at UNH Minimum GPA: 2.0 required for conferral* Core Curriculum Required: Discovery & Writing Program Requirements Foreign Language Requirement: No All Major, Option and Elective Requirements as indicated. *Major GPA requirements as indicated. Major Requirements A student must have a minimum grade of C in each 400- or 500-level courses that are part of the core requirements and an overall grade-point average of 2.33 in these courses in order to continue in the program. 
Course List

Core Requirements for all tracks:
- CHEM 403 General Chemistry I (4), or CHEM 405 Chemical Principles for Engineers
- CS 410P Introduction to Scientific Programming/Python (4), or IAM 550 Introduction to Engineering Computing
- MATH 425 Calculus I (4)
- MATH 426 Calculus II (4)
- Choose one (8-12):
  - MATH 527 & MATH 528 Differential Equations with Linear Algebra, and Multidimensional Calculus
  - MATH 525 & MATH 526 Linearity I and Linearity II
- PHYS 400 Physics Seminar I (1)
- PHYS 407 General Physics I (4)
- PHYS 408 General Physics II (4)
- PHYS 505 General Physics III (3)
- PHYS 506 General Physics III Laboratory (1)
- PHYS 508 Thermodynamics and Statistical Mechanics (4)
- PHYS 615 Classical Mechanics and Mathematical Physics I (4)
- PHYS 616 Classical Mechanics and Mathematical Physics II (4)
- PHYS 703 Electricity and Magnetism I (4)
- Capstone: PHYS 797 Senior Design Project (4)

Additional courses for Aerospace Track:
- ECE 541 Electric Circuits (4)
- ECE 548 Electronic Design I (4)
- ECE 651 Electronic Design II (4)
- ME 608 Fluid Dynamics (3)
- ME 743 Satellite Systems, Dynamics, and Control, or PHYS 712 Space Plasma Physics (3-4)
- Electives in major, choose four from the following:
  - ME 441 Introduction to Engineering Design and Solid Modeling
  - ME 603 Heat Transfer
  - ME 646 Experimental Measurement and Data Analysis
  - ME 670 Systems Modeling, Simulation, and Control
  - ME 705 Thermal System Analysis and Design
  - ME 707 Analytical Fluid Dynamics
  - ME 743 Satellite Systems, Dynamics, and Control
  - ME 747 Experimental Measurement and Modeling of Complex Systems
  - ME 786 Introduction to Finite Element Analysis
  - PHYS 708 Optics
  - PHYS 712 Space Plasma Physics

Additional Courses for the Engineering Research track:
- ECE 541 Electric Circuits (4)
- ECE 543 Introduction to Digital Systems (4)
- ECE 548 Electronic Design I (4)
- ECE 651 Electronic Design II (4)
- PHYS 704 Electricity and Magnetism II (4), or PHYS 708 Optics
- Electives in major, choose four from the following (12):
  - CS 750 Machine Learning
  - ECE 633 Signals and Systems I
  - ECE 634 Signals and Systems II
  - ECE 647 Random Processes and Signals in Engineering
  - MATH 644 Statistics for Engineers and Scientists
  - ME 441 Introduction to Engineering Design and Solid Modeling
  - ME 561 Introduction to Materials Science
  - ME 608 Fluid Dynamics
  - ME 670 Systems Modeling, Simulation, and Control
  - ME 706 Renewable Energy: Physical and Engineering Principles
  - ME 712 Waves in Fluids
  - ME 743 Satellite Systems, Dynamics, and Control
  - PHYS 701 Quantum Mechanics I
  - PHYS 702 Quantum Mechanics II
  - PHYS 704 Electricity and Magnetism II
  - PHYS 705 Experimental Physics II
  - PHYS 708 Optics

Sample Degree Plan

Aerospace Track Plan of Study

First Year, Fall (17 credits):
- PHYS 407H Honors/General Physics I (4)
- MATH 425H Honors/Calculus I (4)
- PHYS 400 Physics Seminar I (1)
- ENGL 401 First-Year Writing (4)
- Discovery Course (4)

First Year, Spring (16 credits):
- PHYS 408H Honors/General Physics II (4)
- MATH 426H Honors/Calculus II (4)
- CHEM 405 Chemical Principles for Engineers (4)
- IAM 550 Introduction to Engineering Computing (4)

Second Year, Fall (16 credits):
- PHYS 505 General Physics III (3)
- PHYS 506 General Physics III Laboratory (1)
- PHYS 508 Thermodynamics and Statistical Mechanics (4)
- MATH 528 Multidimensional Calculus (4)
- Discovery Course (4)

Second Year, Spring (16 credits):
- PHYS 615 Classical Mechanics and Mathematical Physics I (4)
- MATH 527 Differential Equations with Linear Algebra (4)
- Elective in Major (4)
- Discovery Course (4)

Third Year, Fall (15 credits):
- PHYS 616 Classical Mechanics and Mathematical Physics II (4)
- ME 608 Fluid Dynamics (3)
- ECE 541 Electric Circuits (4)
- Discovery Course (4)

Third Year, Spring (15-16 credits):
- PHYS 703 Electricity and Magnetism I (4)
- ECE 548 Electronic Design I (4)
- Elective in Major (3-4)
- Discovery Course (4)

Fourth Year, Fall (17-18 credits):
- PHYS 797 Senior Design Project (2)
- ECE 651 Electronic Design II (4)
- Elective in Major (3-4)
- Discovery Course (4)
- Free Elective (4)

Fourth Year, Spring (17-18 credits):
- PHYS 797 Senior Design Project (2)
- ME 743 Satellite Systems, Dynamics, and Control, or PHYS 712 Space Plasma Physics (3-4)
- Elective in Major (4)
- Discovery Course (4)
- Free Elective (4)

Total Credits: 129-132

Engineering Research Track Plan of Study

First Year, Fall (17 credits):
- PHYS 407H Honors/General Physics I (4)
- MATH 425H Honors/Calculus I (4)
- CHEM 405 Chemical Principles for Engineers (4)
- PHYS 400 Physics Seminar I (1)
- Discovery Course (4)

First Year, Spring (16 credits):
- PHYS 408H Honors/General Physics II (4)
- MATH 426H Honors/Calculus II (4)
- IAM 550 Introduction to Engineering Computing (4)
- ENGL 401 First-Year Writing (4)

Second Year, Fall (16 credits):
- PHYS 505 General Physics III (3)
- PHYS 506 General Physics III Laboratory (1)
- MATH 528 Multidimensional Calculus (4)
- ECE 541 Electric Circuits (4)
- Discovery Course (4)

Second Year, Spring (16 credits):
- PHYS 615 Classical Mechanics and Mathematical Physics I (4)
- MATH 527 Differential Equations with Linear Algebra (4)
- ECE 548 Electronic Design I (4)
- Discovery Course (4)

Third Year, Fall (16 credits):
- PHYS 508 Thermodynamics and Statistical Mechanics (4)
- ECE 651 Electronic Design II (4)
- PHYS 616 Classical Mechanics and Mathematical Physics II (4)
- Discovery Course (4)

Third Year, Spring (15-16 credits):
- PHYS 703 Electricity and Magnetism I (4)
- ECE 543 Introduction to Digital Systems (4)
- Elective in Major (3-4)
- Discovery Course (4)

Fourth Year, Fall (17-18 credits):
- PHYS 797 Senior Design Project (2)
- PHYS 704 Electricity and Magnetism II, or PHYS 708 Optics (4)
- Elective in Major (3-4)
- Free Elective (4)
- Discovery Course (4)

Fourth Year, Spring (16-18 credits):
- PHYS 797 Senior Design Project (2)
- Elective in Major (3-4)
- Elective in Major (3-4)
- Free Elective (4)
- Discovery Course (4)

Total Credits: 129-133

Students are expected to achieve the outcomes below upon graduation.
• Students will master the fundamentals of a broad set of physics subjects (e.g., mechanics, electricity and magnetism, quantum mechanics, thermodynamics, optics).
• Students will have a solid understanding of mathematics (e.g., calculus, differential equations, linear algebra).
• Students will be able to solve physics and engineering problems using computational methods.
• Students will have excellent knowledge of the principles and practice of their chosen engineering disciplines.
• Students will be able to use physical principles to design systems, apparatuses, experiments or models; collect and analyze data; and develop conclusions. • Students will be able to identify and solve complex engineering and physics problems by applying physical principles and mathematical tools. • Students will be able to communicate technical content effectively to a range of audiences.
Frontiers | Intracellular Dynamics in Cuneate Nucleus Neurons Support Self-Stabilizing Learning of Generalizable Tactile Representations

1. The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy
2. Section for Neurobiology, Department of Experimental Medical Sciences, Biomedical Center, Lund University, Lund, Sweden

How the brain represents the external world is an unresolved issue for neuroscience, which could provide fundamental insights into brain circuitry operation and solutions for artificial intelligence and robotics. The neurons of the cuneate nucleus form the first interface for the sense of touch in the brain. They were previously shown to have a highly skewed synaptic weight distribution for tactile primary afferent inputs, suggesting that their connectivity is strongly shaped by learning. Here we first characterized the intracellular dynamics and inhibitory synaptic inputs of cuneate neurons in vivo and modeled their integration of tactile sensory inputs. We then replaced the tactile inputs with input from a sensorized bionic fingertip and modeled the learning-induced representations that emerged from varied sensory experiences. The model reproduced both the intrinsic membrane dynamics and the synaptic weight distributions observed in cuneate neurons in vivo. In terms of higher level model properties, individual cuneate neurons learnt to identify specific sets of correlated sensors, which at the population level resulted in a decomposition of the sensor space into its recurring high-dimensional components. Such vector components could be applied to identify both past and novel sensory experiences and likely correspond to the fundamental haptic input features these neurons encode in vivo. In addition, we show that the cuneate learning architecture is robust to a wide range of intrinsic parameter settings due to the neuronal intrinsic dynamics.
Therefore, the architecture is a potentially generic solution for forming versatile representations of the external world in different sensor systems. The problem of how to represent a complex external world to support non-trivial versatility of action has a deadening presence both in neuroscience (Loeb and Fishel, 2014; Spanne and Jorntell, 2015), robotics and artificial intelligence (AI) systems (Service, 2014). For neuroscience, the issue is closely associated with the understanding of the brain—without knowledge of how information of the world is represented in neuronal circuitry, it is difficult to decipher its functional mechanisms. An important related issue is how biological systems can generalize previously learnt sensory experiences to apply them to the interpretation of novel contexts—lack of generalization capability is a limitation in classical pattern recognition systems (Spanne and Jorntell, 2015), to which AI and DNN systems has an ancestry, and likely an important reason why such systems can be comparatively easily fooled (Nguyen et al., 2015). As versatility of interaction with the external world is a hallmark of brain function, an important question is how that versatility can be supported. The skin is an interface that directly interacts with the physical world, using 10,000's of tactile sensors (Johansson and Flanagan, 2009). Current models of the organization of tactile neural systems to a large extent build on assumptions that the brain needs to identify edges, shapes or other physical parameters that human external observers may deem important to represent (Pei et al., 2011; Sathian, 2016). Using sparse-coding interpretations of neural coding combined with grandmother neuron-like theory (Barlow, 1972) such a system can be expected to work in the classification of a large range of tactile experiences. 
However, classifying systems based on pattern recognition can suffer from problems with generalization, i.e., where learning from one situation can be scaled or adapted to apply to new ones (Spanne and Jorntell, 2015). An alternative mode of representation of tactile information would be one that automatically arises out of experienced interactions through learning. Indeed, during early development, mammalians generate seemingly random movements and interactions with the environment, which would engage a wide set of sensors from large parts of the body (Shao et al., 2016) and play a crucial role for development (Forssberg et al., 1995; Petersson et al., 2003; Blumberg et al., 2013). Such interactions result in spatiotemporal patterns of skin sensor activation, that depend on, and therefore abstract, the properties of the objects we touch, the laws of physics describing the interactions that can take place (Hayward, 2011), the types of movement we make, the mechanical effects inducible in the skin and how the tactile sensors are tuned to them (Jörntell et al., 2014). The available set of sensors will respond across these conditions and their activations will have specific relationships depending on the condition. Hence, rather than viewing brain integration of tactile sensors as occurring in a pixel-by-pixel fashion, we here consider the often overlooked fact that individual neurons integrate information from several sensors. It follows that what is being learnt is likely to involve the relationships between the activations of these sensors. To learn such relationships is here hypothesized to be an important component of being able to form representations of the external world that is applicable or generalizable to novel experiences. As tactile inputs are first processed in the cuneate nucleus, before they reach the cortex, it is likely that the basic constraints on the brain's representation of the tactile part of the physical world are formed here. 
In vivo whole cell recordings from these neurons indicate that their synaptic weights for tactile inputs are highly skewed, which indicates that they are highly learnt (Bengtsson et al., 2013). The important question of why that learning occurs recently found a possible answer, when the cuneate neurons were found to code for unique combinations of the fundamental haptic input features (Jörntell et al., 2014), which tentatively correspond to the dimensions or the vector decompositions of the contact mechanics effects arising between two objects (Hayward, 2011). Here, we emulated the learning that would arise from a set of varied sensory experiences given the biological constraints provided by the recording data we obtained on the intrinsic membrane dynamics and the synaptic inputs of the neurons of the cuneate nucleus. We find that the main effect of the cuneate learning is a utility-based decomposition of the tactile sensory space into vector components that made it possible to generalize the learning to novel tactile experiences. This is a different form of representation of sensory input data than a direct identification of sensory input patterns, which is a potential explanation for the large versatility in the identification of sensor input data in biological systems.

All biological data was obtained under acute conditions identical to those of a previous study on the cuneate nucleus in vivo (Bengtsson et al., 2013). Briefly, adult cats of both sexes were initially prepared under propofol anesthesia and subsequently decerebrated to allow cuneate neuronal recordings under non-anesthetized conditions. This study was carried out in accordance with the recommendations of the Malmö-Lund Animal Research Ethics Committee. All experimental procedures were approved in advance by the Malmö/Lund Animal Research Ethics Committee (permit number and approval-ID: …).
This section contains four main parts. First, we describe how recordings were made from projection neurons and inhibitory interneurons of the cuneate nucleus using the in vivo whole cell patch clamp technique. Secondly, the recorded characteristics of the cuneate projection neurons and the inhibition from the interneurons were approximated by constructing a model of individual cuneate projection neurons (CNs) and their afferent network. Thirdly, the responses of a population of skin sensors that provided synaptic input to the CN network across a range of different real world stimuli were generated by a bionic fingertip. Fourthly, the CN synaptic learning process for skin sensor input was inferred from generic neuronal learning mechanisms in vivo and our estimation of intracellular calcium responses. The theoretical basis for the construction of the CN model and its learning process is also provided for each step.

Biological Data

Briefly, under initial deep propofol anesthesia, adult cats were prepared for acute in vivo recordings with artificial respiration, strict monitoring of blood pressure, end-expiratory carbon dioxide and temperature. Thereafter, the animals were decerebrated and the anesthesia discontinued. To monitor the level of anesthesia before decerebration, we continuously measured the blood pressure and verified the absence of withdrawal reflexes to noxious stimulation. To monitor the state of the preparation after the decerebration, we in addition made EEG recordings from intact parts of the neocortex. EEG recordings were characterized by a 1–4 Hz oscillatory activity that was periodically interrupted by large-amplitude 7–14 Hz spindle oscillations lasting for 0.5 s or more. Such EEG patterns are associated with deep stages of sleep (Niedermeyer and Da Silva, 2005). The EEG activity and the blood pressure remained stable, also on noxious stimulation, throughout the experiments.
Mounting in a stereotaxic frame, drainage of cerebrospinal fluid, pneumothorax and clamping the spinal processes of a few cervical and lumbar vertebral bodies served to increase the mechanical stability of the preparation. In vivo whole cell patch clamp recordings were made in the exposed cuneate nucleus. The recorded neurons were identified with respect to the location of their excitatory receptive field on the skin ( Bengtsson et al., 2013). Stimulation in relation to the location of this receptive field was done using force-time controlled touch as well as by local electrical activation of skin afferents. Current injections to characterize the intrinsic membrane responses were made through the recording pipette. All intracellular signals were analyzed off-line using custom-made software. Identification of unitary IPSPs was made using a tailored template matching routine with manually constructed templates. The templates consisted of a time series of voltage ranges into which the signal had to fit to be counted. The template was adjusted to allow the detection of gradually smaller events until it started to include events that on visual inspection did not resemble the event time course of IPSPs evoked by skin stimulation (Bengtsson et al., 2013). Hence, the similarity of time course between evoked and unitary IPSPs was taken as an indication that they were derived from the same type of inhibitory synapses. A difference with our previous analysis of EPSPs (Bengtsson et al., 2013), was that the voltage deflections of the unitary IPSPs were much smaller. Thereby, the noise of the recording system prevented the detection of IPSPs of amplitudes below a certain level. Therefore, our reported values of the median peak unitary IPSP amplitudes for each neuron are most likely overestimates. 
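The template-matching routine described above (a time series of voltage ranges into which the signal must fit) can be sketched numerically; a minimal illustration (the window length, voltage ranges, and example trace are invented for demonstration, not the recording parameters used in the study):

```python
import numpy as np

def matches_template(segment, lower, upper):
    # A segment counts as an event only if every sample falls inside
    # the voltage range [lower[i], upper[i]] defined by the template.
    return np.all((segment >= lower) & (segment <= upper))

def detect_events(signal, lower, upper):
    # Slide the template across the trace and collect the start indices
    # of segments that fit entirely within the template's ranges.
    n = len(lower)
    return [i for i in range(len(signal) - n + 1)
            if matches_template(signal[i:i + n], lower, upper)]
```

Tightening or widening the per-sample ranges corresponds to adjusting the template until progressively smaller events are detected, as described in the text.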
For population data, we report the mean and standard deviation of the median IPSP amplitudes recorded for at least 100 spontaneous IPSPs in each of the 15 recorded cuneate projection neurons.

Modeling Design

Based on our present biological observations, as well as previously published data on cuneate neurons (Bengtsson et al., 2013) and generic neuronal physiology and plasticity, we aimed to simulate how these parameters and processes could support learning when brought together in a functional system. Our simulation consisted of two main components: (a) A bionic fingertip covered with silicon skin and equipped with tactile sensors that transduced local mechanical skin strain patterns into spatiotemporal patterns of sensor spike output data in response to physical interactions with external objects/surfaces. This was the counterpart of the tactile primary afferents (PAs) in biological systems. The bionic fingertip provided the important feature of a system of PA sensors where there is a consistent relationship between the activation of the different sensors across conditions or tactile experiences. As discussed in the introduction, rather than viewing brain integration of tactile sensors as occurring in a pixel-by-pixel fashion, we here consider the often overlooked fact that each cuneate nucleus neuron integrates information from several sensors. It follows that what is being learnt is likely to involve the relationships between the activations of these sensors. To learn such relationships is here hypothesized to be an important component of being able to form representations of the external world that is applicable to novel experiences. (b) A simulated cuneate nucleus neuronal network containing a set of cuneate neurons receiving synaptic input from those PAs. Each model cuneate projection neuron (CN) was simulated individually, capturing the intrinsic responsiveness observed in vivo and the generic subcellular processes involved in synaptic plasticity.
The accumulative effects of the synaptic plasticity during the learning process were evaluated by the changes in synaptic weights of the PA inputs in each CN. In the account that follows, we present the design of the simulation starting with the low level neuronal properties, the dynamic model describing the intrinsic responsiveness of the cuneate neurons, synaptic weight initialization, the sensory input data and the bionic fingertip used to generate it, the learning process and the evaluation of the end effect of the learning process. The design and performance of the bionic fingertip has already been reported in previous papers, therefore we merely present the specific adaptations made for the present simulation. The modeling software is available on: https://figshare.com/projects/Artificial_Cuneate_Neuron_Model/34886.

Network Connectivity

Based on biological data of cuneate neurons (Bengtsson et al., 2013; Figures 1, 2), our network model (Figure 3A) was comprised of sensory afferents projecting as excitatory synapses on each individual model CN. In addition to the excitatory synapses, and based on our biological observations (Figure 1), the sensory afferents also drove local inhibitory interneurons that provided inhibitory synapses to the CNs (Figure 3A). The model CN had equal numbers of excitatory and inhibitory synapses. The synaptic weights of these synapses were given initial starting weights (or seed weights, see below) that were gradually learnt on a trial-by-trial basis.

FIGURE 1
Figure 1. Inhibitory synaptic inputs and intrinsic responses of cuneate neurons in vivo. (A) Schematic of cuneate nucleus neuronal circuitry. Lines from the left symbolize axons originating from skin sensors. Their synaptic weights are indicated by the size of the triangles or as a stop ending for near-zero weights. Inhibitory interneurons (red for highlighted neuron, the others are gray) send axons that make inhibitory synapses on the projection neuron (black triangles).
In the middle is a cuneate projection neuron, which sends its axon to the thalamus or the cerebellum. Inset: 3D illustration of the lower brainstem and the location of the cuneate nucleus (green). 3D scale bar indicate distances (1 mm per division). Yellow is gracile nucleus, small red volume is external cuneate nucleus. (B) Response of a projection neuron in vivo to a light skin touch within its receptive area. Arrows indicate putative spontaneous IPSPs, which are so small they are barely visible at this magnification of the voltage trace. (C) At a different magnification, examples of spontaneous IPSPs from one cuneate projection neuron are superimposed and averaged to the left. The peak amplitudes of 500 consecutive spontaneous IPSPs are shown in the box plot at right. (D) The gradual recruitment of summated, or compound, IPSPs with increased electrical stimulation intensity to a skin area adjacent to the excitatory receptive skin area of a sample projection neuron (black curve). Gray curve illustrates corresponding recruitment of unitary EPSPs (Bengtsson et al., 2013). Inset traces, average evoked IPSPs (averages of 20-50 repetitions) at different stimulation intensities. (E) Spontaneous activity of a projection neuron with zoomed-in, superimposed spikes to the right. Red trace indicates a case of a spontaneous single spike event, whereas most spontaneous firing occurred in doublets with an associated difference in voltage trajectory after the spike. Bottom, examples of rebound responses elicited by release from hyperpolarizing current injections (−400 pA for 200 ms). (F) Comparison between the rebound responses of a cuneate neuron in vivo and the CN FIGURE 2 Figure 2. Spike responses in a sample interneuron of the cuneate nucleus. (A) Recordings were identified as projection neurons (black traces) and interneurons (red trace) based on their characteristic spike shapes. (B) Example response of an interneuron evoked by a brief touch to the skin. 
(C) Response of a sample interneuron to electrical skin stimulation. Black traces indicate evoked EPSPs that did not elicit a spike whereas red traces indicate those that did. FIGURE 3 Figure 3. Functional structure of the CN model. (A) Structure of the CN network model. Colored lines indicate PA connections between the physical sensors of the bionic fingertip and the neurons of the model, blue triangles indicate variable weight synapses. Only cuneate projection neurons (CN1–CNn) were simulated individually, where the number identifies a specific initial synaptic weight configuration (see Figure 5B). PA inputs were also provided to the CNs as a lump inhibitory synaptic input via an interneuron (red). (B) Graphic representation of the functional structure of the CN model. Components indicated with a dashed outline were the only elements with fixed parameters, whereas parameters with solid outlines were adjusted in the later part of this paper to simulate CNs with different intrinsic properties. (C) Subcellular factors involved in the synaptic plasticity of the CN, with variable weight PA synapses (blue) and inhibitory synapses (red). The CN neuron is divided into a main compartment, with reactive LVA and CAP conductances and synaptic spaces containing VGCCs (Higley and Sabatini, 2012) and a variable number of AMPAr:s. AMPAr, excitatory glutamate receptors; VGCCs, voltage gated calcium channels; GABAa, inhibitory synaptic receptors; LVA, low–threshold voltage activated calcium channels; CAP, calcium-dependent potassium channels. (D) The calcium activity of the main compartment varied over time due to synaptic input and the responses they elicited via the intermediate dynamics model of the CN. The level of the calcium activity, in relationship to the learning threshold, defined when the cell was in the positive zone (i.e., potentiation mode) or in the negative zone (depression mode). 
(E) The local calcium activity threshold used to define the eligibility for plasticity for each PA synapse. (F) The net learning drive for the PA synapse varied depending on the temporal correlation between the zero-offset main compartment calcium activity and the local, or synapse space, calcium activity. (G) The activity of another PA synapse under the same stimulus presentation and (H) its net learning drive. (I) The synaptic weight compensation constant that was multiplied with the integral net learning drive to calculate the final weight change per stimulus presentation. For (D,E,G), note that the y-axes indicate the relative magnitude of calcium signals for each compartment and not an actual estimation of the calcium concentration. The simulated network model had 80 sensory input channels (PAs) that innervated each individual CN. This number of afferents was less than biological estimates, which suggest on the order of 1,000s of PAs per CN (Bengtsson et al., 2013). However, the lower number of simulated afferents is still realistic because, out of these ~1,000 synapses, many are likely to represent anatomically present synaptic inputs from PA axons mediating input from several different fingers, which in the adult animal mostly provide “silent” synaptic inputs (Weinberg et al., 1990; Bengtsson et al., 2013). In contrast, our inputs were generated from the tip of one finger alone. Synaptic inhibition was simulated as being provided by 80 independent interneurons that were each directly activated by one out of the 80 PA afferents available in our simulated system. Because all unitary inhibitory synaptic inputs were found to be of comparable, low weights in the biological cuneate neuron (Figure 1), inhibitory synapses were here simplified to one lump inhibitory synapse per CN (total inhibitory synaptic weight, Figure 3).
Hence, interneurons were not explicitly simulated; instead, the spiking activity of each PA afferent was fed directly to an inhibitory synapse on the CN.

Neuron Model

The model cuneate projection neuron (CN) was implemented as a conductance-based Exponential Integrate and Fire (EIF) model (Fourcaud-Trocmé et al., 2003), which can recreate the fast dynamics (~1 ms timescale) of neuronal spike generation. In addition to the basic EIF model, voltage-sensitive calcium channels and calcium-dependent potassium channels were also modeled in order to recreate the intermediate cuneate neuron dynamics (~10 ms timescale) observed in vivo (Figures 1E,F). The complete dynamics of the CN membrane potential are given by:

$C_m \frac{dV_m}{dt} = I_L + I_{spike} + I_{ion} + I_{ext} + I_{syn} \quad (1)$

where $C_m$ is the membrane capacitance, $I_L = -\bar{g}_L (V_m - E_L)$ is the leak current, $I_{spike}$ is the spike current (fast dynamics), $I_{ion}$ is the ion channel current (intermediate dynamics), $I_{syn}$ is the synaptic input current, and $I_{ext}$ denotes external injected currents that were used to evaluate the intrinsic CN neuron responsiveness to current step commands (Figure 1F). The model values for these and other parameters are given in Table 2. The membrane resistance and time constant/capacitance were within the range of values recorded in vivo (Bengtsson et al., 2013), whereas other intrinsic parameter values were chosen through an optimization process (see below).

Neuron Model-Fast Dynamics

The spike current ($I_{spike}$) is generated using a basic EIF model (Equation 2) to achieve the fast dynamics and recreate the initiation of the action potential (Ostojic and Brunel, 2011).
$I_{spike} = \bar{g}_L \Delta_T \exp\left(\frac{V_m - V_t}{\Delta_T}\right) \quad (2)$

As the depolarization reaches a threshold speed, an action potential is said to be generated, and the membrane potential is immediately reset by a fast afterhyperpolarization (AHP) to the membrane potential at which the spike was initiated, i.e., mimicking the fast spike AHP of the recorded cuneate projection neurons (see Figure 2A).

Neuron Model-Intermediate Dynamics

The intermediate dynamics include currents from additional ion channels ($I_{ion}$, Equation 1) that are not directly involved in forming action potentials, but have more of a modulating role in the episodes leading up to the generation of an action potential and the episodes between action potentials when the synaptic input activity is high. The intermediate dynamics of the model were optimized to mimic the reactive conductances that could generate the types of responses to current injections we recorded in the cuneate neurons in vivo (Figures 1E,F). Such responses, i.e., post-inhibitory rebound responses and a tendency to generate bursts of action potential firing, have been observed in other neuron types (Llinás and Jahnsen, 1982; Huguenard, 1996; Molineux et al., 2008) and have at least partly been attributed to low-threshold voltage-gated calcium channels (LVA) and calcium-activated potassium channels (CAP). Hence, $I_{ion}$ can be divided according to:

$I_{ion} = I_{Ca} + I_K \quad (3)$

where $I_{Ca}$ is the current through the LVA channels or equivalent channels and $I_K$ the current through the CAP channels. These are modeled as two separate pools of ion channels (Saarinen et al., 2008) according to:

$I_K = -\bar{g}_K \, x_{K_{Ca}}^4 \, x_{K_{V_m}}^4 (V_m - E_K) \quad (4)$

where $\bar{g}_{Ca}$ and $\bar{g}_K$ are the maximum conductances of the respective channels, $E_{Ca}$ and $E_K$ are the reversal potentials of the respective ions, and $x_{K_{Ca}}$, $x_{K_{V_m}}$, $x_{Ca,a}$, $x_{Ca,i}$ are the activity states of the channels (Saarinen et al., 2008).
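The fast dynamics of Equations 1-2 can be sketched in a few lines of code. This is a minimal illustration rather than the paper's implementation: the parameter values below (capacitance, leak conductance, threshold, reset) are assumed round numbers, not the optimized Table 2 values.

```python
import numpy as np

def simulate_eif(i_ext_pa, t_ms=100.0, dt=0.05):
    """Minimal EIF sketch of Eqs. 1-2; all parameter values are illustrative."""
    C_m, g_L, E_L = 250.0, 10.0, -70.0    # pF, nS, mV (assumed)
    V_t, Delta_T = -50.0, 2.0             # threshold and slope factor, mV (assumed)
    V_detect, V_reset = 0.0, -55.0        # numerical spike ceiling and reset (assumed)
    v = E_L
    spikes, trace = [], []
    for k in range(int(t_ms / dt)):
        I_L = -g_L * (v - E_L)                                  # leak current, pA
        I_spike = g_L * Delta_T * np.exp((v - V_t) / Delta_T)   # EIF spike current, pA
        v += dt * (I_L + I_spike + i_ext_pa) / C_m              # pA/pF = mV/ms
        if v >= V_detect:        # spike detected: register it and reset (fast AHP)
            spikes.append(k * dt)
            v = V_reset
        trace.append(v)
    return spikes, np.array(trace)
```

With a suprathreshold current step (e.g., 400 pA) this model fires repetitively, whereas below rheobase it settles to a subthreshold steady state, mirroring the step-current protocol of Figure 1F.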
The activation states of the LVA channels were modeled using differential equations of the following form:

$\frac{dx_{Ca,i}}{dt} = \left(\bar{x}_{Ca,i}(V_m) - x_{Ca,i}\right)/\tau_{Ca,i} \quad (5)$

where $\tau_{Ca,i}$ is the time constant at which the state moves toward the voltage-dependent equilibrium described by $\bar{x}_{Ca,i}(V_m)$. These equilibrium functions are two-parameter sigmoids of the form:

$\bar{x}_{Ca,i}(V_m) = 1 - \left(1 + \exp\frac{p_{Ca,i,1} - V_m}{p_{Ca,i,2}}\right)^{-1} \quad (6)$

Since the calcium-sensitive subunits of the CAP channels are located on the inner surface of the cell membrane, the intracellular concentration of calcium ([Ca^2+]) is modeled for small volumes rather than as an overall concentration within the cell. The calcium concentration within the cell will change both due to ion channels through which calcium ions enter to the inside of the membrane, and due to diffusion of ions into the remaining intracellular volume. From this line of reasoning, and in accordance with Saarinen et al. (2008), Equation 7 is constructed:

$\frac{d[Ca^{2+}]}{dt} = \ldots + \left([Ca^{2+}]_{rest} - [Ca^{2+}]\right)/\tau_{[Ca^{2+}]} \quad (7)$

The activation states of the CAP channels were modeled using differential equations of the following form:

$\frac{dx_{K_{Vm}}}{dt} = \left(\bar{x}_{K_{Vm}}(V_m) - x_{K_{Vm}}\right)/\tau_{K_{Vm}} \quad (8)$

where the two time constants ($\tau_{K_{Ca}}$ and $\tau_{K_{Vm}}$) indicate the rates at which the states move toward the voltage-dependent equilibrium described by $\bar{x}_{K_{Vm}}$ and the calcium-dependent equilibrium described by $\bar{x}_{K_{Ca}}$. These equilibrium functions are two-parameter sigmoids of the form:

$\bar{x}_{K_{Ca}}([Ca^{2+}]) = 1 - \left(1 + \exp\frac{p_{K_{Ca},1} - [Ca^{2+}]}{p_{K_{Ca},2}}\right)^{-1} \quad (9)$

The values for these parameters were chosen through an optimization procedure (see below) and are indicated in Table 2. Note that we observed a range of variance between cuneate neurons in vivo in terms of their intermediate dynamics. Our aim was to provide a simple model that could nevertheless capture the main principles of the rebound and burst responses that we demonstrated in our recorded cuneate neurons.
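Equations 5-9 share one computational pattern: a gating state that relaxes toward a two-parameter sigmoid equilibrium. A minimal sketch of that pattern, with illustrative p1/p2/tau values rather than the optimized Table 2 parameters:

```python
import numpy as np

def x_eq(vm, p1, p2):
    """Two-parameter sigmoid equilibrium (form of Eqs. 6 and 9)."""
    return 1.0 - 1.0 / (1.0 + np.exp((p1 - vm) / p2))

def relax_gate(x0, vm, p1, p2, tau, t_ms, dt=0.01):
    """First-order relaxation of a gating state toward x_eq (form of Eqs. 5 and 8),
    integrated with forward Euler at a fixed membrane potential."""
    x = x0
    for _ in range(int(t_ms / dt)):
        x += dt * (x_eq(vm, p1, p2) - x) / tau
    return x
```

Note that, as written, the sigmoid increases with hyperpolarization (it approaches 1 as $V_m$ decreases), which is consistent with an inactivation-type state that de-inactivates at hyperpolarized potentials, as needed for post-inhibitory rebound.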
Therefore, we do not expect that our parameter values have a direct correspondence with biophysical measures, and we do not expect to precisely capture the properties of the intermediate dynamic response of any single neuron (which likely would have required a larger set of parameters).

Neuron Model-Synaptic Inputs

The synaptic current ($I_{syn}$) through the cell membrane is the sum of the synaptic currents of the activated synapses. Each individual synapse (i) is activated by a primary afferent spike generated by a single sensor of the bionic fingertip. Once activated, this spike gives rise to a stereotyped time course of conductance injection at the synapse, which is described by:

$I_{syn} = g_{max} \sum_i w_{exc,i} \exp\left(-(t - t^*)/\tau\right)\left(E_{rev,exc} - V_m\right) + g_{max} w_{inh} \sum_i \exp\left(-(t - t^*)/\tau\right)\left(E_{rev,inh} - V_m\right) \quad (10)$

where $E_{rev}$ is the reversal potential of the type of synapse ($E_{rev,exc}$ or $E_{rev,inh}$ depending on whether the synapse is excitatory or inhibitory, see Table 2), $V_m$ is the membrane potential, and $t^*$ is the time of activation of the synapse. Each spike in each sensory afferent was converted into a synaptic conductance in the simulated neuron. For each synapse, the peak amplitude of the synaptic response was determined by the product of its individual weight ($w_{exc}$ or $w_{inh}$) and the overall maximum synaptic conductance constant ($g_{max}$, see Table 1). Through $g_{max}$, the relative synaptic conductance (i.e., the ratio of the synaptic and the leak conductances) could be adjusted to simulate cuneate neurons with different sizes. The time constant of the decay of the synaptic membrane potential responses, $\tau$, was 6.4 ms for both excitatory and inhibitory synapses, in accordance with the time courses recorded in cuneate nucleus neurons in vivo (Bengtsson et al., 2013; Figure 1). Note that as all the PA synapses of our system stayed well below 200 Hz of firing activity, we did not simulate any rate adaptation of the PA synapses, as such adaptation in vivo primarily occurs at intervals shorter than 5 ms (Jörntell and Ekerot, 2006).
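The per-synapse term of Equation 10 can be sketched as below. Note the assumption: the garbled source exponent is read as exp(−(t − t*)/τ), with τ = 6.4 ms as stated in the text; all numeric arguments in the example are illustrative.

```python
import numpy as np

def i_syn_single(t, spike_times, w, g_max, e_rev, v_m, tau=6.4):
    """Synaptic current from one synapse (cf. Eq. 10): each presynaptic spike at
    time t* contributes an exponentially decaying conductance with peak w * g_max."""
    g = sum(g_max * w * np.exp(-(t - ts) / tau) for ts in spike_times if ts <= t)
    return g * (e_rev - v_m)
```

Summing this term over all excitatory PA synapses plus the single lump inhibitory synapse gives the total $I_{syn}$ entering Equation 1.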
TABLE 1

Optimization of the Neuronal Calcium Dynamic Model Against Measured Data

The complete model was optimized in three steps. During each simulation, the model was fed with six 100 ms current steps with amplitudes of 100, 200, 300, 400, 600, and 800 pA. The results from the six trials were then optimized against intracellular recordings in which the cuneate neurons were fed with the same currents in vivo (Figure 1F). The first step was to manually choose suitable initial parameters, using previously known values for some of the parameters and estimating the others through trial-and-error simulations. The second step was to use the Nelder-Mead algorithm with an objective function in which the simulated traces were compared to the measured traces. The third step also used the Nelder-Mead algorithm, but with an objective function that measured the discrepancy between the action-potential timing in the simulated trace and the measured trace. As there is no guarantee that the simulated trace contains the same number of action potentials as the measured trace, discontinuities appear when the number of action potentials in the simulated trace changes. The use of the Nelder-Mead method is motivated by the fact that it is a commonly applied numerical method for finding the minimum or maximum of an objective function in a multidimensional space, in particular for nonlinear optimization problems. Equation 11 contains the complete objective function:

$e = \begin{cases} e_t + \omega_s \sum_{T,\hat{T} \in \mathbb{T}} \left(|\hat{T}| - |T|\right)^2 & \exists\, T,\hat{T} \in \mathbb{T}: |\hat{T}| \neq |T| \\ e_t & \forall\, T,\hat{T} \in \mathbb{T}: |\hat{T}| = |T| \end{cases} \quad (11)$

where $e$ is the objective function value, $\omega_s$ a weight used to punish any discrepancy in the number of action potentials, and $e_t$ the total time error between simulated and measured action potentials:

$e_t = \sum_{T,\hat{T} \in \mathbb{T}} \begin{cases} \sum_{t \in T} \min_{\hat{t} \in \hat{T}} |t - \hat{t}|^2 & |T| \geq |\hat{T}| \\ \sum_{\hat{t} \in \hat{T}} \min_{t \in T} |t - \hat{t}|^2 & |T| < |\hat{T}| \end{cases} \quad (12)$

where $\mathbb{T}$ is the set of all pairs $T, \hat{T}$, where $T$ and $\hat{T}$ are the sets of all measured and simulated action potentials, respectively, during a single current step.
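As a sketch, the objective of Equations 11-12 can be written down directly. The pairing of one measured and one simulated spike set per current step follows the text; how an empty spike set is handled (count penalty only, zero timing error) is our assumption.

```python
def e_t(pairs):
    """Total spike-timing error (Eq. 12). pairs: list of (measured, simulated)
    spike-time lists, one pair per current step."""
    total = 0.0
    for T, T_hat in pairs:
        longer, shorter = (T, T_hat) if len(T) >= len(T_hat) else (T_hat, T)
        if not shorter:          # no partner spikes: timing error undefined (assumed 0)
            continue
        total += sum(min((t - s) ** 2 for s in shorter) for t in longer)
    return total

def e_obj(pairs, omega_s=1.0):
    """Complete objective (Eq. 11): timing error, plus a spike-count penalty
    whenever any current step's spike counts disagree."""
    penalty = sum((len(T_hat) - len(T)) ** 2 for T, T_hat in pairs)
    return e_t(pairs) + (omega_s * penalty if penalty > 0 else 0.0)
```

This scalar can then be handed to a Nelder-Mead optimizer over the model parameters, e.g., `scipy.optimize.minimize(..., method='Nelder-Mead')`.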
The list of all optimized parameters used in the model is shown in Table 2.

TABLE 2

Subsynaptic Local Calcium Activity

In the learning process, excitatory synaptic weight learning was driven by the calcium activity in the main compartment of the cuneate neuron (i.e., as calculated by the calcium dynamic model above) in combination with the calcium activity in the individual synapses. An essential component of this combination is the intensity of activation of the individual synapses. According to the learning rule that we used (see below), a synapse that fires at high frequency with a high degree of correlation with the main compartment total calcium activity ($A_{tot}^{Ca^{2+}} = k[Ca^{2+}]$, where $k$ is an arbitrary constant that is here assumed to be 1) will be “rewarded” due to the strong correlation with the learning signal $A_{tot}^{Ca^{2+}}$. Conversely, strong firing in a synapse in relation to low or zero $A_{tot}^{Ca^{2+}}$ will be “punished” (i.e., similar to the classical BCM rule for Hebbian plasticity) (Bienenstock et al., 1982). Therefore, the local calcium time constants ($\tau_{Ca_{loc}^{2+}}$, Table 1), defining the temporal properties of the calcium signal in the local space underneath each individual synapse, play a major role in the learning process (the local postsynaptic calcium activity can be considered an analogy with the calcium activity in a local dendritic spine; Koester and Sakmann, 1998; Tigaret et al., 2016). The learning rule critically depends on this time constant. For instance, if the $\tau_{Ca_{loc}^{2+}}$ time constants are too high, the rewarding effects on synapses that have a high degree of correlation with $A_{tot}^{Ca^{2+}}$ will be lost. However, as there are no data on the relevant time constants in the cuneate neurons in vivo, we had to make assumptions about the value of this time constant.
In order to avoid pitfalls in relation to this assumption, we studied a range of time constants for $A_{loc}^{Ca^{2+}}$ (Table 1) during the CN learning process. For each synapse, each input spike at time $t^*$ contributes to the subsynaptic spine calcium concentration, the time course of which is given by the kernel (Mazzoni et al., 2008) of Equation 13:

$\Delta A_{loc}^{Ca^{2+}}(t) = \alpha \left[\exp\left(-\frac{t - \tau_l - t^*}{\tau_d}\right) - \exp\left(-\frac{t - \tau_l - t^*}{\tau_r}\right)\right] \quad (13)$

In its basic configuration, the parameters describing the relative local calcium concentration (or activity) are the decay time $\tau_d$ = 12.5 ms and the rise time $\tau_r$ = 4 ms, multiplied with a constant $\tau_1$ = 21 ms (a constant used to calculate the ratio) for $\tau_{Ca_{loc}^{2+}}$ = 100% (Table 1). $\tau_l$ is the latency time, which is zero in our case. The initial values chosen were derived from our assumption that the time course of the slow afterhyperpolarization of the cuneate neuron spike (Figure 1B), which is known to reflect the activation of calcium-dependent potassium channels, matched the time course of the calcium concentration induced in the synapse. This resulted in a somewhat faster but still comparable time course of spine calcium than reported for single spines in vitro (Tigaret et al., 2016), but the rise time of our subsynaptic calcium signal was slower than in previous simulations of the properties of calcium-dependent learning (Graupner and Brunel, 2012). As the temporal properties of this calcium signal clearly were assumptions with large uncertainty, we tested a wide range of different values for these parameters (Table 1). In order to achieve supralinearity in the local calcium activity (Wang et al., 2000), we used an approximative approach of subtracting an offset from the local calcium signal (an offset corresponding to 75% of the peak activity of a single pulse activation was subtracted), and the result was the value of the local calcium activity ($A_{loc}^{Ca^{2+}}$).
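The kernel of Equation 13 and the 75%-of-peak offset can be sketched as follows. Here α = 1 is an arbitrary scale, and clipping the offset activity at zero is our assumption; the time constants follow the text (τd = 12.5 ms, τr = 4 ms, τl = 0).

```python
import numpy as np

def ca_kernel(t, t_star, alpha=1.0, tau_d=12.5, tau_r=4.0, tau_l=0.0):
    """Double-exponential subsynaptic calcium kernel (Eq. 13), zero before t*."""
    s = t - tau_l - t_star
    return np.where(s >= 0.0, alpha * (np.exp(-s / tau_d) - np.exp(-s / tau_r)), 0.0)

t = np.arange(0.0, 200.0, 0.1)                 # time axis, ms
offset = 0.75 * ca_kernel(t, 0.0).max()        # 75% of the single-pulse peak

def a_loc(spike_times):
    """Local calcium activity: summed kernels minus the offset (clipped at 0, assumed)."""
    total = sum(ca_kernel(t, ts) for ts in spike_times)
    return np.clip(total - offset, 0.0, None)
```

With this construction a doublet yields more than twice the suprathreshold activity of a single spike, which is the supralinearity the learning rule exploits.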
With this approach, repetitive activation of the same synapse resulted in a supralinear increase in the intensity of the local calcium activity, depending on the frequency of the PA afferent activity for that synapse.

Synaptic Weights

The synaptic weight in our model was a gain factor ranging between 0.001 and 1, where 1 signified the maximum weight of a synapse. As there is no information on perinatal synaptic weights for the CNs, we needed to make assumptions. The first assumption was that synapses have different initial synaptic weights, or seed weights. The second assumption was that all synapses had non-zero initial seed weights ranging up to medium strength. In our model, the distributions of synaptic weights across the primary afferent synapses of the CN model, i.e., the initial excitatory synaptic weights ($w_{init,exc}$), were normal distributions ranging between 0.001 and 0.5 across the 80 primary afferent inputs. We used 5 different pseudo-randomized initial weight distributions, referred to as “Seed weights 1–5.” Pseudo-randomization here implies that the distribution is randomized but that the same distribution is kept constant across different learning process runs. This had the advantage that the effects of specific intrinsic CN configurations (see “Variations of initial intrinsic CN parameters” below) could be tested for the same initial weight distributions (Figure 12). Synaptic inhibition was simulated as being provided by 80 independent interneurons that were each directly activated by one out of the 80 PA afferents available in our simulated system. Each PA synapse had the same weight on its targeted inhibitory interneuron.
The collective, or total, inhibitory synaptic weight ($w_{init,inh}$) was initially set to 0.125, evenly distributed across all the inhibitory synapses between the interneurons and the CN neuron they contacted, meaning that each PA synapse provided equally weighted inhibition to the CN neuron (as suggested by our biological data accounted for in the Results).

Sensory Inputs

Our aim was to simulate the learning process in the cuneate neurons driven by simulated sensory activation of the PAs. The main idea was to simulate a varied set of interactions between the skin and the physical world, and let the resulting spatiotemporal patterns of skin sensor activation determine the outcome of the learning process in the cuneate neurons. Rather than designing an arbitrary set of spatiotemporal patterns of skin sensor activations, we wanted to use a physical model of a fingertip to generate the spatiotemporal spike patterns of PA input that the cuneate neurons of our model (CNs) learned from. This is because, across the different kinds of interactions that the skin may experience, there may be relationships between the skin sensors that are not easily calculated across all conceivable conditions/interactions.

Bionic Fingertip

To generate the sensor patterns, we used a set of touch protocols based on different stimulus conditions (Table S1). The sensory fingertip comprises a 2 × 2 array of Micro Electro Mechanical System (MEMS) based bio-inspired tactile sensors (Oddo et al., 2011) to generate spatio-temporal sensory input patterns (Figure S2). Each individual sensor comprises a mesh of four transducing piezoresistors (totaling 16 sensory channels for each fingertip), arranged in a cross-shaped fashion able to generate precise response magnitudes for both normal and tangential forces applied across the surface of the sensor (Beccai et al., 2005; Oddo et al., 2007).
However, only four of the 16 sensor channels were sufficiently dynamically sensitive to the range of forces arising from the stimuli used in the present experiment. We created 80 PA sensory input channels from these four sensors' analog data (Figure 4) by multiplexing them with multiple neuron models and signal processing, as explained below. Each of these 80 PA sensor spike output patterns was considered a unique PA signal provided as synaptic input to the CNs, where sensor #1 was used to create PA inputs #1-20, sensor #2 to create PA inputs #21-40, and so on.

Figure 4. Training stimuli. Physical stimuli and corresponding sensor spiking responses in PA sensory inputs #1-80 for the training stimuli used in the learning process. (A) Three of the physical stimulus conditions consisted in the artificial fingertip first indenting and subsequently sliding across surfaces with different textures, as shown in (B). (C) The two indentation stimuli, where the artificial fingertip was dynamically indented against two different shapes, as shown in (D).

The normalized analog output of the tactile sensor was fed as virtual current input to fine-tuned Izhikevich neuron models (Izhikevich, 2003) in order to achieve the spatiotemporal spike output (Figure S2), as described previously (Oddo et al., 2016; Rongala et al., 2017). The Izhikevich model was chosen in order to reproduce the adaptation dynamics that are characteristic of mechanoreceptors (Johansson and Flanagan, 2009).
Per the Izhikevich model, the membrane potential $v$ and the adaptation variable $u$ were updated via the following nonlinear differential equations, discretized using Euler's method:

$\dot{v} = \frac{Av^2 + Bv + C - u + I_{input}}{C_m}$

$\dot{u} = a(bv - u) \quad (14)$

When the membrane potential reached the spike depolarization threshold of 30 mV, one spike was produced, followed by a reset:

$\text{if } v \geq 30 \text{ mV, then } \begin{cases} v \leftarrow c \\ u \leftarrow u + d \end{cases} \quad (15)$

The parameters $A$, $B$, $C$ and the spiking threshold are the standard ones of the Izhikevich artificial neuron model, whereas the parameters $a$, $b$, $c$, and $d$ were selected (Table S2) to allow a regular spiking behavior. From the existing 16 sensory channels, we considered 4 active channels (2, 5, 12, and 15) (Figure S2). We further differentiated the analog sensor data to mimic both Merkel Slowly Adapting (SA) type I mechanoreceptors, sensitive to the sustained level of mechanical interactions, and Meissner Fast Adapting (FA) type I mechanoreceptors, mostly sensitive to dynamic changes (Vallbo and Johansson, 1984; Abraira and Ginty, 2013). For each physical stimulus, the stimulation was repeated five times and the corresponding consecutive five analog signals were considered as the response of a unique sensor. As we had four physical sensors, this procedure thus generated 20 unique analog sensory signals per physical stimulus. In addition, each analog sensor signal was differentiated to obtain a total of 40 different analog sensory signals (Figure 4). Moreover, by implementing the Izhikevich neuron model (Izhikevich, 2003) with two different settings, we obtained a total of 80 different PA spike output patterns for each physical stimulus. For the learning process, we used the sensor outputs obtained using five different physical stimuli (Figure 4); in addition, the PA spike output responses to five other physical stimuli were provided as “non-training” stimuli (see Results text).

Learning Protocol

In order to induce learning in the CNs, we used a series of 1,500 stimulus presentations.
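The Euler-discretized update of Equations 14-15 can be sketched as below. We use the standard regular-spiking constants (A = 0.04, B = 5, C = 140, a = 0.02, b = 0.2, c = −65, d = 8) and fold $C_m$ into the constants; the paper's fine-tuned values are in its Table S2 and are not reproduced here.

```python
def izhikevich_spikes(i_input, t_ms=200.0, dt=0.1,
                      a=0.02, b=0.2, c=-65.0, d=8.0):
    """Euler-discretized Izhikevich model (Eqs. 14-15); returns spike times in ms.
    i_input plays the role of the sensor-derived virtual current."""
    A, B, C = 0.04, 5.0, 140.0      # standard quadratic coefficients
    v, u = c, b * c                 # start at the reset potential
    spikes = []
    for k in range(int(t_ms / dt)):
        v += dt * (A * v * v + B * v + C - u + i_input)
        u += dt * a * (b * v - u)
        if v >= 30.0:               # spike threshold and reset (Eq. 15)
            spikes.append(k * dt)
            v, u = c, u + d
    return spikes
```

With a constant suprathreshold input the model fires regularly; with zero input it settles to rest, which is the qualitative behavior exploited when the analog sensor output is fed in as a time-varying current.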
Each of the 1,500 stimuli hence corresponded to a spatiotemporal pattern of PA spike inputs (Figure 4), where all of the 80 PAs were activated during each stimulus presentation. The 1,500 presentations were generated from the 5 physical stimulus presentations; each stimulus presentation was hence repeated 300 times. Rather than feeding the system with identical spatiotemporal spike patterns for each of these 300 repetitions, we added a spike time jitter to all tactile sensor spike output responses: Gaussian noise with zero mean and standard deviation σ = 5 ms was added to the individual spike times of each PA. The motivation for this injection of noise, rather than repeating identical patterns of PA spike responses, was that we observed it to make the learning more robust. It also allowed an identification of the spread of learning outcomes (end weight “landscapes”) for each model setting (Figures 5B,C).

Figure 5. Synaptic weight changes during the learning process. (A) Evolution of the synaptic weights during a learning process of 1,500 stimulus presentations, with the evolutions of two highlighted synapses indicated by red and green thick lines. (B) Starting weights of the PA synapses (“seed weights”) for the sample learning process. Red bars indicate a set of 10 sensors/PA synapses that started with high weights but ended up as low-weight synapses (“losers”). (C) The end weights of the PA synapses after a completed learning process. Green bars indicate the ten synapses with the highest end weights (“winners”). (D) Box plots of the correlation indexes for the end weight “winners” and for the end weight “losers” for each of the five seed weight configurations tested. The differences in correlation index between the winners and losers were statistically significant in each of these five cases (p < 0.001, paired t-test). (E) Histogram of the distribution of synaptic weights before and after learning in the example CN (CN1).

Figure 6.
Total synaptic weights and response evolution during learning. Relationship between the synaptic weight evolution during the learning process (A) and the sum of the excitatory synaptic weights (B) and of the inhibitory synaptic weights (equal for all inhibitory synapses) (C). Note that the latter two parameters were controlled by the feedback mechanisms to ensure stability of the sum of the excitatory synaptic weights on the CN (by homeostatic plasticity) and of the total calcium activity (by the weight of the inhibitory synapses) (Figure 3B). (D) Example responses of the CN to the same stimulus condition (indentation 20 mm, see Figure 4) at three different time points during the learning process (indicated in A).

Excitatory Synaptic Weight Learning

During the learning process, the individual excitatory synaptic weights were gradually learnt, i.e., there was an alteration in the weight of each synapse. All the synaptic weights were updated after each stimulus presentation. The weight change for the individual excitatory synapse (i) is given by Equation 16:

$wc_{exc,i} = \int_{t_0}^{t_{max}} \left\{\left(A_{tot}^{Ca^{2+}}(t) - \left(Avg_{A_{tot}^{Ca^{2+}}} \cdot Syn_{EQ}\right)\right) \cdot A_{loc}^{Ca^{2+}}(t)\right\} \cdot K \cdot dt \quad (16)$

The driving force for the net synaptic weight change ($wc_{exc,i}$) is given by the integral of the correlation between the main compartment total calcium ($A_{tot}^{Ca^{2+}}$) and the local calcium activity for each synapse ($A_{loc}^{Ca^{2+}}$) from $t_0$ to $t_{max}$ (corresponding to the start and end of each stimulus presentation). The $A_{tot}^{Ca^{2+}}$ is offset to zero by the Learning threshold (Figure 3D), where $A_{tot}^{Ca^{2+}}$ above the Learning threshold is in the positive zone (potentiation) and below it is in the negative zone (depression) (similar to Graupner and Brunel, 2012). In other words, the main compartment calcium activity is a gate for the plastic reaction in the local synaptic compartment, deciding whether potentiation or depression should occur.
Therefore, the product of the counterbalanced $A_{tot}^{Ca^{2+}}$ and the individual synapse $A_{loc}^{Ca^{2+}}$ (Figures 3E-G) defines the net learning drive for each respective synapse as a function over time (Figures 3F-H). The value of the time integral of the net learning drive attained at $t_{max}$ decides to what extent each specific synapse should be potentiated (Figure 3F) or depressed (Figure 3H). The strength of the potentiation/depression for that individual synapse is further multiplied with a constant given by a sigmoid function (Figure 3I) of the current synaptic weight ($w_{exc,i}$), a constant which we called the synaptic weight compensation. This is motivated by the fact that the insertion of a synaptic receptor ion channel in the synaptic membrane can be regarded as a chemical reaction, where the addition of a new channel becomes less likely the higher the number of ion channels already inserted, and vice versa. The sigmoid function is defined by $S(t) = \frac{1}{1 + e^{-t}}$, with an arbitrarily chosen steepness gain of 0.005. The Learning threshold is given by the product of the average total calcium ($Avg_{A_{tot}^{Ca^{2+}}}$) and the synaptic equilibrium ($Syn_{EQ}$), where $Avg_{A_{tot}^{Ca^{2+}}}$ is the mean of $A_{tot}^{Ca^{2+}}$ across the last three consecutive stimulus presentations. The averaging was required to avoid instability in the learning and could in principle correspond to the dampening effect of the time required in biological systems for protein synthesis and/or ion channel insertion. The $Syn_{EQ}$ is a gain factor, used to attain homeostatic synaptic plasticity (Turrigiano and Nelson, 2004; Turrigiano, 2011) that keeps the sum of the excitatory synaptic weights under control. It is defined as a linear function of the total excitatory synaptic weight ($\sum w_{exc}$), with a dual slope having its zero point preset to $SetPoint_{\sum w_{exc}}$ (in our model, that point is set to 10).
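A sketch of this weight-update machinery (Equation 16 together with the dual-slope $Syn_{EQ}$ gain detailed in the next paragraph). The discretization of the integral, the exact argument passed to the weight-compensation sigmoid, and the calcium traces in the test are illustrative assumptions; only the threshold structure and the slope values (0.12 above, 0.04 below a set point of 10) follow the text.

```python
import numpy as np

def syn_eq(total_w_exc, set_point=10.0):
    """Dual-slope Syn_EQ gain: 1 at the set point, slope 0.12 above, 0.04 below."""
    slope = 0.12 if total_w_exc > set_point else 0.04
    return 1.0 + slope * (total_w_exc - set_point)

def weight_change(a_tot, a_loc, avg_a_tot, total_w_exc, w_exc_i,
                  dt=1.0, K=1.0, gain=0.005):
    """Discretized Eq. 16. a_tot, a_loc: sampled calcium traces over one stimulus
    presentation. The weight-compensation sigmoid is applied so that larger
    weights receive smaller increments; its sign convention is our assumption."""
    threshold = avg_a_tot * syn_eq(total_w_exc)           # learning threshold
    drive = (np.asarray(a_tot) - threshold) * np.asarray(a_loc)
    compensation = 1.0 / (1.0 + np.exp(gain * w_exc_i))   # decreasing in w_exc_i
    return float(K * compensation * np.sum(drive) * dt)
```

Traces with main-compartment calcium above threshold while the local synapse is active yield a positive change (potentiation); calcium below threshold yields a negative change (depression).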
The intercepts for this dual-slope linear function are defined such that when the total excitatory synaptic weight is 10, $Syn_{EQ}$ is 1. When $\sum w_{exc}$ is >10, the slope is 0.12, whereas for $\sum w_{exc}$ < 10, the slope is 0.04. The difference in slopes was necessary to prevent neurons from becoming stuck in a depressed state, and to prevent unstable, rapid potentiation once above the set point. In principle, the slope difference can be regarded as corresponding to the fact that the insertion and the removal of synaptic conductances in biology are handled by partly separate chemical reactions (Jörntell and Hansel, 2006).

Inhibitory Synaptic Weight Learning

In model CNs without adaptation of synaptic inhibition, we observed that synaptic weight changes in the PA synapses tended to make the CNs unstable, which degraded learning stability (Figure 10). Instability arose gradually during the learning process and typically resulted in the CN having either excessive calcium activity or no calcium activity at all. In such cases, the CN would often swing between these two extremes several times during the learning process. We found that it was important that the model system prevented the CNs from entering such unstable states, and a main effective regulator was found to be gradual adaptation of the total inhibitory synaptic weight around a set point of the total calcium activity of the neuron. Therefore, during the learning process, after each stimulus presentation, the total inhibitory weight was also adapted toward a set point, which was defined by the activity of the calcium spike rate, closely related to $A_{tot}^{Ca^{2+}}$. A high calcium activity during the stimulus presentations was countered by an increase in the total inhibitory synaptic weight, whereas a low calcium activity was countered by a decrease in this weight.
A plastic regulation of the weight of inhibitory synapses against postsynaptic calcium activity or postsynaptic calcium-dependent bursting activity has previously been found in various neurons across the brain (Kawaguchi et al., 2011; Hulme and Connelly, 2014; Lourenço et al., 2014). In our model, we found this feature to prevent overexcitation in the simulated CNs as the learning of the synaptic weights progressed. A function with a dual linear slope was used to define the rate of inhibitory weight change with respect to the calcium spike rate. The inhibitory weight change is summed with the existing inhibitory synaptic weights ($w_{inh}$). The calcium activity set point ($SetPoint_{Ca^{2+}}$) (Table 1) is the critical factor that defines the calcium spike rate around which the inhibitory weight adapts to keep the system in balance. These slope functions are zeroed at the preset point ($SetPoint_{Ca^{2+}}$), with the range of weight change set between −0.01 and 0.01 for calcium spike rates between 0 and 200 Hz, respectively. In order to dampen the rate of adaptation, we used a moving average of the calcium spike rate across the last three stimulus presentations as the input to the adaptation. Adaptation of the total inhibitory synaptic weights was implemented because the change in the weights of the excitatory PA synapses on the CN neuron could lead to large changes in the levels of the LVA (voltage-activated calcium channel) activity, which in turn led to unstable learning (as in Figure 10). Adaptation of the inhibitory synaptic weights balanced the LVA activity and resulted in a maintained stable learning process. All adaptation of the weight of the synaptic inhibition was evenly distributed across all the inhibitory synapses.
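The inhibitory adaptation just described can be sketched as a dual-slope function of the smoothed calcium spike rate: the change is bounded to [−0.01, +0.01] over 0-200 Hz and is zero at the set point, per the text. The set-point value of 100 Hz used below is an assumed stand-in for the Table 1 entry.

```python
def inh_weight_delta(ca_rate_hz, set_point_hz=100.0):
    """Dual-slope inhibitory weight change: -0.01 at 0 Hz, 0 at the set point,
    +0.01 at 200 Hz. set_point_hz = 100 is an assumed value (cf. Table 1)."""
    if ca_rate_hz <= set_point_hz:
        return -0.01 * (set_point_hz - ca_rate_hz) / set_point_hz
    return 0.01 * (ca_rate_hz - set_point_hz) / (200.0 - set_point_hz)

def smoothed_rate(rate_history):
    """Moving average of the calcium spike rate over the last three presentations,
    used to dampen the rate of adaptation."""
    recent = rate_history[-3:]
    return sum(recent) / len(recent)
```

After each stimulus presentation, `inh_weight_delta(smoothed_rate(history))` would be added to the total inhibitory weight and spread evenly over the inhibitory synapses.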
Calcium-dependent potentiation of inhibitory synapses has previously been found to be confined to the inhibitory synapses that were active during the calcium activity (Kawaguchi et al., 2011; Hulme and Connelly, 2014; Lourenço et al., 2014); indeed, in our simulated set of inhibitory synapses, all of them were active under all stimulus presentations, as described above.

Variations of Initial Intrinsic CN Parameters

In addition to testing the effects of the excitatory synaptic seed weight distributions on the outcome of the learning process (see below), we also wanted to test the effects of intrinsic CN parameters. Moreover, the values of these parameters to some extent relied on assumptions, and we wanted to know the sensitivity of the model to these assumptions. Therefore, in separate simulation runs, some of the intrinsic CN parameters were varied as described in Table 1. All of the initial parameters were kept constant during the learning process.

Model Data Analysis

Correlation Index

The correlation index measure (Figure 5D) was used to quantify the degree of correlation between the spiking activities of two or more PAs while being provided with sensory inputs. We compared PAs whose synapses after learning ended up as High End Weight (HEW) synapses with high seed weight synapses that ended up as Low End Weight (LEW) synapses. To compute this measure, we considered 10 HEW synapses (green bars in Figure 5C) and 10 LEW synapses (red bars in Figure 5B). We evaluated the spike trains for these 20 PAs across all five stimuli (Figure 4) by means of the Victor-Purpura distance (VPd) (Victor and Purpura, 1996) spike train metric. VPd gives a measure of the similarity between two spike trains by measuring the minimum cost necessary to transform one spike pattern into the other. The distance calculation is based on two rules: (i) adding/deleting a spike (cost = 1) and (ii) shifting a spike by an interval Δt (cost = q·Δt; in our simulation, q = 10/s).
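The Victor-Purpura distance with these two cost rules can be computed with the standard dynamic program; this edit-distance formulation is the usual implementation of the metric, not code taken from the paper.

```python
def vp_distance(a, b, q=10.0):
    """Victor-Purpura spike-train distance: insert/delete cost 1, shift cost q*|dt|.
    a, b: sorted spike times in seconds; q in 1/s."""
    n, m = len(a), len(b)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = float(i)            # deleting i spikes
    for j in range(1, m + 1):
        D[0][j] = float(j)            # inserting j spikes
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(D[i - 1][j] + 1.0,                              # delete
                          D[i][j - 1] + 1.0,                              # insert
                          D[i - 1][j - 1] + q * abs(a[i - 1] - b[j - 1]))  # shift
    return D[n][m]
```

With q = 10/s, shifting a spike is cheaper than delete-plus-insert only when the two spikes are less than 200 ms apart, which is what makes the metric sensitive to spike timing at that scale.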
By making comparisons between all the primary afferent spike trains evoked by different stimuli, we obtained a matrix of comparisons between all of the stimuli used (Rongala et al., 2017). For each neuron configuration, these matrices were used to compute the correlation index value for sensors with HEW synapses (Figure 5C, green), given by the average VPd across all 10 HEW synapses for all five stimuli. The same procedure was carried out to calculate the correlation index value for sensors with LEW synapses (Figure 5C, red). The difference between the correlation index values of LEW and HEW sensors was the correlation index difference (Figure 13).

Correlations Between Synaptic Responses

The correlation measure (Figures 7, 8) was used to compute how correlated the synaptic responses were across different learning conditions. To compute this measure, we considered all 80 synapses. We convolved the spike trains of all 80 PAs using the same EPSP time constants as in CN in vivo recordings (Bengtsson et al., 2013). Further, we multiplied each of these convolved signals with the respective synaptic weight of the corresponding PA. The sum of all 80 weighted convolved signals was the simulated intracellular signal displayed (Figures 7, 8). The correlation between two simulated intracellular signals was computed with the built-in MATLAB® function xcorr (with zero lag).

Figure 7. Learning resulted in decorrelated responses between neurons to the same stimulus. (A) Triangle matrix for the correlations between synaptic responses evoked by the same training stimulus before (CN1[SW]-CN5[SW]) and after learning for all five cuneate neurons (CN1[EW]-CN5[EW]). The difference in correlation between the two groups was statistically significant (p = 0.001, Wilcoxon signed rank test). (B) Averaged synaptic responses to the stimulus ("indentation 20 mm") repeated 100 times for three different CNs (CN2, CN3, CN5) before and after training.
(C,D) Similar displays as in (A,B) but for a novel stimulus that was not part of the training set ("indentation 10 mm;" Oddo et al., 2017). The difference in correlation before and after learning was statistically significant (p < 0.001, Wilcoxon signed rank test).

Figure 8. Learning resulted in decorrelation of the responses to different stimuli in the individual CN. (A) Triangle matrix for the correlations between synaptic responses evoked by the training stimuli before and after learning in the same CN. Otherwise the display is similar to Figure 7A. The difference in correlation before and after learning was statistically significant (p < 0.001, Wilcoxon signed rank test). (B) Averaged synaptic responses of the CN (CN1) to the full set of training stimuli. (C,D) Similar display as in (A,B) but for non-training stimuli previously reported for shapes (Oddo et al., 2017) and for textures (Rongala et al., 2017). The difference in correlation between the two groups was statistically significant (p = 0.028, Wilcoxon signed rank test).

Multi-Dimensional Scaling

To further illustrate the specific distribution of synaptic weights, we used the classical multi-dimensional scaling (MDS) method (cmdscale, a built-in MATLAB® function). The distribution of weights (w[exc]) across the 80 PA synapses is here denoted the synaptic weight landscape. Using MDS, the differences between the synaptic landscapes of two or more CNs were visualized as distances in two-dimensional displays (Figures 9A,C, 10A). The input distance vector to the multi-dimensional scaling was calculated as the simple Euclidean distance between synaptic weight landscapes.

Figure 9. Evolution of synaptic weight landscapes and sparseness during the learning process. (A) Evolution of the synaptic weight landscapes during the learning process visualized by MDS for the five seed weights tested. (B) Corresponding evolution of the sparseness of the synaptic weight distributions.
(C) Dependence of the evolution on gmax for seed weight #1. The intermediate value was the default configuration for this parameter. (D) Corresponding plot for the sparseness.

Figure 10. Learning without supralinear calcium responses. (A) Evolution of the synaptic weight landscape for a CN without supralinear calcium activity components. (B) Corresponding evolution of the sparseness of the synaptic weight distribution (cf. Figure 9B).

Computation of the Sparseness of the Synaptic Weight Distribution

We also measured the degree of dispersion, or sparseness, of the synaptic weights among the 80 available PA synapses (Figures 9B,D, 10B). For this purpose, we quantified synaptic weight sparseness as the ratio between the l[2] and l[1] norms of the weight vector (Yin et al., 2014). This measure reports its minimal value when all synapses have exactly the same weight and a value of one when one synapse has maximum weight and all other synapses have zero weight. It was used to track the evolution of the synaptic weight dispersion during consecutive stimulus presentations and was therefore also an indicator of the stability of the learning.

Statistical Tests

After the learning process, the correlation index of the High End Weight (HEW) PA synapses (N = 10) was compared to that of the Low End Weight (LEW) PA synapses (N = 10) for each CN configuration using a paired t-test (these distributions were on visual inspection comparable to normal distributions, see Figure 5D). The changes in correlation between the synaptic responses, of different CNs to the same stimulus and of the same CN to different stimuli, induced by training were statistically quantified using the paired, two-sided Wilcoxon signed rank test.
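The simulated intracellular signal and its zero-lag correlation, described under Correlations Between Synaptic Responses above, can be sketched as follows. This is a NumPy illustration rather than the study's MATLAB code, and the EPSP time constant `tau_ms` is a placeholder; the study's values came from in vivo recordings and are not given here:

```python
import numpy as np

def simulated_intracellular_signal(spike_trains, weights, tau_ms=5.0, dt_ms=1.0):
    """Weighted sum of spike trains convolved with an exponential EPSP kernel.

    spike_trains: (n_synapses, n_bins) binary array of PA spikes.
    weights: per-synapse excitatory weights (w[exc]).
    """
    spike_trains = np.asarray(spike_trains, dtype=float)
    t = np.arange(0.0, 10.0 * tau_ms, dt_ms)
    kernel = np.exp(-t / tau_ms)              # exponential EPSP kernel
    n_bins = spike_trains.shape[1]
    signal = np.zeros(n_bins)
    for train, w in zip(spike_trains, weights):
        signal += w * np.convolve(train, kernel)[:n_bins]
    return signal

def zero_lag_correlation(x, y):
    """Normalized cross-correlation at zero lag (cf. MATLAB xcorr)."""
    return float(np.dot(x, y) / np.sqrt(np.dot(x, x) * np.dot(y, y)))
```

Comparing two such signals (e.g., the same CN before and after learning, or two CNs on the same stimulus) then reduces to a single correlation value per pair, which fills the triangle matrices of Figures 7, 8.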
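The MDS embedding and the sparseness measure described above can also be sketched in a few lines. Again a NumPy illustration, not the study's code; `classical_mds` corresponds to classical (Torgerson) scaling as implemented by MATLAB's cmdscale:

```python
import numpy as np

def classical_mds(D, n_components=2):
    """Classical (Torgerson) MDS, cf. MATLAB's cmdscale.

    D: (n, n) matrix of Euclidean distances between synaptic weight
    landscapes. Returns an (n, n_components) embedding whose pairwise
    distances approximate D.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:n_components]
    scale = np.sqrt(np.clip(eigvals[order], 0.0, None))
    return eigvecs[:, order] * scale

def weight_sparseness(weights):
    """Sparseness as the l2/l1 norm ratio (Yin et al., 2014).

    Equals 1/sqrt(n) when all n weights are equal (minimal sparseness)
    and 1.0 when a single weight carries everything.
    """
    l1 = sum(abs(w) for w in weights)
    l2 = sum(w * w for w in weights) ** 0.5
    return l2 / l1
```

For the 80-synapse weight vectors used here, the sparseness thus ranges from 1/√80 ≈ 0.11 (uniform weights) up to 1 (a single dominant synapse).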
Inhibitory Synaptic Inputs and Intrinsic Responses of Cuneate Neurons in vivo

To support the construction of our model, we started out by extending a previous characterization of the projection neurons of the cuneate nucleus in the decerebrate cat (Bengtsson et al., 2013) to also include their inhibitory synaptic inputs and intrinsic membrane responses, using a set of in vivo whole cell intracellular recordings from projection neurons (N = 15) and interneurons (N = 8) (Figure 1A). A touch to the receptive field of a cuneate projection neuron activated both excitatory and inhibitory synaptic inputs (Figures 1A,B) and the interneurons, which are responsible for the synaptic inhibition evoked by skin stimulation in this preparation (Bengtsson et al., 2013). Spontaneous unitary inhibitory postsynaptic potentials (IPSPs), presumably driven by spontaneous interneuron spiking, were very small (Figures 1B,C) (the average median IPSP amplitude was −0.55 ± 0.08 mV, recorded from N = 15 cuneate projection neurons). In fact, this amplitude is an overestimate, since many of the apparent IPSPs were too small to be detected from baseline noise using our template-based identification method. In contrast, maximal evoked compound IPSPs (evoked from outside the receptive field of the excitatory input, see Bengtsson et al., 2013) were large (−14.3 ± 2.1 mV) and recruited in a continuous, graded fashion (Figure 1D). Such maximal compound IPSPs had a response onset latency of 6.1 ± 0.7 ms, which is about 1.0 ms longer than the latency of primary afferent EPSPs in the same type of preparation (Bengtsson et al., 2013). This observation strongly suggests that the compound IPSPs were due to primary afferent activation of the inhibitory interneurons. Together, these findings indicated that a high number of interneurons (>30, but possibly hundreds) innervated each projection neuron with relatively uniform, low-weight inhibitory synapses.
A strong involvement of intrinsic voltage-activated conductances in shaping the membrane potential trajectory over time was suggested by the subthreshold spontaneous activity of the cuneate projection neurons recorded near resting potential (Figure 1E). Note the doublets, triplets or quadruplets of spike firing, which were observed in all projection neurons but in none of the interneurons. Occasional single spike firing events dramatically altered the spike afterhyperpolarization phase (red trace in Figure 1E), which further suggested the presence of intrinsic membrane potential dynamics. Indeed, using current clamp to generate a temporary hyperpolarization of the membrane potential resulted in strong rebound excitation (Figure 1E), which demonstrates the contribution of such active intrinsic conductances. Similar intrinsic responses were recorded in all cuneate projection neurons when hyperpolarized by at least −200 pA for at least 200 ms (N = 15). Since such intrinsic responses can be expected to be an important part of defining the time course of calcium influx into the neuron, which in turn will be important for its learning process during a time-varying synaptic input, an important component of our full model below was the "intermediate dynamics" model of the cuneate projection neurons. We observed relatively wide differences in the intensity and the speed of the intrinsic responses (Figure 1E) between the cuneate projection neurons.
Therefore, we designed the intermediate dynamics model of these membrane potential dynamics against one of our recorded cuneate projection neurons, with the primary aim of approximating its overall dynamics across a range of conditions rather than focusing on capturing its behavior under any single condition exactly (for the full model, we later also show model experiments in which parameters influencing the intermediate dynamics are adjusted to modify the intrinsic responses of the cuneate projection neurons, showing that model behavior was robust across a range of intermediate dynamics). Our intermediate dynamics model approximated the observed membrane potential dynamics well across a range of hyperpolarizations (Figure 1F). The model also reproduced at least the overall dynamics of the responses evoked by touch (Figure ...).

In order to verify that the recorded IPSPs (Figure 1) could be generated by the local interneurons, we also made intracellular recordings from these neurons. Recorded neurons were identified as projection neurons or interneurons based on their characteristic spike shapes (Figure 2A). The spontaneous spike firing frequencies were 11 ± 4.8 Hz (mean ± sd) vs. 8.8 ± 2.2 Hz for projection neurons (N = 15) and interneurons (N = 8), respectively. Interneurons responded well to touch to the skin (Figure 2B) and to electrical skin stimulation (Figure 2C). The EPSPs evoked by electrical skin stimulation in the interneurons were considered monosynaptic based on the response latency and the low variability of the EPSPs (Bengtsson et al., 2013; Figure 2C). Based on a qualitative analysis of the extent of their receptive fields on the skin (Figure 2B), interneurons were found to have much wider excitatory receptive fields than the cuneate projection neurons (which were previously quantitatively defined in Bengtsson et al., 2013).
Combined with the rapid responses of the interneurons to PA synaptic input (Figure 2C), these observations made it plausible that their activation was responsible for the compound IPSPs recorded in the cuneate projection neurons on stimulation within or in the vicinity of their receptive fields (Figure 1D). The apparent absence of intrinsic regenerative responses to skin stimulation in the interneurons (Figure 2B) suggested that the function of the cuneate interneurons is a linear transfer of PA input, inverted to IPSPs, to the projection neurons. These findings motivated our simplification of modeling the inhibitory synaptic inputs to the cuneate projection neurons as a set of unitary weight inhibitory synaptic inputs directly generated by the spikes of the individual PAs (Figures 3A,B).

Functional Structure of the CN Model

In our integrated cuneate model, the biological sensorized skin was replaced with a bionic fingertip (Oddo et al., 2016; Figure 3A and Figure S2). The model cuneate projection neuron (CN) was composed of the intermediate dynamics model described above (Figure 1F), which implemented input- and time-varying intrinsic responsiveness, and the excitatory and inhibitory synaptic inputs from the sensors that drove the activation of the intermediate dynamics of the CN ("Total calcium activity" in Figure 3B). The factor g[max] allowed an adjustment of the ratio between the synaptic conductances and the background (leak) conductance, which is used later in the paper to simulate CNs with different input resistances or sizes. The rest of the diagram in Figure 3B indicates adaptation mechanisms for the excitatory and inhibitory synaptic weights, which implemented a Hebbian type of learning combined with synaptic weight scaling, as described below.
Depending on the correlation between the calcium activity in the main compartment and in the individual synaptic spaces (Figures 3C,D), the weight of each excitatory synapse was updated after each stimulus presentation (Figures 3E–H). To illustrate this process, Figure 3E shows that the local synaptic calcium activity crossed the learning threshold twice in this example. For calculating the learning signal, however, the calcium activity in the main compartment (Figure 3D) was also taken into account. For the first episode of suprathreshold calcium activity in the synapse (Figure 3E), the calcium activity in the main compartment was in the "positive zone," i.e., sufficiently high to permit potentiation of the synaptic weight. Therefore, this first episode made the net learning drive of this synapse positive (Figure 3F). The second episode of suprathreshold calcium elevation in the synapse (Figure 3E) instead coincided with calcium activity in the main compartment being partly in the "negative zone" (Figure 3D). Therefore, the net learning drive for this episode was partly negative (Figure 3F). The net learning effect of each stimulus presentation on each synapse was the integral of all of these episodes, which in this case ended up positive (Figure 3F). For another synapse, the suprathreshold calcium activity (Figure 3G) mostly coincided with the main compartment calcium activity being in the negative zone (Figure 3D), which resulted in a negative integral of the net learning drive (Figure 3H). For each update of the synaptic weights, the weight of the synapse itself was an additional factor used to scale the synaptic weight change (Figure 3I; synaptic weight compensation in Figure 3B) (Bi and Poo, 1998). Altogether, this architecture resulted in a Hebbian form of plasticity (Hebb, 1949).
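The per-synapse learning drive described above can be sketched as follows. This is an illustrative linear version under assumed functional forms; the thresholds, the learning rate, and the weight-dependent scaling are placeholders, not values or expressions taken from the paper:

```python
import numpy as np

def net_learning_drive(syn_ca, main_ca, syn_threshold, learning_threshold, dt=1.0):
    """Integrate the learning drive of one synapse over a stimulus.

    Time bins where the local synaptic calcium exceeds its threshold
    contribute potentiation when the main-compartment calcium is above
    the learning threshold ("positive zone") and depression when below
    it ("negative zone"). Inactive bins contribute nothing.
    """
    active = syn_ca > syn_threshold
    drive = np.where(active, main_ca - learning_threshold, 0.0)
    return float(np.sum(drive) * dt)

def update_weight(w, drive, rate=1e-3, w_max=1.0):
    """Weight update scaled by the current weight (cf. Bi and Poo, 1998):
    potentiation saturates as w approaches w_max, depression scales
    with w, so near-zero weights are protected from runaway depression."""
    if drive >= 0:
        return w + rate * drive * (w_max - w)
    return w + rate * drive * w
```

In the model, this update is applied once per stimulus presentation to each of the 80 excitatory PA synapses.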
The end result was a form of plasticity with properties related to spike-timing-dependent plasticity (STDP) (Markram et al., 1997; Graupner and Brunel, 2012) but less sharply dependent on the exact spike timing (Helias et al., 2008). The motivation for taking this approach to formulate the driving forces of the learning rule was to let the dynamics of the cellular learning signal, and its shaping by the integration of peripheral synaptic input, have their origin in biological data recorded from cuneate projection neurons in vivo (Figures 1E,F) rather than in assumptions made for other learning systems. We believe that, overall, the net learning effects are comparable to those obtained under some settings of those previous models (Graupner and Brunel, 2012) (Discussion) but more specifically constrained to the effects that might arise in the cuneate projection neurons. The integrated model also used two separate feedback regulatory systems. First, the total inhibitory synaptic weight was updated based on the total calcium activity, according to a fixed set point (Figure 3B). Adaptation of the total inhibitory synaptic weights was implemented because we found that the change in the weights of the excitatory PA synapses on the CN neuron could lead to large changes in the levels of activity in the voltage-activated calcium channels, which in turn led to unstable learning. A plastic regulation of the weight of inhibitory synapses against postsynaptic calcium activity or postsynaptic calcium-dependent bursting activity has previously been found in various neurons across the brain (Kawaguchi et al., 2011; Hulme and Connelly, 2014; Lourenço et al., 2014). Secondly, SetPoint[∑w[exc]] is a control signal used to implement a homeostatic synaptic plasticity mechanism (Turrigiano and Nelson, 2004; Turrigiano, 2011) through which the sum of the excitatory synaptic weights is kept under control.
The total excitatory synaptic weight was driven toward this set point by adjusting the learning threshold for synaptic potentiation vs. depression (Figure 3B; positive vs. negative zone in Figure 3D). Hence, the propensity for LTP or LTD was affected by the current total synaptic weight, and the effect of the learning threshold was therefore a synaptic weight scaling. The set point of the learning threshold (Figure 3B) is the fixed sum of the excitatory synaptic weights toward which the adaptation of the learning threshold strives to bring the system. Further details and motivations are given in Methods.

Synaptic Weight Changes During the Learning Process

The sensorized bionic fingertip was used to provide the CN with training data. The bionic fingertip provided the important feature of a system of PA sensors with a consistent relationship between the activation of the different sensors across different conditions or tactile experiences. The CN learning process consisted of 1,500 presentations of five training stimuli, each activating all of the 80 PA sensory inputs but in different spatiotemporal patterns (Figure 4), presented in a pseudorandom order. The training stimuli consisted of three touch-and-slide conditions and two dynamic indentation stimuli (Figure 4). During the learning process, the gradual transition from initial, or "seed," synaptic weights to the final distribution of synaptic weights ("end weights") was monitored (Figure 5A). Interestingly, the weight of a synapse could, for example, evolve from a very low seed weight to a high end weight (green thick line), or from a high seed weight to a near-zero end weight (red thick line), and any intermediate or mixed development. Such examples showed that the fate of an individual synaptic weight during the learning process did not depend on its starting point but suggested some underlying functional principle.
The learning process resulted in a transformation of the initial random distribution of low-to-moderate weight synapses (seed weights, SW) (Figure 5B) into a highly skewed distribution with a few high weight synapses (end weights, EW) (Figure 5C), similar to cuneate neurons in adult animals (Bengtsson et al., 2013). We simulated CNs with five different pseudo-randomized SWs, below referred to as CN1-5, based on the assumption that initial random synaptic weight distributions likely differ between biological cuneate neurons in the same animal. For each CN, there was at the end of the learning process a unique end weight distribution (EW), dominated by different specific synapses with high weights (Figure S3). The seed weight losers (red bars in Figure 5B) differed from the end weight winners (green bars in Figure 5C and Figure S3) in that they conveyed sensor spiking activity that was less correlated across the training stimuli (Figure 5D), as measured by the Victor-Purpura distance metric for comparisons between spike trains. During the learning process, the overall adaptive regulation of the total excitatory and inhibitory synaptic weights in the model (Figure 3B) helped stabilize the activity of the CN (Figure 6), which was important for a stable learning outcome. In addition, in parallel with the acquisition of highly skewed synaptic weight distributions, the cuneate neurons also gradually acquired a firing that was dominated by brief, episodic bursts of spike output, similar to the firing mode observed for these neurons in adulthood in vivo (Bengtsson et al., 2013; Figures 1, 6D, Figure S1).

Learning Resulted in Decorrelations of Synaptic Inputs Between Neurons and Stimuli

The next question was what functional effects the learning process resulted in.
We found that between the individual CNs, the learning process resulted in a decorrelation of the initially highly correlated temporal patterns of the synaptic responses to each given training stimulus (Figures 7A,B). Hence, a consequence of the learning process was an increased differentiation of the CN responses to a given input, which was statistically significant at p ≤ 0.001 (Wilcoxon signed rank test). In addition to testing the stimuli that the CNs were trained on, we also used a set of previously non-encountered stimuli, consisting of three types of texture sliding and two touch indentations, here referred to as non-training stimuli. Similar to the case with the training stimuli, the learning process resulted in a decorrelation of the synaptic responses of the CNs also to the non-training stimuli (Figures 7C,D; p < 0.001, Wilcoxon signed rank test). Moreover, in the individual CN, whereas the differences in the spatiotemporal patterns of PA input between stimuli to some extent generated specific synaptic responses already in the "naïve" state, the responses became markedly differentiated with learning (Figures 8A,B). For each of the five CNs, CN1-5, the learning process resulted in a statistically significant decrease in the correlation between the synaptic responses to the five training stimuli (p < 0.001, Wilcoxon signed rank test). Such synaptic response differentiation is a basis for tactile input feature segregation (Jörntell et al., 2014), and the mechanism described here can hence be expected to improve the resolution at which the neuronal segregation of spatiotemporal PA sensory input patterns occurs. The learning process also resulted in the individual CN increasing the segregation of its responses to different non-training stimuli (Figures 8C,D).
For the illustrated CN, the learning process resulted in a statistically significant decrease in the correlation between the synaptic responses to the five training stimuli at p = 0.027, whereas the p values for the other four CNs were below 0.001 (Wilcoxon signed rank test). The results with non-training stimuli hence indicated that the acquired learning generalized to previously non-encountered experiences, which is difficult to achieve in traditional pattern recognition systems (Spanne and Jorntell, 2015).

Evolution of Synaptic Weight Landscapes and Sparseness During the Learning Process

The gradual evolution of the synaptic weight distributions (or their "landscapes") during the learning process was further studied by plotting the weights of all the 80 PA synapses of each CN in an 80-dimensional space, embedded in 2D plots using multi-dimensional scaling (MDS). First, the analysis showed that the end weight landscape, which is the basis for the segregation of responses between neurons and inputs shown in Figures 7, 8, depended on the seed weight landscape of the CN (Figure 9A). Secondly, it showed that the evolution of the synaptic weight landscape initially accelerated but later decelerated to hover around a relatively stable end point (Figure 9A), suggesting that the learning was self-stabilizing. Self-stabilization is further quantified in Figure 9B, which shows the temporal evolution of the sparseness of the synaptic weight distribution. This self-stabilization was lost on removal of the supralinearity in the calcium dynamic responses and in the local synaptic calcium activity (Figure 10). In this case, the main compartment calcium was the sum of the local calcium activity in all the synapses, i.e., without intrinsic calcium dynamics in the main compartment and without supralinear local calcium activity subsynaptically.
Under such conditions, the learning process became chaotic, without any stable end point and without any development of sparseness of the synaptic weight distribution (cf. Figure 9A). As we will describe in greater detail further below, we found that not only the seed weight landscape but also the intrinsic CN properties affected the outcome of the learning process. Of particular interest in relation to the effects of the calcium dynamic response (Figure 10) was the coupling between the synaptic input and the intermediate dynamics model of the CN, since the intermediate dynamics model governed the time course of the calcium signal and thereby the learning process. This coupling was strongly affected by the maximum synaptic conductance constant (g[max]), which was the ratio between the maximum synaptic conductance and the leak conductance of the neuron. A decrease in this ratio, or coupling, substantially reduced the learning rate, whereas an increase instead caused the learning to become rapid and unstable, with dramatic shifts, unlike the more gradual process when the learning system was in balance (Figure 9C). Interestingly, even in the setting with high g[max] the system found a relatively stable end point in the synaptic weight landscape, but at a different location compared to the basic configuration (Figures 9C,D). However, a high learning speed, or learning rate, may not be beneficial if the aim is to allow the result of the learning process to take a variety of experiences into account, which in turn could limit the range of conditions and novel experiences that the learnt network structure would be applicable to. To explore this, we tested the effects of "monotypic" sensory training.

Importance of Varied Sensory Experiences

Monotypic sensory training here refers to a situation where the same sensory experience, or stimulus, is repeated a high number of times in succession.
That is, instead of using the regular learning protocol with 1,500 presentations of the set of five stimuli mixed in a pseudo-random order, here we ran the learning process with one single type of stimulus throughout. Similar to diversified sensory training (Figure 9A), monotypic sensory training also resulted in gradual changes in the synaptic weight landscape (Figure 11A). However, the type of stimulus applied had a decisive influence on the direction of development of the synaptic weight landscape (Figures 11A–C), and a switch from one monotypic stimulus to another could sharply turn the evolution of the synaptic weight landscape in a new direction (Figures 11B–D). Also, with monotypic stimulus presentations, stable end points of synaptic weight landscapes were not clearly identified, in contrast to the relatively stable end points obtained when a mixture of stimuli was used in the learning process (Figure 9A). Indeed, instead of reaching a stable end point well before 1,500 trials (as in Figure 9A), monotypic sensory training could result in rapid changes in the synaptic weight distribution at this time point (Figure 11A), which on inspection was found to be due to the calcium activity of the CN not being held within its set point, so that it started to generate chaotic behavior (as in Figure 10). Hence, the order of presentation of the sensory experiences influenced the outcome of the learning process (Figure 11D), which indicates that the presence of a mixed, diversified set of sensory experiences over time will influence the result of the learning process. In addition, these findings also suggest that a high learning speed, as in Figures 9C,D, would make the CN more sensitive to individual sensory experiences and their order of presentation than in the basic configuration (Figures 9A,B).

Figure 11. The effects of monotypic sensory experiences.
(A) The evolution of the synaptic weight distribution using solely stimulus #1 (latex, Figure 4) as the training stimulus for 3,000 repetitions. (B) Similar experiment, but after 1,500 repetitions, stimulus #1 was replaced with stimulus #5 (indentation, 20 mm). (C) Similar experiment as in (B), but now the 1,500 repetitions of stimulus #5 preceded the 1,500 repetitions of stimulus #1 (reverse order). (D) Synaptic weight landscape evolutions of the three experiments shown in an MDS plot.

The Intrinsic CN Properties Profoundly Influenced Learning Outcome

Whereas the above analysis (Figures 5–8, Figure S3) indicated that the seed synaptic weight distribution was an important factor influencing learning outcome, i.e., which set of correlated sensors the CNs picked to selectively strengthen, the intrinsic CN parameters could also conceivably act as a seed factor. Morphological differences between cuneate neurons (Fyffe et al., 1986) are one example of a source of variability in intrinsic neuronal behavior; the level of expression of different ionic conductances is another. To explore this issue, we separately varied the time constant of the calcium signal in the local synaptic space, the coupling between the synaptic input and the intrinsic dynamics (already shown in Figures 9C,D), and the set point of allowed calcium activity in the intermediate dynamics model (which was controlled by adapting the gain of the inhibitory synaptic input, see Figure 3B). All intrinsic CN parameters, for all values tested (Table 1), influenced the outcome of the learning in terms of the end weight landscape (Figure 12A). Given that these intrinsic parameters influenced the behavior of the intermediate dynamics model and the calcium signal, either on a per stimulus presentation basis or over longer time, this was in itself perhaps not a surprising finding.
However, the impact of these intrinsic parameters was profound and of almost comparable magnitude to the impact of the synaptic seed weight landscapes. The only apparent exceptions were the two lowest values of g[max] (inset) and the lowest τ for the calcium activity in the local synaptic space (triangle) (Figure 12A). However, the effect of these latter settings was that they reduced the learning speed to a minimum and consequently resulted in little change from the seed weights (black dot in Figure 12A). To estimate the relative stability of the learning, each configuration was run through the learning process five times, with each run differing because minor noise was always injected into the sensor spike trains. The highest g[max] value, with the fastest learning, resulted in the largest variability between learning processes (star in Figure 12A). All other parameter values were associated with lower variability in the learning outcome.

Figure 12. The intrinsic CN properties worked as additional seed factors. (A) For seed weight #1, the end weight landscapes for all tested values of the intrinsic CN parameters, with the radii indicating the standard deviations across five repetitions of the learning. (B) The end weight landscape standard deviation for the whole range of intrinsic parameters, plotted for each seed weight. Note that the scale axis values are unrelated between the MDS plots.

We tested the impact of varying the intrinsic CN parameters for all SWs by plotting them in the same MDS plot. The effect of the seed weight on the end weight landscape was larger than those of the intrinsic CN parameters (Figure 12B), but it is clear that both categories of seed factors had a major impact on learning outcome. But irrespective of seed weight or intrinsic seed factors, the learning process always resulted in a selection of correlated sensors to become the end weight winner synapses (Figure 13).
The fact that the learning worked in a similar fashion across a range of settings of the intrinsic parameters defining the intrinsic dynamics of the CN shows that the model architecture has some degree of robustness, i.e., it is independent of precise constraints on neuron intrinsic behavior and will work similarly across a population of neurons with different intrinsic dynamics. Such differences within a CN population may be important for identifying different subsets of correlated sensors in the learning process, and hence for creating a diversification in the tuning across the population of cuneate neurons, similar to the one observed in vivo (Jörntell et al., 2014).

Figure 13. Improvements in correlation indexes for learning with different seed factors. The differences in correlation index between end weight winners and end weight losers (see Figure 5) plotted for all values of intrinsic CN parameters and all synaptic seed weights. Note that for some values of the intrinsic parameters (g[max] of 23e-9 and 9e-9, and τ for the local calcium activity of 50%) no learning took place (Figure 12), and the learning effects of those intrinsic parameter settings are hence omitted from this figure. Based on five CNs and 14 conditions, statistical comparisons were made for 70 conditions. The difference in correlation index between the end weight winners and losers was statistically significant in each of these 70 cases (p < 0.05, paired t-test), except one (p = 0.11 for CN3 at g[max] of 9e-7).

Discussion

Using an integrated model of cuneate nucleus neurons, based on the present in vivo whole cell recording data, and a bionic tactile sensor system, this study investigated what form of representations the brain could automatically learn from tactile interactions with the world. At the single neuron level, the CN learning resulted in the identification of sets of functionally correlated sensors (Figures 5, 13).
By reducing or eliminating the synaptic weights of other sensors, this process essentially amounts to a dimensionality reduction of the sensory space at the single neuron level. At the population level, each CN, as defined by its unique seed factors, gradually identified different sets of functionally correlated sensors (Figures 7–9, 13). As a consequence, the learning process induced decorrelations in the synaptic responses of the different CNs to the same stimulus, as well as in the responses of an individual CN to different stimuli (Figures 7, 8). Such decorrelated synaptic responses are a key requirement for a population of cuneate neurons to be able to segregate the high-dimensional tactile sensory space, where each sensor potentially represents a unique dimension (Spanne and Jörntell, 2013), into specific projections (Jörntell et al., 2014). These projections are defined by the types of sensory experiences made (Figure 11) and the physical-mechanical effects involved (see Introduction). Hence, the representation of the physical world that naturally unfolded given the neurophysiological constraints of the cuneate neurons in vivo (Figure 1) was a utility-based decomposition of the sensory space. Thereby, the architecture of the cuneate nucleus network (Figure 3) would help the brain to focus on the statistically recurring elements of experienced tactile sensory activation patterns.
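Decorrelation between CN responses can be quantified with a plain Pearson correlation: the responses of two model neurons to the same stimulus should move toward zero correlation as learning progresses. A minimal sketch, illustrative only and not the authors' analysis code:

```python
import math

def pearson(x, y):
    """Pearson correlation between two response vectors, e.g., the synaptic
    responses of two model CNs to the same stimulus sequence."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Applied before and after learning, a drop in |r| between two CNs' responses to the same stimulus would indicate the kind of decorrelation described above.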
It follows that an early rich variation of tactile sensory experiences (Pape et al., 2012; Loeb and Fishel, 2014), such as that achieved by motor babbling in the infant (Forssberg et al., 1995; Blumberg et al., 2013), is important for the brain because it would be needed to form representations with a sufficiently high …

Robustness of the Learning Process Suggests Little Dependence on Assumptions

In order to address a breadth of issues related to the representation of haptic input features in cuneate nucleus neurons (Jörntell et al., 2014) and their intrinsic properties, but which are too complex to be resolved by experimental recordings, we needed to model the cuneate neurons. Whereas the model system could naturally have featured a higher number of sensors or a larger number of contexts across which the generalization capability was tested, we find no reason to believe that this would have altered the principles reported here. An important component of the model was the learning process, which we envisage to be one of the earliest developmental steps toward a functional tactile neural system. Although there are several indications that the PA synapses on the cuneate neurons are extensively plastic, at least during some phase of development (Bengtsson et al., 2013) and following removal of PA inputs (Kambi et al., 2014; Mowery et al., 2014), the direct synaptic mechanisms underlying the plasticity have not been explored. Therefore, in order to model this learning process, we used an already validated idea of a calcium activity-dependent mechanism (Helias et al., 2008), which to some extent can be regarded as a general synaptic plasticity mechanism of neurons. It is in principle a variant of the BCM learning rule (Bienenstock et al., 1982), for which we obtained specific data from the biological cuneate projection neurons to approximate how their internal calcium signal can be expected to vary over time and with the input.
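As a rough illustration of the BCM family of rules referenced here (Bienenstock et al., 1982): the weight change is proportional to presynaptic activity times postsynaptic activity times the distance of that activity from a sliding threshold. The sketch below is the textbook BCM update, not the calcium-based variant actually implemented in the model; all names and constants are illustrative:

```python
def bcm_step(w, x, theta, lr=0.01, tau_theta=0.1):
    """One BCM update: potentiate when postsynaptic activity y exceeds the
    sliding threshold theta, depress when below it.

    w: list of synaptic weights; x: presynaptic activities; theta: threshold.
    Returns the updated (w, theta).
    """
    y = sum(wi * xi for wi, xi in zip(w, x))      # postsynaptic activity
    w = [wi + lr * xi * y * (y - theta) for wi, xi in zip(w, x)]
    theta = theta + tau_theta * (y * y - theta)   # sliding threshold ~ <y^2>
    return w, theta
```

The sliding threshold is what makes the rule self-stabilizing: persistent high activity raises the bar for further potentiation, echoing the self-stabilizing learning described in this study.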
In comparison with a previous study of how calcium-dependent STDP varies across a range of parameter settings (Graupner and Brunel, 2012), our approach had a less steep rising phase in the postsynaptic calcium response, which can be expected to dampen the dependence of our model on exact spike timings and instead make it rely to a larger extent on the temporal variation in the intensity of the synaptic activation (Graupner et al., 2016) and on how it correlated with the overall activity of the postsynaptic neuron. Naturally, the actual cuneate learning process is likely to differ from ours in many details, but the fundamental properties of our model, i.e., an extreme dimensionality reduction in each neuron and the identification of different correlated sets of sensors by the population of CNs (Figure 7), were remarkably robust to manipulation of a wide variety of neuronal parameters that influenced the learning process (Figures 12, 13). It can also be noted that after learning, our CNs had synaptic weight distributions similar to those of adult cuneate neurons (Figure 5C and Figure S3) (Bengtsson et al., 2013), as well as response dynamics that qualitatively captured the dynamics recorded in vivo (Figure 6D, Figure S1). Plasticity in inhibitory synaptic inputs could in principle help improve the segregation of inputs (Garrido et al., 2016). Our analysis of the biological inhibitory synaptic inputs in the cuneate nucleus did not, however, overtly support a specific role in such a function. The recorded inhibitory interneurons in the cuneate nucleus were activated from wide receptive fields, and in the projection neurons inhibition was found to originate from similarly wide receptive fields, with relatively uniform inhibitory synaptic weights (Figures 1C,D).
This is in contrast to the excitatory synaptic inputs from PAs, which in cuneate projection neurons have been found to have strongly differentiated weights, suggestive of an extensive learning history in these synapses (Bengtsson et al., 2013). In our CN model, the plasticity in the inhibitory synapses instead assumed the role of activity balancing, i.e., ensuring that the excitable calcium responses did not go out of bounds (neither upwards nor downwards), which was an important function for ensuring stable, gradual learning (cf. Figure 10).

Advantages of the Representational Format and of the Architecture

The utility-based decomposition of the tactile sensory space learned with our CN architecture is likely to correspond to an early-level acquisition of a representation of the experienced, and therefore useful, haptic input features, which were previously found to be represented in cuneate neurons in vivo in adult animals (Jörntell et al., 2014). Being based on physical-mechanical effects (Hayward, 2011; Hayward et al., 2014), correspondences between the haptic input features and sensory information from other modalities, such as vision and hearing, can be expected to be abundant. Hence, this type of tactile representation would provide a basis for rich cross-modality representations of the rules of object interactions, which can be learnt by the higher-level centers of the brain to conceptualize the external world (O'Regan and Noe, 2001). This model does not contradict the possibility that the activity of individual cuneate neurons or neurons of the somatosensory cortex could correlate with physical parameters such as edge orientation, texture, or even adaptation rate, but it indicates that the underlying organizational and coding principle is likely to be more intricate than that.
These principles are potentially portable to AI and robotic systems, which face the challenge of how to design sensor systems that support more general functionality in interactions with the external world (Brooks, 1991; Davis et al., 1993; Service, 2014). AI systems are generally designed to deliver performance on specific tasks, such as the game of Go (Gibney, 2016), where they deliver impressive performance if given very large amounts of training data (Lecun et al., 2015). However, they seem to lack the wide versatility or generalization capability of biological systems (Nguyen et al., 2015; Athalye and Sutskever, 2017), which could be related to the known problems of classical pattern recognition systems (Spanne and Jorntell, 2015), and which the current alternative mode of representation of sensory input data may help resolve. These technical systems also generally suffer from a strong dependence on the initial weight configuration (Glorot and Bengio, 2010; He et al., 2015). This dependence can be eliminated by specifically dedicated computational mechanisms calculating appropriate initial weights, or by other computationally intensive methods (Williams and Hinton, 1986; Földiak, 1990; Glorot and Bengio, 2010), which, however, may not be available to the brain. Our architecture, featuring a set of neuronal auto-regulatory mechanisms and self-stabilizing learning, provides potential solutions to these challenges within DNN and AI systems as well.

Concluding Remarks

A main advantage of representing tactile experiences by decomposing them into vector components of their constituent physical effects, rather than by pattern recognition applied directly to pixelated sensory data, is the powerful generalization possibilities (Spanne and Jorntell, 2015).
With this mode of representation, the brain could in principle learn to identify the extent of the space of theoretically possible physical effects within a short time, using just a few types of disparate skin-object interactions, and subsequently learn to interpolate new points within this space at a gradually higher resolution. A major issue for future studies is to explore whether brain processing operates in terms of such vector components also globally in the neocortex.

Author Contributions

UR, AS, and HJ designed the model. FB and HJ performed the patch clamp recording experiments and analyzed the biological data. UR, AM, and CO developed and experimented with the bionic finger for electrophysiological and translational bioengineering studies. UR, AS, AM, CO, and HJ designed the analysis of the model data. UR, CO, AM, and HJ wrote the paper.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Funding

This work was supported by the Ministry of Education, Universities and Research of the Italian Republic and the Swedish Research Council, via the Italy-Sweden bilateral research project J52I15000030005 SensBrain (Brain network mechanisms for integration of natural tactile input patterns); by the EU Grant FET 611687 NEBIAS Project (NEurocontrolled BIdirectional Artificial upper limb and hand prosthesiS); by the EU Grant FP7-NMP 228844 NANOBIOTOUCH project (Nanoresolved multi-scan investigations of human tactile sensations and tissue engineered nanobiosensors); by the national project B81J12002680008 PRIN/HandBot (Biomechatronic hand prostheses endowed with bio-inspired tactile perception, bi-directional neural interfaces and distributed sensori-motor control); by SENSOPAC (EU FP6, IST-028056-SENSOPAC) and The Hand Embodied (EU FP7 IST-248587-THE); and by the Swedish Research Council (project grant no. K2014-63X-14780-12-3).
Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fncel.2018.00210/full#supplementary-material

References

Abraira, V. E., and Ginty, D. D. (2013). The sensory neurons of touch. Neuron 79, 618–639. doi: 10.1016/j.neuron.2013.07.051
Athalye, A., and Sutskever, I. (2017). Synthesizing robust adversarial examples. arXiv [preprint] arXiv:1707.07397.
Barlow, H. B. (1972). Single units and sensation: a neuron doctrine for perceptual psychology? Perception 1, 371–394. doi: 10.1068/p010371
Beccai, L., Roccella, S., Arena, A., Valvo, F., Valdastri, P., Menciassi, A., et al. (2005). Design and fabrication of a hybrid silicon three-axial force sensor for biomechanical applications. Sens. Actuators A Phys. 120, 370–382. doi: 10.1016/j.sna.2005.01.007
Bengtsson, F., Brasselet, R., Johansson, R. S., Arleo, A., and Jorntell, H. (2013). Integration of sensory quanta in cuneate nucleus neurons in vivo. PLoS ONE 8:e56630. doi: 10.1371/
Bi, G. Q., and Poo, M. M. (1998). Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. J. Neurosci. 18, 10464–10472. doi:
Bienenstock, E. L., Cooper, L. N., and Munro, P. W. (1982). Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. J. Neurosci. 2, 32–48. doi: 10.1523/JNEUROSCI.02-01-00032.1982
Blumberg, M. S., Marques, H. G., and Iida, F. (2013). Twitching in sensorimotor development from sleeping rats to robots. Curr. Biol. 23, R532–537. doi: 10.1016/j.cub.2013.04.075
Brooks, R. A. (1991). "Intelligence without reason," in Proceedings of International Joint Conference on Artificial Intelligence (Sydney, NSW), 569–595.
Davis, R., Shrobe, H., and Szolovits, P. (1993). What is a knowledge representation? AI Magazine 14, 17
Földiak, P. (1990). Forming sparse representations by local anti-Hebbian learning. Biol. Cybern. 64, 165–170. doi: 10.1007/BF02331346
Forssberg, H., Eliasson, A., Kinoshita, H., Westling, G., and Johansson, R. (1995). Development of human precision grip. Exp. Brain Res. 104, 323–330. doi: 10.1007/BF00242017
Fourcaud-Trocmé, N., Hansel, D., Van Vreeswijk, C., and Brunel, N. (2003). How spike generation mechanisms determine the neuronal response to fluctuating inputs. J. Neurosci. 23, 11628–11640. doi:
Fyffe, R. E., Cheema, S. S., Light, A. R., and Rustioni, A. (1986). Intracellular staining study of the feline cuneate nucleus. II. Thalamic projecting neurons. J. Neurophysiol. 56, 1284–1296. doi:
Garrido, J. A., Luque, N. R., Tolu, S., and D'angelo, E. (2016). Oscillation-driven spike-timing dependent plasticity allows multiple overlapping pattern recognition in inhibitory interneuron networks. Int. J. Neural Syst. 26, 1650020. doi: 10.1142/S0129065716500209
Gibney, E. (2016). Google AI algorithm masters ancient game of Go. Nature 529, 445–446. doi: 10.1038/529445a
Glorot, X., and Bengio, Y. (2010). "Understanding the difficulty of training deep feedforward neural networks," in Proceedings of Artificial Intelligence and Statistics (Sardinia), 249–256.
Graupner, M., and Brunel, N. (2012). Calcium-based plasticity model explains sensitivity of synaptic changes to spike pattern, rate, and dendritic location. Proc. Natl. Acad. Sci. U.S.A. 109, 3991–3996. doi: 10.1073/pnas.1109359109
Graupner, M., Wallisch, P., and Ostojic, S. (2016). Natural firing patterns imply low sensitivity of synaptic plasticity to spike timing compared with firing rate. J. Neurosci. 36, 11238–11258. doi:
Hayward, V. (2011). Is there a 'plenhaptic' function? Phil. Trans. R. Soc. B 366, 3115–3122. doi: 10.1098/rstb.2011.0150
Hayward, V., Terekhov, A. V., Wong, S. C., Geborek, P., Bengtsson, F., and Jorntell, H. (2014). Spatio-temporal skin strain distributions evoke low variability spike responses in cuneate neurons. J. R. Soc. Interface 11, 20131015. doi: 10.1098/rsif.2013.1015
He, K., Zhang, X., Ren, S., and Sun, J. (2015). "Delving deep into rectifiers: surpassing human-level performance on ImageNet classification," in Proceedings of the IEEE International Conference on Computer Vision (Santiago), 1026–1034.
Hebb, D. O. (1949). The Organization of Behavior: A Neuropsychological Approach. Oxford: John Wiley & Sons.
Helias, M., Rotter, S., Gewaltig, M. O., and Diesmann, M. (2008). Structural plasticity controlled by calcium based correlation detection. Front. Comput. Neurosci. 2, 7. doi: 10.3389/
Higley, M. J., and Sabatini, B. L. (2012). Calcium signaling in dendritic spines. Cold Spring Harb. Perspect. Biol. 4:a005686. doi: 10.1101/cshperspect.a005686
Huguenard, J. R. (1996). Low-threshold calcium currents in central nervous system neurons. Annu. Rev. Physiol. 58, 329–348. doi: 10.1146/annurev.ph.58.030196.001553
Hulme, S. R., and Connelly, W. M. (2014). L-type calcium channel-dependent inhibitory plasticity in the thalamus. J. Neurophysiol. 112, 2037–2039. doi: 10.1152/jn.00918.2013
Izhikevich, E. M. (2003). Simple model of spiking neurons. IEEE Trans. Neural Netw. 14, 1569–1572. doi: 10.1109/TNN.2003.820440
Johansson, R. S., and Flanagan, J. R. (2009). Coding and use of tactile signals from the fingertips in object manipulation tasks. Nat. Rev. Neurosci. 10, 345–359. doi: 10.1038/nrn2621
Jörntell, H., Bengtsson, F., Geborek, P., Spanne, A., Terekhov, A. V., and Hayward, V. (2014). Segregation of tactile input features in neurons of the cuneate nucleus. Neuron 83, 1444–1452. doi:
Jörntell, H., and Ekerot, C. F. (2006). Properties of somatosensory synaptic integration in cerebellar granule cells in vivo. J. Neurosci. 26, 11786–11797. doi: 10.1523/JNEUROSCI.2939-06.2006
Jörntell, H., and Hansel, C. (2006). Synaptic memories upside down: bidirectional plasticity at cerebellar parallel fiber-Purkinje cell synapses. Neuron 52, 227–238. doi: 10.1016/j.neuron.2006.09.032
Kambi, N., Halder, P., Rajan, R., Arora, V., Chand, P., Arora, M., et al. (2014). Large-scale reorganization of the somatosensory cortex following spinal cord injuries is due to brainstem plasticity. Nat. Commun. 5, 3602. doi: 10.1038/ncomms4602
Kawaguchi, S., Nagasaki, N., and Hirano, T. (2011). Dynamic impact of temporal context of Ca2+ signals on inhibitory synaptic plasticity. Sci. Rep. 1:143. doi: 10.1038/srep00143
Koester, H. J., and Sakmann, B. (1998). Calcium dynamics in single spines during coincident pre- and postsynaptic activity depend on relative timing of back-propagating action potentials and subthreshold excitatory postsynaptic potentials. Proc. Natl. Acad. Sci. U.S.A. 95, 9596–9601. doi: 10.1073/pnas.95.16.9596
Lecun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning. Nature 521, 436–444. doi: 10.1038/nature14539
Llinás, R., and Jahnsen, H. (1982). Electrophysiology of mammalian thalamic neurones in vitro. Nature 297, 406–408. doi: 10.1038/297406a0
Loeb, G. E., and Fishel, J. A. (2014). Bayesian action&perception: representing the world in the brain. Front. Neurosci. 8:341. doi: 10.3389/fnins.2014.00341
Lourenço, J., Pacioni, S., Rebola, N., Van Woerden, G. M., Marinelli, S., Digregorio, D., et al. (2014). Non-associative potentiation of perisomatic inhibition alters the temporal coding of neocortical layer 5 pyramidal neurons. PLoS Biol. 12:e1001903. doi: 10.1371/journal.pbio.1001903
Markram, H., Lubke, J., Frotscher, M., and Sakmann, B. (1997). Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science 275, 213–215. doi: 10.1126/science.275.5297.213
Mazzoni, A., Panzeri, S., Logothetis, N. K., and Brunel, N. (2008). Encoding of naturalistic stimuli by local field potential spectra in networks of excitatory and inhibitory neurons. PLoS Comput. Biol. 4:e1000239. doi: 10.1371/journal.pcbi.1000239
Molineux, M. L., Mehaffey, W. H., Tadayonnejad, R., Anderson, D., Tennent, A. F., and Turner, R. W. (2008). Ionic factors governing rebound burst phenotype in rat deep cerebellar neurons. J. Neurophysiol. 100, 2684–2701. doi: 10.1152/jn.90427.2008
Mowery, T. M., Kostylev, P. V., and Garraghty, P. E. (2014). AMPA and GABA(A/B) receptor subunit expression in the cuneate nucleus of adult squirrel monkeys during peripheral nerve regeneration. Neurosci. Lett. 559, 141–146. doi: 10.1016/j.neulet.2013.11.054
Nguyen, A., Yosinski, J., and Clune, J. (2015). "Deep neural networks are easily fooled: high confidence predictions for unrecognizable images," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Boston, MA), 427–436.
Niedermeyer, E., and Da Silva, F. L. (2005). Electroencephalography: Basic Principles, Clinical Applications, and Related Fields. London: Lippincott Williams & Wilkins.
Oddo, C. M., Controzzi, M., Beccai, L., Cipriani, C., and Carrozza, M. C. (2011). Roughness encoding for discrimination of surfaces in artificial active-touch. IEEE Trans. Robot. 27, 522–533. doi:
Oddo, C. M., Mazzoni, A., Spanne, A., Enander, J. M., Mogensen, H., Bengtsson, F., et al. (2017). Artificial spatiotemporal touch inputs reveal complementary decoding in neocortical neurons. Sci. Rep. 8:45898. doi: 10.1038/srep45898
Oddo, C. M., Raspopovic, S., Artoni, F., Mazzoni, A., Spigler, G., Petrini, F., et al. (2016). Intraneural stimulation elicits discrimination of textural features by artificial fingertip in intact and amputee humans. Elife 5:e09148. doi: 10.7554/eLife.09148
Oddo, C. M., Valdastri, P., Beccai, L., Roccella, S., Carrozza, M. C., and Dario, P. (2007). Investigation on calibration methods for multi-axis, linear and redundant force sensors. Meas. Sci. Tech. 18, 623–631. doi: 10.1088/0957-0233/18/3/011
O'Regan, J. K., and Noe, A. (2001). A sensorimotor account of vision and visual consciousness. Behav. Brain Sci. 24, 939–973. discussion: 973–1031. doi: 10.1017/S0140525X01000115
Ostojic, S., and Brunel, N. (2011). From spiking neuron models to linear-nonlinear models. PLoS Comput. Biol. 7:e1001056. doi: 10.1371/journal.pcbi.1001056
Pape, L., Oddo, C. M., Controzzi, M., Cipriani, C., Förster, A., Carrozza, M. C., et al. (2012). Learning tactile skills through curious exploration. Front. Neur. 6:6. doi: 10.3389/fnbot.2012.00006
Pei, Y. C., Hsiao, S. S., Craig, J. C., and Bensmaia, S. J. (2011). Neural mechanisms of tactile motion integration in somatosensory cortex. Neuron 69, 536–547. doi: 10.1016/j.neuron.2010.12.033
Petersson, P., Waldenström, A., Fåhraeus, C., and Schouenborg, J. (2003). Spontaneous muscle twitches during sleep guide spinal self-organization. Nature 424, 72–75. doi: 10.1038/nature01719
Rongala, U. B., Mazzoni, A., and Oddo, C. M. (2017). Neuromorphic artificial touch for categorization of naturalistic textures. IEEE Trans. Neural Netw. Learn. Syst. 28, 819–829. doi: 10.1109/
Saarinen, A., Linne, M. L., and Yli-Harja, O. (2008). Stochastic differential equation model for cerebellar granule cell excitability. PLoS Comput. Biol. 4:e1000004. doi: 10.1371/journal.pcbi.1000004
Sathian, K. (2016). Analysis of haptic information in the cerebral cortex. J. Neurophysiol. 116, 1795–1806. doi: 10.1152/jn.00546.2015
Service, R. F. (2014). Minds of their own. Science 346, 182–183. doi: 10.1126/science.346.6206.182
Shao, Y., Hayward, V., and Visell, Y. (2016). Spatial patterns of cutaneous vibration during whole-hand haptic interactions. Proc. Natl. Acad. Sci. U.S.A. 113, 4188–4193. doi: 10.1073/pnas.1520866113
Spanne, A., and Jorntell, H. (2015). Questioning the role of sparse coding in the brain. Trends Neurosci. 38, 417–427. doi: 10.1016/j.tins.2015.05.005
Spanne, A., and Jörntell, H. (2013). Processing of multi-dimensional sensorimotor information in the spinal and cerebellar neuronal circuitry: a new hypothesis. PLoS Comput. Biol. 9:e1002979. doi:
Tigaret, C. M., Olivo, V., Sadowski, J. H. L. P., Ashby, M. C., and Mellor, J. R. (2016). Coordinated activation of distinct Ca2+ sources and metabotropic glutamate receptors encodes Hebbian synaptic plasticity. Nat. Commun. 7:10289. doi: 10.1038/ncomms10289
Turrigiano, G. (2011). Too many cooks? Intrinsic and synaptic homeostatic mechanisms in cortical circuit refinement. Annu. Rev. Neurosci. 34, 89–103. doi: 10.1146/annurev-neuro-060909-153238
Turrigiano, G. G., and Nelson, S. B. (2004). Homeostatic plasticity in the developing nervous system. Nat. Rev. Neurosci. 5, 97–107. doi: 10.1038/nrn1327
Vallbo, A. B., and Johansson, R. S. (1984). Properties of cutaneous mechanoreceptors in the human hand related to touch sensation. Hum. Neurobiol. 3, 3–14.
Victor, J. D., and Purpura, K. P. (1996). Nature and precision of temporal coding in visual cortex: a metric-space analysis. J. Neurophysiol. 76, 1310–1326. doi: 10.1152/jn.1996.76.2.1310
Wang, S. S. H., Denk, W., and Hausser, M. (2000). Coincidence detection in single dendritic spines mediated by calcium release. Nat. Neurosci. 3, 1266–1273. doi: 10.1038/81792
Weinberg, R. J., Pierce, J. P., and Rustioni, A. (1990). Single fiber studies of ascending input to the cuneate nucleus of cats: I. Morphometry of primary afferent fibers. J. Comp. Neurol. 300, 113–133. doi: 10.1002/cne.903000108
Williams, D., and Hinton, G. (1986). Learning representations by back-propagating errors. Nature 323, 533–538. doi: 10.1038/323533a0
Yin, P., Esser, E., and Xin, J. (2014). Ratio and difference of l1 and l2 norms and sparse representation with coherent dictionaries. Commun. Inform. Syst. 14, 87–109. doi: 10.4310/CIS.2014.v14.n2.a2

Keywords: cuneate nucleus, neurophysiology, neuronal plasticity, intrinsic dynamics, tactile, touch, synaptic integration

Citation: Rongala UB, Spanne A, Mazzoni A, Bengtsson F, Oddo CM and Jörntell H (2018) Intracellular Dynamics in Cuneate Nucleus Neurons Support Self-Stabilizing Learning of Generalizable Tactile Representations. Front. Cell. Neurosci. 12:210.
doi: 10.3389/fncel.2018.00210

Received: 26 March 2018; Accepted: 26 June 2018; Published: 31 July 2018.

Edited by: Philippe Isope, Centre National de la Recherche Scientifique (CNRS), France

Copyright © 2018 Rongala, Spanne, Mazzoni, Bengtsson, Oddo and Jörntell. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Henrik Jörntell, henrik.jorntell@med.lu.se
Fundamental frequency is the main factor used in evaluating the vibration induced when a vehicle passes across a bridge. With knowledge of the fundamental frequency, bridges can be evaluated and designed so as to avoid the critical range of 1.5 Hz to 4.5 Hz, which is occupied by most vehicles. It is therefore crucial to develop a reliable method for estimating the fundamental frequency of bridges. To address this issue, numerical analysis combined with a theoretical method is applied to estimate the fundamental frequency of multicell box-girder bridges.
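For orientation, the way fundamental frequency depends on span length can be seen from the classical simply supported beam formula, f1 = (π / 2L²)·√(EI/m). The sketch below is a generic beam-theory illustration with standard symbols; it is not one of the empirical expressions derived in this study:

```python
import math

def fundamental_frequency(L, EI, m):
    """First flexural natural frequency (Hz) of a simply supported beam:
    f1 = (pi / (2 L^2)) * sqrt(EI / m), with span L (m), flexural rigidity
    EI (N*m^2), and mass per unit length m (kg/m)."""
    return (math.pi / (2.0 * L ** 2)) * math.sqrt(EI / m)

def in_critical_range(f, lo=1.5, hi=4.5):
    """True if f falls in the 1.5-4.5 Hz band occupied by most heavy vehicles."""
    return lo <= f <= hi
```

Because f1 scales with 1/L², doubling the span quarters the frequency, which is why long spans tend to fall below the critical vehicle band while mid-length spans can land inside it.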
The effect of span length, number of boxes, and skew angle on the estimation of this factor is discussed. Finally, reliable expressions are proposed to predict the first fundamental frequency of this type of bridge, and the accuracy of the expressions is verified. The results indicate that the fundamental frequency decreases as span length increases, due to the development of cracks and the accompanying decrease in girder stiffness. The dynamic effects on a bridge may have a considerable influence on its ultimate limit state behavior by amplifying the maximum stress experienced by each of its members. The dynamic responses of a bridge are influenced by several factors; to evaluate these factors it is necessary to characterize the bridge in terms of its fundamental frequency and mode shapes. Traditionally, bridges are designed using static loads, which are increased by the bridge's dynamic load allowance (DLA) factor (Ashebo et al., 2007a,b; Billing, 1984; Chang and Lee, 1994; Kashif, 1992). Extensive research and development has been carried out to understand the vibration of bridges caused by natural sources, and to determine the dynamic allowance factor as a function of the fundamental frequency, owing to its uniqueness (OHBDC, 1983; Samaan et al., 2007; Sennah et al., 2004; Senthilvasan et al., 2002; Zhang et al., 2003). Meanwhile, in practice, heavy trucks occupy a quite narrow range of frequencies, 1.5 Hz to 4.5 Hz; it is therefore important to find a reliable method to estimate the fundamental frequency of bridges and to design structures so as to avoid this critical range (Moghimi and Ronagh, 2008). In addition, the dynamic deflection due to vibration can cause discomfort to pedestrians using the bridge. It is known that the human body tends to react more to torsional oscillations than to flexural ones.
A large number of studies on free-vibration analysis have been performed on box-girder bridges. Komatsu and Nakai (1970) and Sennah (1998) evaluated the natural vibration responses of straight and curved I- or box-girder bridges using Vlasov's beam theory. Subsequently, Heins and Sahin (1979) applied finite difference methods to solve the differential equations of motion based on Vlasov's thin-walled beam theory. Culver and Oestel (1969) used a closed-form solution of the equation of motion to determine the fundamental frequencies of horizontally curved beams. Cheung and Cheung (1972) used the finite strip technique for free-vibration analysis of straight and curved bridges. Cantieni (1984) conducted an experimental study to establish a reliable relationship between the fundamental frequency and the maximum span length of bridges. It was found that the bracing system significantly affected the fundamental frequency of composite bridges. Finite element analysis was used to extract the dynamic characteristics of bridge-vehicle interaction and to establish reliable expressions for predicting the dynamic responses of curved multiple box-girder bridges, but the recommendations are unacceptable in the case of skew bridges. A number of studies used the finite element method to examine the forced-vibration response of instrumented passing vehicles and the free-vibration responses of composite cellular box-shaped bridges. Comparison of the studies indicated that the finite element method obtained sufficiently reliable results compared with other numerical methods (Brownjohn et al., 2008; Fujino et al., 2010; Lin and Yang, 2005; Siringoringo and Fujino, 2012). In order to reduce the significant difference between the estimates of the fundamental frequency obtained from the codes and from theoretical methods, Gao et al. (2012) proposed an improved numerical method for straight bridges. However, some limitations still prevent these methods from actually being used in practice.
Some enhancements, improvements, or further research is required to allow them to be applied in the field (Wang et al., 2013). The results of a study on the effects of damage and decay ratio on fundamental frequency by Pandey and Benipal (2011) indicated that even SDOF bilinear beams have multiple resonance frequencies at which resonance can take place. However, an actual structure is never an SDOF system and may have infinite degrees of freedom, and hence multiple fundamental frequencies, depending upon the actual stiffness operating, based on the cracking pattern. As noted above, existing studies have mainly concentrated on the free-vibration analysis of straight and curved bridges, and there are no reliable methods to determine the fundamental frequencies of skewed bridges. Therefore, in this study the results of an extensive numerical study on the free-vibration behavior of continuous skewed concrete multicell box-girder bridges are evaluated. The prototype bridges are analyzed using a three-dimensional finite element method. Empirical expressions are established using regression analysis to determine the fundamental frequencies of such bridges. The prototype bridges used in this study are highly representative of the majority of concrete skew multicell box-girder bridges. Eighty-five typical bridges with span lengths ranging from 30 to 90 m have been designed based on the Canadian Highway Bridge Design Code (CHBDC, 2006). The number of boxes varies from two to six, depending on the width of the bridge. All selected bridges are two-equal-span continuous, with bridge widths (W) of 9.14 m, 14.00 m, and 17.00 m. Since most heavy-vehicle frequencies occupy a relatively narrow band in practice, 1.5 Hz to 4.5 Hz, it is preferred to design bridges so as to avoid this critical range, if at all possible.
The preliminary study indicated that the thickness of the deck has an insignificant effect on the dynamic response of multicell box-girder bridges, so constant values of 20 cm and 15 cm were selected for the upper and bottom deck thicknesses, respectively. To consider the effect of skewness on the free-vibration response of this type of bridge, the skew angle ranged from 0° to 60°, which is within the range of applicability introduced by the American Association of State Highway and Transportation Officials' Load and Resistance Factor Design specifications (AASHTO, 2008). Table 1 shows the characteristics of the selected bridges, in which the symbols NB and NL stand for the number of boxes and the number of loaded lanes, respectively. A typical bridge cross-section is shown in Figure 1. Reinforced concrete intermediate diaphragms are used for all prototype bridges at a spacing of 7.5 m (25 ft), alongside the end diaphragms, to enhance the stability of the structure under construction loads. In this study, the prototype bridges are modeled with CSiBridge software V15, using a four-node, three-dimensional shell element with six degrees of freedom at each node. The top and bottom shell elements of the webs are integrated with the top and bottom slabs at connection points to improve the compatibility of the deformations (Mohseni and Khalim Rashid, 2013). The bridge modeling was verified by comparing the live load distribution factors (LDF) derived from field testing with those from the method adopted herein. Boundary conditions are simulated using a hinge bearing at the starting abutment and roller bearings for all other supports. Figure 2 shows a finite element model of a 200 m three-box multicell box-girder bridge. A sensitivity study is first performed to investigate the influence of the key parameters on the fundamental frequency of the prototype bridges. These parameters consisted of the span length, number of loaded lanes, skew angle, and number of boxes.
The effect of span length on the fundamental frequency for three-box bridges with span lengths ranging from 30 m to 90 m is plotted in Figure 4. It can be observed that the fundamental frequency decreases considerably with increasing span length, by up to 62.5%. This difference is caused by the influence of other parameters, especially the skew angle. The mode shapes for a three-lane, three-box bridge with a span length of 45 m are plotted in Figure 3. As expected, for straight bridges the first mode shape is always purely flexural (Ashebo et al., 2007a,b), while torsional effects contribute to the first mode shapes of skewed concrete bridges, as indicated in Figure 5. The sharp drop in fundamental frequency as span length increases is due to the development of cracks and, subsequently, decreases in modal parameters such as the damping ratio and the stiffness of the girders. The same observation was made by Wu et al. (2011) in experimental studies on reinforced concrete slabs and beams.
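The roughly inverse-square dependence of fundamental frequency on span length is consistent with classical beam theory. As a hedged reference point only (a simply supported uniform Euler-Bernoulli beam is a drastic simplification of a continuous multicell box girder), the first flexural frequency is

```latex
f_1 = \frac{\pi}{2L^2}\sqrt{\frac{EI}{\bar{m}}}
```

where \(L\) is the span length, \(EI\) the flexural rigidity and \(\bar{m}\) the mass per unit length, so doubling \(L\) divides \(f_1\) by roughly four, in line with the trend observed in Figure 4.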
Find the LCM and HCF of the following integers by applying the prime factorization method: 17, 23 and 29.
NCERT Solutions for Class 10 Maths Chapter 1, Real Numbers (NCERT Books for Session 2022-2023, CBSE Board and UP Board), Exercise 1.2, Page 11, Question 3, Part 2.
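Since 17, 23 and 29 are all prime, taking minimum and maximum exponents gives HCF = 1 and LCM = 17 × 23 × 29 = 11339. A minimal sketch of the prime-factorization method (trial division; `factorize` and `hcfLcm` are illustrative names, not from the source):

```cpp
#include <algorithm>
#include <cassert>
#include <map>
#include <vector>

// Trial-division prime factorization: returns a map prime -> exponent.
std::map<long long, int> factorize(long long n) {
    std::map<long long, int> f;
    for (long long p = 2; p * p <= n; ++p)
        while (n % p == 0) { ++f[p]; n /= p; }
    if (n > 1) ++f[n];
    return f;
}

// HCF = product of common primes raised to their minimum exponents;
// LCM = product of all primes raised to their maximum exponents.
void hcfLcm(const std::vector<long long>& xs, long long& hcf, long long& lcm) {
    std::map<long long, int> mins = factorize(xs[0]);
    std::map<long long, int> maxs = mins;
    for (size_t k = 1; k < xs.size(); ++k) {
        std::map<long long, int> f = factorize(xs[k]);
        // Intersect exponents for the HCF.
        for (std::map<long long, int>::iterator it = mins.begin(); it != mins.end(); ) {
            std::map<long long, int>::iterator j = f.find(it->first);
            if (j == f.end()) mins.erase(it++);
            else { it->second = std::min(it->second, j->second); ++it; }
        }
        // Union with maximum exponents for the LCM.
        for (std::map<long long, int>::iterator j = f.begin(); j != f.end(); ++j) {
            int& e = maxs[j->first];  // defaults to 0 if the prime is new
            e = std::max(e, j->second);
        }
    }
    hcf = 1; lcm = 1;
    for (std::map<long long, int>::iterator it = mins.begin(); it != mins.end(); ++it)
        for (int i = 0; i < it->second; ++i) hcf *= it->first;
    for (std::map<long long, int>::iterator it = maxs.begin(); it != maxs.end(); ++it)
        for (int i = 0; i < it->second; ++i) lcm *= it->first;
}
```

For pairwise-coprime inputs such as these, the HCF is 1 and the LCM is simply the product of the numbers.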
Focal length
The focal length is the distance between the principal plane of an optical lens or a curved mirror and the focus (focal point). Systems consisting of several lenses and/or mirrors, such as the lenses of cameras or microscopes, have focal lengths that are defined analogously. The positions of the principal planes (two per system) cannot be specified as easily as for a single lens (inside it) or a single mirror (at its vertex). The focal length is a concept of paraxial optics. It therefore only applies to rays that have a small angle to, and a small distance from, the optical axis of the imaging system. Large focal lengths are produced by flat, slightly curved surfaces; small focal lengths by strong curvatures. Especially for individual lenses, the reciprocal of the focal length is called the refractive power. For converging lenses and concave mirrors the focal length is defined as a positive value, for diverging lenses and convex mirrors as a negative value. The focal length is used when applying the lens equation. In photography, the focal length of the lens and the recording format determine the angle of view (see also format factor). This also applies to the intermediate image in the microscope. With telescopes and binoculars, the focal lengths of the objective and the eyepiece together determine the magnification.

Refractive power
The reciprocal of the focal length \( f \) is called the refractive power \( D \). It is given in diopters for spectacle lenses:
\[ D = \frac{1}{f} \]

Measurement of the focal length
According to the imaging equation, for a sharp optical image through a thin lens the reciprocal of the focal length is equal to the sum of the reciprocals of the object distance \( g \) and the image distance \( b \):
\[ \frac{1}{f} = \frac{1}{b} + \frac{1}{g} \]
This can be used to determine the focal length of the lens.
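The imaging equation can be checked with a quick numerical sketch (illustrative values; `thinLensF` is a made-up helper name):

```cpp
#include <cassert>
#include <cmath>

// Imaging equation 1/f = 1/b + 1/g, solved for the focal length f
// given object distance g and image distance b.
double thinLensF(double g, double b) {
    return 1.0 / (1.0 / b + 1.0 / g);
}
```

For example, an object at g = 30 cm imaged sharply at b = 15 cm implies f = 10 cm.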
If the object depicted is very far away, the relationship becomes particularly simple: the focal length is approximately equal to the image distance and can be read off directly as the distance between the image and the lens. One method that works without a distant object is autocollimation, in which the distant object is replaced by a flat mirror. The Bessel method for determining the focal length of thin lenses exploits the fact that, for a fixed distance \( e \) between the object and the image, two positions of the lens produce a sharp image. The focal length can then be calculated from the distance \( d \) between these two lens positions and the object-image distance as \( f = (e^2 - d^2)/(4e) \). For thick lenses and imaging systems with several optical components, the distance between the principal planes usually cannot be neglected. In that case, estimating the magnification ratio can provide more accurate results. The Abbe method records a set of positions in which the imaging system images objects in focus. These points satisfy a straight-line equation; the focal length and the positions of the principal planes can be determined from the parameters of the line. Opticians use wavefront analysis to determine the focal length of aspherical lenses and the refractive power of varifocal lenses, which varies over the surface. A Hartmann-Shack sensor is usually used for this. For historical reasons, the automated devices are called lens meters.

Calculating the focal length

Refractive surface
The boundary surface between two optical media with different refractive indices is called a refractive surface. If the light ray comes from the left, then the refractive index is \( n \) on the left side and \( n' \) on the right side of the interface. The curvature of the interface is described by the radius of curvature \( r \).
If the center of the circle that describes the interface lies on the side facing away from the incident light, then \( r \) is positive; otherwise it is negative. A flat interface has the radius of curvature \( r = \infty \). The image-side focal length of the surface is
\[ f' = r \, \frac{n'}{n' - n} \]
The focal length of the other side is obtained by exchanging the refractive indices, since the light now comes from the right after crossing the surface:
\[ f = r \, \frac{n}{n - n'} = -r \, \frac{n}{n' - n} \]

Lens
The refraction of a lens of thickness \( d \) can be calculated, e.g. by means of matrix optics, from the refractions of its two spherical interfaces. With the focal lengths \( f'_1, f_1 \) and \( f'_2, f_2 \) of the two surfaces and their distance \( d \), the image-side focal length of the lens is
\[ f' = \frac{f'_1 f'_2}{d - f'_1 - f_2} \]
Substituting the surface focal lengths given above yields
\[ \frac{1}{f'} = \frac{n' - n}{n} \left( \frac{1}{r_1} - \frac{1}{r_2} \right) + \frac{(n' - n)^2 \, d}{n' \, n \, r_1 r_2} \]
the image-side focal length of the lens as a function of the radii of curvature \( r_1, r_2 \) and the refractive indices \( n \) and \( n' \). As in the adjacent figure, the focal length is measured from the principal plane H'. The object-side and image-side focal lengths have the same magnitude if the lens is adjacent to media with the same refractive index \( n \) on both sides; see also the lensmaker's equation.

Thin lens
The approximation \( d \approx 0 \) is justified for \( d \ll f \). This approximation is known as the thin lens, and the principal planes of the two interfaces coincide (namely in the central plane).
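The thick-lens expression above can be checked numerically; in the limit d = 0 it must reduce to the thin-lens value (illustrative values, hypothetical helper name):

```cpp
#include <cassert>
#include <cmath>

// Image-side focal length of a lens with surrounding index n, lens
// index np (n'), surface radii r1 and r2, and center thickness d:
// 1/f' = (np-n)/n * (1/r1 - 1/r2) + (np-n)^2 * d / (np * n * r1 * r2)
double thickLensFPrime(double n, double np, double r1, double r2, double d) {
    double inv = (np - n) / n * (1.0 / r1 - 1.0 / r2)
               + (np - n) * (np - n) * d / (np * n * r1 * r2);
    return 1.0 / inv;
}
```

A symmetric biconvex lens (n = 1, n' = 1.5, r1 = 0.1 m, r2 = -0.1 m) with d = 0 gives f' = 0.1 m; for this shape a positive thickness lengthens the focal length slightly.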
The equation for the focal length simplifies to
\[ \frac{1}{f'} = \frac{1}{f} = \frac{n' - n}{n} \left( \frac{1}{r_1} - \frac{1}{r_2} \right) \]
again measured from the central plane. In geometrical optics one uses the terms front surface power
\[ D_1 = \frac{n' - n}{n \, r_1} \]
and back surface power
\[ D_2 = \frac{n - n'}{n \, r_2} \]
The above equation can thus also be written in the form
\[ D = D_1 + D_2 \]
The optical effect of eyeglass lenses is expressed by the vertex power.

System of two thin lenses
A system of two thin lenses is fundamentally similar to the system "lens made of two refractive surfaces" (cf. the adjacent figure with the one above). If both lenses are surrounded by the same medium on both sides, then:
\[ \frac{1}{f'} = \frac{1}{f'_1} + \frac{1}{f'_2} - \frac{d}{f'_1 f'_2} = \frac{1}{f_1} + \frac{1}{f_2} - \frac{d}{f_1 f_2} = \frac{1}{f} \]
In addition to the equality of the object-side and image-side focal lengths of the individual lenses, the corresponding equality \( f' = f \) also applies to the system. The dependence of the focal lengths of the two-lens system on the refractive indices and radii of curvature is obtained by inserting the thin-lens lensmaker's formulas given above for \( f_1 \) and \( f_2 \).

Thin lenses closely spaced
When the thin lenses are moved together, in the limiting case \( d \ll f_1, f_2 \) the distance \( d \) can be neglected.
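The two-lens formula above, together with its d → 0 limit, can be checked numerically (illustrative values, hypothetical helper name):

```cpp
#include <cassert>
#include <cmath>

// Combined focal length of two thin lenses at separation d:
// 1/f = 1/f1 + 1/f2 - d/(f1*f2)
double twoLensF(double f1, double f2, double d) {
    return 1.0 / (1.0 / f1 + 1.0 / f2 - d / (f1 * f2));
}
```

For f1 = 10, f2 = 20 and d = 5 (arbitrary units) this gives f = 8; with d = 0 it reduces to the closely-spaced result 1/f = 1/f1 + 1/f2.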
The focal length of such a system is then approximately
\[ \frac{1}{f'} = \frac{1}{f} = \frac{1}{f_1} + \frac{1}{f_2} \]
This equation is used, for example, for two thin lenses cemented together. Such a doublet usually consists of two different types of glass, with which smaller imaging errors are achieved than with a lens of only one type of glass and the same focal length, as in the achromatic lens.

Imaging errors directly related to the focal length
Strictly speaking, the focal length is only defined in paraxial optics. However, under certain conditions, and above all for real non-parabolic lenses, various so-called imaging errors arise, which result in a (sometimes only apparently) changed focal length. In paraxial optics it is always possible to approximate a spherical surface by a paraboloid. Real lenses are often made with spherical surfaces, as these are easier to manufacture than aspherical surfaces. They are still assigned a focal length that actually only applies to rays close to the optical axis. For off-axis rays, shifted foci result; this lens defect is called spherical aberration. Furthermore, the focal length depends, among other things, on the refractive index of the lens material, which in turn depends on the wavelength of the light. If light of different wavelengths (e.g. white light) falls on a lens, it is focused at different points depending on the wavelength. One speaks of chromatic aberration. If the shape of a lens is not rotationally symmetric with respect to the optical axis, but ellipsoidal, then it focuses fan-shaped bundles of light at different image distances depending on their orientation. Full bundles of light are not focused at one point, but in two consecutive focal lines along the directions of the two principal axes of the ellipsoid. This aberration is called axial astigmatism.
F-Test (ANOVA) The test statistic is: \[\begin{align} F=\frac{\sum^s_{j=1} \sum^{n_j}_{i = 1} w_{ij}(\bar{x}_j - \bar{x})^2 / (s-1)} {\sum^s_{j=1} \sum^{n_j}_{i=1} w_{ij}(\bar{x}_j - x_{ij})^2 / (\sum^s_{j=1} \sum^{n_j}_{i = 1} w_{ij} - s)}\end{align}\] where \(x_{ij}\) is the value of the \(i\)th of \(n_j\) observations in the \(j\)th of \(s\) groups, \(\bar{x}_j \) is the average in the \(j\)th group, \(\bar{x}\) is the overall average, and \(w_{ij}\) is the calibrated weight. \(F\) is evaluated using the F-distribution with \(s-1\) and \(\sum^s_{j=1} \sum^{n_j}_{i = 1} w_{ij} - s \) degrees of freedom. (This is Type III Sum of Squares ANOVA.)
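As a numeric sanity check, the statistic can be computed directly; with all weights equal to 1 it reduces to ordinary one-way ANOVA (a hedged sketch; the names are illustrative, not from the source):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct FResult { double F, df1, df2; };

// Weighted one-way ANOVA F statistic: weighted between-group and
// within-group sums of squares, with df1 = s - 1 and
// df2 = (sum of weights) - s for s groups.
FResult weightedAnovaF(const std::vector<std::vector<double> >& x,
                       const std::vector<std::vector<double> >& w) {
    size_t s = x.size();
    double wTot = 0, grand = 0;
    std::vector<double> wj(s, 0.0), meanj(s, 0.0);
    for (size_t j = 0; j < s; ++j) {
        for (size_t i = 0; i < x[j].size(); ++i) {
            wj[j] += w[j][i];
            meanj[j] += w[j][i] * x[j][i];
        }
        wTot += wj[j];
        grand += meanj[j];
        meanj[j] /= wj[j];
    }
    grand /= wTot;
    double ssb = 0, ssw = 0;
    for (size_t j = 0; j < s; ++j) {
        ssb += wj[j] * (meanj[j] - grand) * (meanj[j] - grand);
        for (size_t i = 0; i < x[j].size(); ++i)
            ssw += w[j][i] * (x[j][i] - meanj[j]) * (x[j][i] - meanj[j]);
    }
    double df1 = double(s) - 1.0, df2 = wTot - double(s);
    FResult r = { (ssb / df1) / (ssw / df2), df1, df2 };
    return r;
}
```

For groups {1, 2, 3} and {4, 5, 6} with unit weights this gives F = 13.5 on 1 and 4 degrees of freedom.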
This week in competitive programming Codeforces Round 295 happened early on Monday (editorial with challenges, top 5 on the left). Let me describe problem D, which I didn't solve during the round: you have an array with 10 positive integers, denoting your various skill levels in an online game. You're also given up to 10 possible improvements for your skills. Each improvement is applicable to one particular skill, and is of one of three types: set the skill level to the given value, add the given value to the skill level, or multiply the skill level by the given value. The skill that can be improved, the type of improvement, and the improvement value are fixed for each possible improvement, so the only freedom you have is which improvements you will apply, and in which order. Your goal is to achieve the maximum product of all your skill levels using at most m improvements (m is also given). This problem requires careful reduction of complexity until it becomes simple. The first step, for example, is to notice that it only makes sense to apply the "multiplication" improvements after all others, and the order of their application does not matter. I've managed to do a few more steps during the contest, but stopped short of the solution because I couldn't find a way to properly handle the "assignment" improvements. Can you see the remaining steps? Facebook Hacker Cup 2015 Final Round happened on Friday in Menlo Park (results, top 5 on the left). I've managed a very good start by getting the first submission and then skipping the tricky geometry problem (the "Fox Lochs" column in the above table). After submitting the 20-point problem with almost two hours left, I had two strategies to choose from.
I could go back to the geometry problem, implement it carefully, test it a lot, but thus likely not solve anything else; looking at the final scoreboard, that would've earned me second or third place if my solution was correct, but given that even Gennady had failed this problem, that's far from certain. Or I could try to solve one of the two harder problems, which seemed to require some thinking but looked tractable. I've decided to go after the 25-point problems, and after some thinking came up with an O(N*sqrt(N)) solution for the fourth problem ("Fox Hawks"). The problem was: you're given a boolean expression with at most 200000 boolean variables, each appearing exactly once, for example "((1 & (2 | 3)) | 4)". What's the k-th lexicographically smallest set of variable values that evaluates this expression to true? It was not clear whether O(N*sqrt(N)) would pass within the time limit, so I hoped for the best and started implementing it. The implementation was a bit tricky and required more than an hour (including writing a simple but slow solution and comparing on a lot of small testcases). I've finally downloaded the input with about 30 minutes left in the contest, and my solution turned out a bit too slow (solved 12 testcases out of 20 in the time limit) :( Had I implemented it a bit more efficiently, or had I used a more powerful computer, I might've got it, and it turns out that would've earned me the victory. Well, better luck next time! After discussing my solution with Slava "winger" Isenbaev after the contest, I've also realized that it's not hard to change it into an O(N*logN) approach. I've done that after the contest, and after about 20 minutes of coding and 5 minutes of debugging I got a working solution that solved all cases in about 20 seconds (out of 6 minutes). Can you guess what's the O(N*logN) approach knowing that it's an improvement of an O(N*sqrtN) one? Now, let's continue covering the Open Cup contests from February.
Before presenting a new round, let me give the solution ideas for the problems I posted last week. The first problem was about finding the k-th lexicographically borderless word of length n (up to 64), where a word is borderless when it has no non-trivial borders. In order to solve this problem, let's learn to count borderless words first. The first idea is to notice that if a word of length n has any borders, then it must also have a border of length at most n/2 (because if a prefix is equal to a suffix and they intersect, then we can find a shorter prefix that is equal to a suffix - see the picture on the left). Because of this, when n is odd, the number of borderless words of length n is equal to the size of the alphabet times the number of borderless words of length n-1 - we can simply put an arbitrary character in the middle of a word of length n-1. And when n is even, the number of borderless words of length n is equal to the size of the alphabet times the number of borderless words of length n-1 minus the number of borderless words of length n/2: we can also put an arbitrary character into the middle of a word of length n-1, but we need to subtract the cases where the new word has a new border of size n/2. Finding the k-th lexicographically borderless word is done in a very similar manner: instead of just counting all borderless words, we can use the same approach to count all borderless words with the given prefix. The second problem was about finding two deterministic finite automata with at most n+1 states that, when used together, accept the given word and only that word. The automaton with n+2 states that accepts only the given word is straightforward. How do we get rid of one state? Well, let's glue two adjacent states together. This will result in an automaton that accepts the given word, but also all words where the letter in a certain position can be repeated an arbitrary number of times.
If we do this once for the first letter, and a second time for the last letter, we'll obtain the two automata that we need, unless all letters in our word are equal. And in case all letters in our word are equal, it's not hard to see that there's no solution. Now, on to new tricky problems! Open Cup 2014-15 Grand Prix of Karelia happened 3 weeks ago (results, top 5 on the left). The most difficult problem F was about Conway's look-and-say sequence starting with 2: we start with the single digit 2, then repeatedly describe what we see. For example, on the first step, we see one "2", so we write "12". On the second step, we see one "1" and one "2", so we write "1112". On the third step, we see three "1"s and one "2", so we write "3112", and so on. How many digits does the n-th step contain (modulo p)? n is up to 10 Open Cup 2014-15 Grand Prix of Udmurtia happened 2 weeks ago (results, top 5 on the left). Let's talk about a relatively easy problem for a change. Problem B of this round was concerned with drawing parallelepipeds on a grid. More specifically, a parallelepiped is drawn on a grid like this: we start with a rectangle, then add three diagonal segments of the same length, and then connect their ends as well - see the picture on the left. The parallelepiped has three parameters: the two sides of the rectangle, and the size of the diagonal. All three parameters must be at least 3. A parallelepiped was drawn, but then all squares but two were erased, so you're given the coordinates of the two remaining squares, each up to a billion. What's the smallest possible total number of cells in the original drawing? This problem looks quite nasty from the outside, and it feels like it can have a lot of tricky corner cases. But it turns out it's possible to write a solution that sidesteps those and solves everything in a quite general manner. How would you approach this problem? Thanks for reading, and check back next week!
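As a quick postscript: the borderless-word counting recurrence described earlier can be sketched as follows (a hedged sketch; for large n and alphabet sizes the counts overflow 64 bits, which a real solution must handle):

```cpp
#include <cassert>
#include <vector>

// B[n] = number of borderless words of length n over a k-letter alphabet.
// Odd n:  B[n] = k * B[n-1]          (insert any character in the middle)
// Even n: B[n] = k * B[n-1] - B[n/2] (subtract words gaining a border n/2)
std::vector<unsigned long long> countBorderless(int maxN, unsigned long long k) {
    std::vector<unsigned long long> B(maxN + 1, 0);
    B[1] = k;
    for (int n = 2; n <= maxN; ++n) {
        B[n] = k * B[n - 1];
        if (n % 2 == 0) B[n] -= B[n / 2];
    }
    return B;
}
```

For a binary alphabet this produces 2, 2, 4, 6, 12, 20, 40, 74 for lengths 1 through 8.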
One billion digits of π
One billion (10^9) digits of pi (actually 1,000,000,001 digits if you count the initial "3") are in the file pi-billion.txt. The MD5 checksum is in pi-billion.md5. JA0HXV has calculated 100 billion digits of pi and posted them at the website: http://ja0hxv.calico.jp/pai/estart.html Kanada, et al. have calculated 1,241,100,000,000 digits (over 1 trillion digits of pi) in December 2002 but have not posted them: http://www.super-computing.org/pi_current.html
QRandomGenerator Class
The QRandomGenerator class allows one to obtain random values from a high-quality Random Number Generator. More...

Header: #include <QRandomGenerator>
CMake: find_package(Qt6 REQUIRED COMPONENTS Core)
       target_link_libraries(mytarget PRIVATE Qt6::Core)
qmake: QT += core
Inherited By: QRandomGenerator64

Note: All functions in this class are reentrant.
Note: These functions are also thread-safe:

Public Types

Public Functions

QRandomGenerator(quint32 seedValue = 1)
QRandomGenerator(const quint32 (&)[N] seedBuffer)
QRandomGenerator(const quint32 *seedBuffer, qsizetype len)
QRandomGenerator(std::seed_seq &sseq)
QRandomGenerator(const quint32 *begin, const quint32 *end)
QRandomGenerator(const QRandomGenerator &other)
double bounded(double highest)
quint32 bounded(quint32 highest)
quint32 bounded(quint32 lowest, quint32 highest)
int bounded(int highest)
int bounded(int lowest, int highest)
quint64 bounded(quint64 highest)
quint64 bounded(quint64 lowest, quint64 highest)
qint64 bounded(qint64 highest)
qint64 bounded(qint64 lowest, qint64 highest)
qint64 bounded(int lowest, qint64 highest)
qint64 bounded(qint64 lowest, int highest)
quint64 bounded(unsigned int lowest, quint64 highest)
quint64 bounded(quint64 lowest, unsigned int highest)
void discard(unsigned long long z)
void fillRange(UInt *buffer, qsizetype count)
void fillRange(UInt (&)[N] buffer)
quint64 generate64()
quint32 generate()
void generate(ForwardIterator begin, ForwardIterator end)
double generateDouble()
void seed(quint32 seed = 1)
void seed(std::seed_seq &seed)
QRandomGenerator::result_type operator()()

Static Public Members

QRandomGenerator * global()
QRandomGenerator::result_type max()
QRandomGenerator::result_type min()
QRandomGenerator securelySeeded()
QRandomGenerator * system()

Related Non-Members

bool operator!=(const QRandomGenerator &rng1, const QRandomGenerator &rng2)

Detailed Description

QRandomGenerator may be used to generate random values from a high-quality random number
generator. Like the C++ random engines, QRandomGenerator can be seeded with user-provided values through the constructor. When seeded, the sequence of numbers generated by this class is deterministic. That is to say, given the same seed data, QRandomGenerator will generate the same sequence of numbers. But given different seeds, the results should be considerably different. QRandomGenerator::securelySeeded() can be used to create a QRandomGenerator that is securely seeded with QRandomGenerator::system(), meaning that the sequence of numbers it generates cannot be easily predicted. Additionally, QRandomGenerator::global() returns a global instance of QRandomGenerator that Qt will ensure to be securely seeded. This object is thread-safe, may be shared for most uses, and is always seeded from QRandomGenerator::system(). QRandomGenerator::system() may be used to access the system's cryptographically-safe random generator. On Unix systems, it's equivalent to reading from /dev/urandom or the getrandom() or getentropy() system calls. The class can generate 32-bit or 64-bit quantities, or fill an array of those. The most common way of generating new values is to call the generate(), generate64() or fillRange() functions. One would use it as:

quint32 value = QRandomGenerator::global()->generate();

Additionally, it provides a floating-point function generateDouble() that returns a number in the range [0, 1) (that is, inclusive of zero and exclusive of 1). There's also a set of convenience functions that facilitate obtaining a random number in a bounded, integral range.

Seeding and determinism
QRandomGenerator may be seeded with specific seed data. When that is done, the numbers generated by the object will always be the same, as in the following example:

QRandomGenerator prng1(1234), prng2(1234);
Q_ASSERT(prng1.generate() == prng2.generate());
Q_ASSERT(prng1.generate64() == prng2.generate64());

The seed data takes the form of one or more 32-bit words.
The ideal seed size is approximately equal to the size of the QRandomGenerator class itself. Due to mixing of the seed data, QRandomGenerator cannot guarantee that distinct seeds will produce different sequences. QRandomGenerator::global(), like all generators created by QRandomGenerator::securelySeeded(), is always seeded from QRandomGenerator::system(), so it's not possible to make it produce identical sequences.

Bulk data
When operating in deterministic mode, QRandomGenerator may be used for bulk data generation. In fact, applications that do not need cryptographically-secure or true random data are advised to use a regular QRandomGenerator instead of QRandomGenerator::system() for their random data needs. For ease of use, QRandomGenerator provides a global object that can be easily used, as in the following example:

quint32 buffer[16];
QRandomGenerator::global()->fillRange(buffer);

System-wide random number generator
QRandomGenerator::system() may be used to access the system-wide random number generator, which is cryptographically-safe on all systems that Qt runs on. This function will use hardware facilities to generate random numbers where available. On such systems, those facilities are true Random Number Generators. However, if they are true RNGs, those facilities have finite entropy sources and thus may fail to produce any results if their entropy pool is exhausted. If that happens, first the operating system then QRandomGenerator will fall back to Pseudo Random Number Generators of decreasing qualities (Qt's fallback generator being the simplest). Whether those generators are still of cryptographic quality is implementation-defined. Therefore, QRandomGenerator::system() should not be used for high-frequency random number generation, lest the entropy pool become empty. As a rule of thumb, this class should not be called upon to generate more than a kilobyte per second of random data (note: this may vary from system to system).
If an application needs true RNG data in bulk, it should use the operating system facilities (such as /dev/random on Linux) directly and wait for entropy to become available. If the application requires PRNG engines of cryptographic quality but not of true randomness, QRandomGenerator::system() may still be used (see section below). If neither a true RNG nor a cryptographically secure PRNG are required, applications should instead use PRNG engines like QRandomGenerator's deterministic mode and those from the C++ Standard Library. QRandomGenerator::system() can be used to seed those.

Fallback quality
QRandomGenerator::system() uses the operating system facilities to obtain random numbers, which attempt to collect real entropy from the surrounding environment to produce true random numbers. However, it's possible that the entropy pool becomes exhausted, in which case the operating system will fall back to a pseudo-random engine for a time. Under no circumstances will QRandomGenerator::system() block, waiting for more entropy to be collected. The following operating systems guarantee that the results from their random-generation API will be of at least cryptographically-safe quality, even if the entropy pool is exhausted: Apple OSes (Darwin), BSDs, Linux, Windows. Barring a system installation problem (such as /dev/urandom not being readable by the current process), QRandomGenerator::system() will therefore have the same quality.

On other operating systems, QRandomGenerator will fall back to a PRNG of good numeric distribution, but it cannot guarantee proper seeding in all cases. Please consult the OS documentation for more information.

Applications that require QRandomGenerator not to fall back to non-cryptographic quality generators are advised to check their operating system documentation or restrict their deployment to one of the above.
Reentrancy and thread-safety
QRandomGenerator is reentrant, meaning that multiple threads can operate on this class at the same time, so long as they operate on different objects. If multiple threads need to share one PRNG sequence, external locking by a mutex is required. The exceptions are the objects returned by QRandomGenerator::global() and QRandomGenerator::system(): those objects are thread-safe and may be used by any thread without external locking. Note that thread-safety does not extend to copying those objects: they should always be used by reference.

Standard C++ Library compatibility
QRandomGenerator is modeled after the requirements for random number engines in the C++ Standard Library and may be used in almost all contexts that the Standard Library engines can. Exceptions to the requirements are the following:
• QRandomGenerator does not support seeding from another seed sequence-like class besides std::seed_seq itself;
• QRandomGenerator is not comparable (but is copyable) or streamable to std::ostream or from std::istream.
QRandomGenerator is also compatible with the uniform distribution classes std::uniform_int_distribution and std::uniform_real_distribution, as well as the free function std::generate_canonical. For example, the following code may be used to generate a floating-point number in the range [1, 2.5):

std::uniform_real_distribution dist(1, 2.5);
return dist(*QRandomGenerator::global());

See also QRandomGenerator64.

Member Type Documentation

QRandomGenerator::result_type
A typedef to the type that operator() returns. That is, quint32.
See also operator().

Member Function Documentation

qint64 QRandomGenerator::bounded(int lowest, qint64 highest)
qint64 QRandomGenerator::bounded(qint64 lowest, int highest)
quint64 QRandomGenerator::bounded(quint64 lowest, unsigned int highest)
quint64 QRandomGenerator::bounded(unsigned int lowest, quint64 highest)
This is an overloaded function.
This function exists to help with overload resolution when the types of the parameters don't exactly match. It promotes the smaller type to the type of the larger one and calls the correct overload.

QRandomGenerator::QRandomGenerator(quint32 seedValue = 1)

Initializes this QRandomGenerator object with the value seedValue as the seed. Two objects constructed or reseeded with the same seed value will produce the same number sequence.

See also seed() and securelySeeded().

template <qsizetype N> QRandomGenerator::QRandomGenerator(const quint32 (&seedBuffer)[N])

This is an overloaded function. Initializes this QRandomGenerator object with the values found in the array seedBuffer as the seed. Two objects constructed or reseeded with the same seed value will produce the same number sequence.

See also seed() and securelySeeded().

QRandomGenerator::QRandomGenerator(const quint32 *seedBuffer, qsizetype len)

This is an overloaded function. Initializes this QRandomGenerator object with len values found in the array seedBuffer as the seed. Two objects constructed or reseeded with the same seed value will produce the same number sequence.

This constructor is equivalent to:

std::seed_seq sseq(seedBuffer, seedBuffer + len);
QRandomGenerator generator(sseq);

See also seed() and securelySeeded().

[noexcept] QRandomGenerator::QRandomGenerator(std::seed_seq &sseq)

This is an overloaded function. Initializes this QRandomGenerator object with the seed sequence sseq as the seed. Two objects constructed or reseeded with the same seed value will produce the same number sequence.

See also seed() and securelySeeded().

QRandomGenerator::QRandomGenerator(const quint32 *begin, const quint32 *end)

This is an overloaded function. Initializes this QRandomGenerator object with the values found in the range from begin to end as the seed. Two objects constructed or reseeded with the same seed value will produce the same number sequence.

This constructor is equivalent to:

std::seed_seq sseq(begin, end);
QRandomGenerator generator(sseq);

See also seed() and securelySeeded().
QRandomGenerator::QRandomGenerator(const QRandomGenerator &other)

Creates a copy of the generator state in the other object. If other is QRandomGenerator::system() or a copy of that, this object will also read from the operating system random-generating facilities. In that case, the sequences generated by the two objects will be different.

In all other cases, the new QRandomGenerator object will start at the same position in the deterministic sequence as the other object. Both objects will generate the same sequence from this point on. For that reason, it is not advisable to create a copy of QRandomGenerator::global(). If one needs an exclusive deterministic generator, consider instead using securelySeeded() to obtain a new object that shares no relationship with QRandomGenerator::global().

double QRandomGenerator::bounded(double highest)

Generates one random double in the range between 0 (inclusive) and highest (exclusive). This function is equivalent to and is implemented as:

return generateDouble() * highest;

If the highest parameter is negative, the result will be negative too; if it is infinite or NaN, the result will be infinite or NaN too (that is, not random).

See also generateDouble() and bounded().

quint32 QRandomGenerator::bounded(quint32 highest)

This is an overloaded function. Generates one random 32-bit quantity in the range between 0 (inclusive) and highest (exclusive). The same result may also be obtained by using std::uniform_int_distribution with parameters 0 and highest - 1. That class can also be used to obtain quantities larger than 32 bits; for 64 bits, the 64-bit bounded() overload can be used too.

For example, to obtain a value between 0 and 255 (inclusive), one would write:

quint32 v = QRandomGenerator::global()->bounded(256u);

Naturally, the same could also be obtained by masking the result of generate() to only the lower 8 bits. Either solution is as efficient.

Note that this function cannot be used to obtain values in the full 32-bit range of quint32. Instead, use generate().
See also generate(), generate64(), and generateDouble().

quint32 QRandomGenerator::bounded(quint32 lowest, quint32 highest)

This is an overloaded function. Generates one random 32-bit quantity in the range between lowest (inclusive) and highest (exclusive). The highest parameter must be greater than lowest. The same result may also be obtained by using std::uniform_int_distribution with parameters lowest and highest - 1. That class can also be used to obtain quantities larger than 32 bits.

For example, to obtain a value between 1000 (inclusive) and 2000 (exclusive), one would write:

quint32 v = QRandomGenerator::global()->bounded(1000u, 2000u);

Note that this function cannot be used to obtain values in the full 32-bit range of quint32. Instead, use generate().

See also generate(), generate64(), and generateDouble().

int QRandomGenerator::bounded(int highest)

This is an overloaded function. Generates one random 32-bit quantity in the range between 0 (inclusive) and highest (exclusive). highest must be positive. Note that this function cannot be used to obtain values in the full 32-bit range of int. Instead, use generate() and cast to int.

See also generate(), generate64(), and generateDouble().

int QRandomGenerator::bounded(int lowest, int highest)

This is an overloaded function. Generates one random 32-bit quantity in the range between lowest (inclusive) and highest (exclusive), both of which may be negative, but highest must be greater than lowest. Note that this function cannot be used to obtain values in the full 32-bit range of int. Instead, use generate() and cast to int.

See also generate(), generate64(), and generateDouble().

quint64 QRandomGenerator::bounded(quint64 highest)

This is an overloaded function. Generates one random 64-bit quantity in the range between 0 (inclusive) and highest (exclusive). The same result may also be obtained by using std::uniform_int_distribution<quint64> with parameters 0 and highest - 1. Note that this function cannot be used to obtain values in the full 64-bit range of quint64.
Instead, use generate64().

Note: This function is implemented as a loop, which depends on the random value obtained. In the long run it should loop just under 2 times on average, but if the random generator is defective, this function may take considerably longer to execute.

See also generate(), generate64(), and generateDouble().

quint64 QRandomGenerator::bounded(quint64 lowest, quint64 highest)

This is an overloaded function. Generates one random 64-bit quantity in the range between lowest (inclusive) and highest (exclusive). The highest parameter must be greater than lowest. The same result may also be obtained by using std::uniform_int_distribution<quint64> with parameters lowest and highest - 1. Note that this function cannot be used to obtain values in the full 64-bit range of quint64. Instead, use generate64().

Note: This function is implemented as a loop, which depends on the random value obtained. In the long run it should loop just under 2 times on average, but if the random generator is defective, this function may take considerably longer to execute.

See also generate(), generate64(), and generateDouble().

qint64 QRandomGenerator::bounded(qint64 highest)

This is an overloaded function. Generates one random 64-bit quantity in the range between 0 (inclusive) and highest (exclusive). highest must be positive. Note that this function cannot be used to obtain values in the full 64-bit range of qint64. Instead, use generate64() and cast to qint64, or use the unsigned version of this function.

Note: This function is implemented as a loop, which depends on the random value obtained. In the long run it should loop just under 2 times on average, but if the random generator is defective, this function may take considerably longer to execute.

See also generate(), generate64(), and generateDouble().

qint64 QRandomGenerator::bounded(qint64 lowest, qint64 highest)

This is an overloaded function.
Generates one random 64-bit quantity in the range between lowest (inclusive) and highest (exclusive), both of which may be negative, but highest must be greater than lowest. Note that this function cannot be used to obtain values in the full 64-bit range of qint64. Instead, use generate64() and cast to qint64.

Note: This function is implemented as a loop, which depends on the random value obtained. In the long run it should loop just under 2 times on average, but if the random generator is defective, this function may take considerably longer to execute.

See also generate(), generate64(), and generateDouble().

void QRandomGenerator::discard(unsigned long long z)

Discards the next z entries from the sequence. This method is equivalent to calling generate() z times and discarding the result, as in:

while (z--)
    generate();

template <typename UInt, QRandomGenerator::IfValidUInt<UInt> = true> void QRandomGenerator::fillRange(UInt *buffer, qsizetype count)

Generates count 32- or 64-bit quantities (depending on the type UInt) and stores them in the buffer pointed to by buffer. This is the most efficient way to obtain more than one quantity at a time, as it reduces the number of calls into the Random Number Generator source.

For example, to fill a list of 16 entries with random values, one may write:

QList<quint32> list;
list.resize(16);
QRandomGenerator::global()->fillRange(list.data(), list.size());

See also generate().

template <typename UInt, size_t N, QRandomGenerator::IfValidUInt<UInt> = true> void QRandomGenerator::fillRange(UInt (&buffer)[N])

Generates N 32- or 64-bit quantities (depending on the type UInt) and stores them in the buffer array. This is the most efficient way to obtain more than one quantity at a time, as it reduces the number of calls into the Random Number Generator source.

For example, to generate two 32-bit quantities, one may write:

quint32 array[2];
QRandomGenerator::global()->fillRange(array);

It would have also been possible to make one call to generate64() and then split the two halves of the 64-bit value.

See also generate().
quint64 QRandomGenerator::generate64() Generates a 64-bit random quantity and returns it. See also operator()() and generate(). quint32 QRandomGenerator::generate() Generates a 32-bit random quantity and returns it. See also operator()() and generate64(). template <typename ForwardIterator> void QRandomGenerator::generate(ForwardIterator begin, ForwardIterator end) Generates 32-bit quantities and stores them in the range between begin and end. This function is equivalent to (and is implemented as): std::generate(begin, end, [this]() { return generate(); }); This function complies with the requirements for the function std::seed_seq::generate, which requires unsigned 32-bit integer values. Note that if the [begin, end) range refers to an area that can store more than 32 bits per element, the elements will still be initialized with only 32 bits of data. Any other bits will be zero. To fill the range with 64 bit quantities, one can write: std::generate(begin, end, []() { return QRandomGenerator::global()->generate64(); }); If the range refers to contiguous memory (such as an array or the data from a QList), the fillRange() function may be used too. See also fillRange(). double QRandomGenerator::generateDouble() Generates one random qreal in the canonical range [0, 1) (that is, inclusive of zero and exclusive of 1). This function is equivalent to: QRandomGenerator64 rd; return std::generate_canonical<qreal, std::numeric_limits<qreal>::digits>(rd); The same may also be obtained by using std::uniform_real_distribution with parameters 0 and 1. See also generate(), generate64(), and bounded(). [static] QRandomGenerator *QRandomGenerator::global() Returns a pointer to a shared QRandomGenerator that was seeded using securelySeeded(). This function should be used to create random data without the expensive creation of a securely-seeded QRandomGenerator for a specific use or storing the rather large QRandomGenerator object. 
For example, the following creates a random RGB color:

QColor color = QColor::fromRgb(QRandomGenerator::global()->generate());

Accesses to this object are thread-safe and it may therefore be used in any thread without locks. The object may also be copied and the sequence produced by the copy will be the same as the shared object will produce. Note, however, that if there are other threads accessing the global object, those threads may obtain samples at unpredictable intervals.

Note: This function is thread-safe.

See also securelySeeded() and system().

[static constexpr] quint32 QRandomGenerator::max()

Returns the maximum value that QRandomGenerator may ever generate. That is, std::numeric_limits<result_type>::max().

See also min() and QRandomGenerator64::max().

[static constexpr] quint32 QRandomGenerator::min()

Returns the minimum value that QRandomGenerator may ever generate. That is, 0.

See also max() and QRandomGenerator64::min().

[static] QRandomGenerator QRandomGenerator::securelySeeded()

Returns a new QRandomGenerator object that was securely seeded with QRandomGenerator::system(). This function will obtain the ideal seed size for the algorithm that QRandomGenerator uses and is therefore the recommended way for creating a new QRandomGenerator object that will be kept for some time.

Given the amount of data required to securely seed the deterministic engine, this function is somewhat expensive and should not be used for short-term uses of QRandomGenerator (using it to generate fewer than 2600 bytes of random data is effectively a waste of resources). If the use doesn't require that much data, consider using QRandomGenerator::global() and not storing a QRandomGenerator object instead.

See also global() and system().

void QRandomGenerator::seed(quint32 seed = 1)

Reseeds this object using the value seed as the seed.

[noexcept] void QRandomGenerator::seed(std::seed_seq &seed)

This is an overloaded function. Reseeds this object using the seed sequence seed as the seed.
[static] QRandomGenerator *QRandomGenerator::system()

Returns a pointer to a shared QRandomGenerator that always uses the facilities provided by the operating system to generate random numbers. The system facilities are considered to be cryptographically safe on at least the following operating systems: Apple OSes (Darwin), BSDs, Linux, Windows. That may also be the case on other operating systems. They are also possibly backed by a true hardware random number generator. For that reason, the QRandomGenerator returned by this function should not be used for bulk data generation. Instead, use it to seed QRandomGenerator or a random engine from the <random> header.

The object returned by this function is thread-safe and may be used in any thread without locks. It may also be copied and the resulting QRandomGenerator will also access the operating system facilities, but they will not generate the same sequence.

Note: This function is thread-safe.

See also securelySeeded() and global().

quint32 QRandomGenerator::operator()()

Generates a 32-bit random quantity and returns it.

See also generate() and generate64().

Related Non-Members

bool operator!=(const QRandomGenerator &rng1, const QRandomGenerator &rng2)

Returns true if the two engines rng1 and rng2 are at different states or if one of them is reading from the operating system facilities and the other is not, false otherwise.

© 2024 The Qt Company Ltd. Documentation contributions included herein are the copyrights of their respective owners. The documentation provided herein is licensed under the terms of the GNU Free Documentation License version 1.3 as published by the Free Software Foundation. Qt and respective logos are trademarks of The Qt Company Ltd. in Finland and/or other countries worldwide. All other trademarks are property of their respective owners.
BLS signature scheme

From a high-level perspective, this is how the BLS (Boneh–Lynn–Shacham) signature scheme works:

import sys
sys.path.append("/home/icostan/Repos/pairings.py")
import bn256

# helper function
def h(msg: bytes) -> object:
    return bn256.g1_hash_to_point(msg)

# public parameters
m = b"Hello BLS!"
G2 = bn256.twist_G

# Alice
k = bn256.rand_elem()           # private key (orange)
P = G2.scalar_mul(k)            # public key generation (red)
H = h(m)                        # hash to curve point (magenta)
S = H.scalar_mul(k)             # signing (blue)
# Alice sends public key 'P' and signature 'S' to Bob

# Bob
H = h(m)                        # hash to curve point (magenta)
v1 = bn256.optimal_ate(G2, S)   # signature pairing
v2 = bn256.optimal_ate(P, H)    # message pairing
"YOU ARE A CRYPTOSTAR!" if v1 == v2 else "YOU SUCK!"  # verification (green)

'YOU ARE A CRYPTOSTAR!'

A bit of naming conventions first, because shapes, colors, and names have meaning:

• capital letter: point on elliptic curve
• lower case: integer or bytes array
• *: elliptic curve (scalar) multiplication
• PRG: Pseudo-Random Generator
• ellipse: elliptic curve operation, points on curves
• rectangle: number generator
• diamond: large integer (scalar)
• double octagon: pairings
• triangle: bytes array
• orange: private key
• magenta: hash the message to an elliptic curve point
• red: public key generation
• blue: signing
• green: verification

0. Library

Import the BN256 elliptic curve pairing library.

import sys
sys.path.append("/home/icostan/Repos/pairings.py")
import bn256

<module 'bn256' from '/home/icostan/Repos/pairings.py/bn256.py'>

1. Public parameters

Besides the message m and generator G2 shown below, we also know:

• elliptic curves EC2 and EC1 with generator G1
• the target extension field TF

but they are hidden inside the Barreto-Naehrig 256-bit (BN256) curve implementation.

# helper function
def h(msg: bytes) -> object:
    return bn256.g1_hash_to_point(msg)

# public parameters
m = b"Hello BLS!"
G2 = bn256.twist_G

<class 'bn256.curve_twist'>

2. Key generation

This is as simple as randomly generating a private key k, then multiplying the generator G2 by it to obtain the public key, which is another point on EC2.

# Alice
k = bn256.rand_elem()  # private key (orange)
P = G2.scalar_mul(k)   # public key generation (red)

<class 'bn256.curve_twist'>

3. Signing

To sign the message m, Alice hashes the message to the elliptic curve to get a point on EC1, then multiplies that point by her private key k to obtain the signature S, which is also a point on EC1.

H = h(m)              # hash to curve point (magenta)
S = H.scalar_mul(k)   # signing (blue)
# Alice sends public key 'P' and signature 'S' to Bob

<class 'bn256.curve_point'>

4. Verification

Bob receives the public key P and the signature S from Alice and hashes the message to the curve to compute the same point H on EC1. Now comes the beautiful, yet powerful, part of pairing-based cryptography: Bob pairs the generator G2 with the signature S to compute one side of the equation, then pairs the public key P with the point H to compute the other side. If the two values are equal, the signature is valid.

# Bob
H = h(m)                        # hash to curve point (magenta)
v1 = bn256.optimal_ate(G2, S)   # signature pairing
v2 = bn256.optimal_ate(P, H)    # message pairing
"YOU ARE A CRYPTOSTAR!" if v1 == v2 else "YOU SUCK!"  # verification (green)

<class 'bn256.gfp_12'>
'YOU ARE A CRYPTOSTAR!'

Pairing intuition

The math behind pairings is quite complicated and, to be honest, I do not fully understand it (yet), but we can at least build a simplified visual intuition using two elliptic curves over the rational numbers and a finite field. You can also check the references below; lots of good resources to learn from.

Happy pairing!
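As a closing note, the verification in step 4 works because the pairing e is bilinear. Using S = k·H and P = k·G2:

\[
e(G_2, S) = e(G_2, k \cdot H) = e(G_2, H)^{k} = e(k \cdot G_2, H) = e(P, H)
\]

so the two pairings agree exactly when the signature was produced with the private key k that matches the public key P.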
16.4 The Weighted Average Cost of Capital (WACC) - Corporate Finance

In a perfect market, the value of a firm has nothing to do with the manner in which it is financed. This is one of the key points of this chapter. In other words, the cost of capital is independent of the debt ratio. This was already illustrated in detail in Section 6.4. The difference here is that the expected rate of return is proportional to risk, so the present-value tools already developed suffice for this chapter. WACC stands for Weighted Average Cost of Capital. The examples that follow should clarify any remaining doubts on the topic; the general formula appears below.

16.4 A An Example in a Risk-Averse World in Which Riskier Securities Must Offer Higher Expected Rates of Return

In a risk-averse world, the rules taught in Chapter 6 still apply. The payoff table for a building from Chapter 6 applies to companies and firms as well. Under risk aversion, claims with more risk must offer higher expected rates of return, and vice versa. As Chapter 15 showed, the payoffs to equity and debt are contingent on the firm's projects. Now take the example of a corporation whose payoff structure is like that of the building: it is subject to both favorable and unfavorable conditions, and rates are set accordingly. The major parameters are given below:

• The ratio of the probabilities of the sun shining and the tornado striking is 4 : 1 (i.e., 80% and 20%).
• The corresponding values of the corporation are $100,000 and $20,000, respectively.
• In such a scenario, the firm's overall cost of capital is 10%.

As calculated in the previous chapters, the worth of the building in this scenario is $76,363.64. The difference from the earlier treatment is that the discount rates now reflect risk aversion. Suppose a bond on this building has a promised payment of $28,125 and investors demand an expected rate of return of six percent on it, higher than they would demand on a less risky claim. The inputs for this example are given below. The primary task is to determine the state-dependent payoffs. The cost of capital of the levered equity is then calculated by the following steps:

1. First, calculate the amount received by an owner who owns the entire building, i.e. the 100% equity scenario. In addition, compute the state-contingent payoffs to the bondholders for a promised value of $28,125.
2. Determine the expected value of each individual claim, taking into consideration the probabilities of both the favorable and the adverse conditions. Bond value plus stock value must equal the value of the building.
3. Apply the corresponding discount rates.
4. The M&M assumptions are in force: taxes, transaction costs, etc. are not applicable to any transaction. In the case of 100% equity financing, the same equations hold.

Following these steps:

5. The expected rate of return is 11.95% for the levered equity ownership: the expected payoff is $57,500 for a selling price of $51,363.64.
Given the state-contingent worth of each claim and its price, the rates of return can be computed. At the outset, only two costs of capital were known: 0.1 for the firm and 0.06 for its debt. From these inputs, the cost of capital of the firm's levered equity was calculated to be 0.1195. (Table 6.3 illustrated the analogous calculation in a risk-neutral environment.) In this example, the state-by-state returns on the levered equity are +0.3993 or -1; on unlevered equity they would be +0.3095 or -0.7381. The levered equity is thus considerably riskier than a corporate loan with returns of either -0.2 or +0.125, and riskier than unlevered equity. Comparing levered and unlevered equity ownership, the cost of capital of the former is higher than that of the latter. The exact size of this change in the expected rate of return can be computed; see the CAPM model in Section 16.4 D.

16.4 B The WACC Formula (Without Taxes)

Let us now consider a quantity that is independent of how the firm is split between equity and debt: the WACC, or weighted average cost of capital. The example used earlier leads directly into it. The firm's cost of capital is the weighted average of the costs of capital of its claims. This is because the firm's value depends on the assets the firm contains, not on how that value is distributed between equity and debt. From a theoretical point of view, the value of a firm must equal the sum of the values of its stocks and bonds. It is precisely this interdependence among the values of equity, debt, and the whole firm that keeps the WACC constant. Given any two of them, the third can be computed quite easily.
Let us restate the previous example in WACC terms, covering both the sunshine and the tornado states. The expected rate of return relates the payoff at time 1 to the present value at time 0. To keep the equation simple, the time subscripts can be dropped, since only two dates are involved. To convert it into percentage terms, divide through by the firm's current value: the current value of equity divided by the current firm value gives the equity weight, and the current value of debt divided by the current firm value gives the debt weight. Multiplying each claim's expected rate of return by its percentage weight, and noting that the firm is owned entirely by equity and debt so that the weights sum to 1 (which cancels the 1's on both sides), yields the WACC equation:

WACC = (Equity Value / Firm Value) · E(r_Equity) + (Debt Value / Firm Value) · E(r_Debt)

An expectation operator in front of the rates is implied, because the cost of capital is itself an expected rate of return. As for the effect of taxes and other market distortions on this equation, there is no need to worry: these issues are discussed in detail in the chapters that follow.

16.4 C How the Cost of Capital and Quoted Interest Rates Vary With Leverage

By now, students should be well acquainted with the steps needed to calculate costs of capital. We now discuss the topic in greater detail, focusing on the impact of changes in the capital structure on each individual security, using the earlier example. The question that ordinarily arises at this point is: how does a change in the amount borrowed affect the expected and promised rates of return and the debt-equity ratio?
A few aspects of this section should be made very clear. Consider the example again: the expected rate of return on the debt promising $28,125 is about six percent, which follows once the risk-free rate is set to 0.0555. The expected cost of debt capital rises above this risk-free rate as soon as the promised amount exceeds twenty thousand dollars (the firm's value in the tornado state), and it increases toward 0.1 as the promised amount grows so large that the debt effectively owns the entire firm. The formula reduces accordingly, and is applicable only when the calculated rate exceeds 0.0555. It can be used to recompute the example for various capital structures and degrees of risk aversion.

The effect of 100% equity can also be computed: the cost of capital is simply 0.1. A few more cases can be solved in the same manner: if the promised amount is about ten thousand dollars, the debt remains risk-free; for a risky debt that promises fifty thousand dollars, the expected cost of debt capital is 0.0725. The table can be altered accordingly, and once all the examples have been computed, the results can be verified.

Is the firm's cost of capital directly proportional to the amount of debt taken on? Professionals in the field often argue along the following lines:

• With greater debt, the risk involved in holding the equity increases, which in turn raises the cost of equity capital.
• With greater debt, the debt itself becomes riskier, and in turn its cost of capital rises.
• A firm is composed of two components, equity and debt; as a result, if more money is borrowed, the firm's overall cost of capital rises.

Of the three statements above, all are correct except the last one.
The reasons were discussed in detail in the previous chapter. The risk of holding the equity does increase as more money is borrowed. At the same time, with more borrowing, the weight of debt in the capital structure increases while the weight of equity decreases; and because the cost of capital on debt is lower than that on equity, the value-weighted sum of the two costs remains constant. The first two statements are therefore absolutely correct, but not the third: the costs of capital of both the equity and the debt are higher, yet the overall cost of capital remains unaltered. This is what M&M proposed: the firm's cost of capital does not depend on its mix of equity and debt.

The Summary Graphs

The expected rates of return are plotted along the y-axis and the promised value along the x-axis. Observed carefully, the graph comprises three distinct regions. The figure shows the debt ratios for a large number of cases. The three domains are discussed below:

• The first is the risk-free debt domain, where the promised payment to creditors is at most twenty thousand dollars. Since the debt is risk-free, the creditor earns 0.0555. This applies to all loans whose value today is less than $18,948. The graph also shows that the expected rate of return on equity varies across this region; the ten percent return applies only when the amount of debt is zero.
• The second region is one in which both the equity and the debt are risky. The debt is risky whenever its value today exceeds $18,948.37 (equivalently, whenever the promised payment exceeds $20,000). In this region both the bondholders and the shareholders expect a greater rate of return, yet the value-weighted sum of the costs of capital is still 0.1: the higher expected return on equity is offset by equity's shrinking weight in the capital structure.
• The third region depicts a firm owned entirely by debt, i.e. one hundred percent debt. If the firm promises one hundred thousand dollars, the equity owners receive nothing at all, and the cost of debt capital is the full 0.1. A promise of more than one hundred thousand dollars would simply be ignored, since the firm can never be worth more than that.

Comparing Figure 16.1 with real-world values, the graphs will not match in every respect, because the example considered here is not realistic: the firm has only two possible outcomes, i.e. it is binomial. If the payoffs of the firm are instead smoothly distributed, the picture becomes less abnormal; for such an example, refer to the graph in Figure 16.2, drawn for an expected payoff of eighty-four thousand dollars with a standard deviation of thirty-two thousand dollars. Though quite similar to Figure 16.1, in that case the number of possibilities is not restricted to two. The process is the same as before; only the number of cases has increased. Since the possibility of the firm's value being zero now exists, no bond can be issued that is entirely free of risk.
Risk is therefore necessarily involved in all of the firm's bonds, so the risk-free region of the graph in figure 6.1 disappears entirely. There can, however, be regions where the risk involved is negligible, for instance when the debt ratio is about 2:5, which is extremely common today. In this way the risk-free world can be related to the real world, which was the sole objective of discussing the risk-free world in the first place. Individuals here are still assumed to operate in a perfect market, so irrespective of the capital structure, the cost of capital is still 0.1.

Can the equity's cost of capital be lower than the rate that the firm is paying to its creditors?

Because of the possibility of default, the promised rates of return are subject to rise. By now it should be clear that the cost of capital for a debt is always lower than that for an equity. The question that arises is whether the equity's cost of capital can be lower than the quoted rate of interest. This is a point where owners must be extremely careful to work out the rates that investors actually expect. In figure 16.2 it is clearly visible that the bonds' promised rate of return rises steeply: as the risk of a firm increases, the promised rate of return must also increase, and that is not going to change; something has to compensate for the added risk. A common mistake investors make is to focus entirely on the promised rates of return rather than on the rates they can expect to receive. Under such circumstances, an investor may incur heavy losses.
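The leverage-invariance claim above (the overall cost of capital stays at 0.1 regardless of the capital structure) can be sketched numerically. This is a minimal illustration, assuming M&M Proposition II with risk-free debt at the 0.0555 rate quoted earlier; the debt levels chosen are arbitrary.

```python
# Sketch of the M&M result described above: as leverage changes, the
# expected return on equity rises, but the weighted average cost of
# capital stays constant. Numbers are illustrative, not from a dataset.

r_firm = 0.10     # overall cost of capital (10%, as in the text)
r_debt = 0.0555   # cost of (near) risk-free debt, the rate quoted above
firm_value = 100_000

def cost_of_capital(debt: float) -> float:
    equity = firm_value - debt
    # M&M Proposition II: r_E = r + (r - r_D) * D / E
    r_equity = r_firm + (r_firm - r_debt) * debt / equity
    return (debt / firm_value) * r_debt + (equity / firm_value) * r_equity

for debt in (0, 18_948, 40_000):
    print(f"debt={debt:>6}  overall cost of capital={cost_of_capital(debt):.4f}")
```

Each line prints 0.1000: the rising cost of equity is exactly offset by the growing weight on the cheaper debt.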
The CAPM cost of capital in figure 16.1 is about 0.1195 for the equity, which is clearly much less than the quoted interest rate of 0.125. These are critical aspects of finance that in no way can be ignored. In brief, in real-world scenarios the quoted rate of interest is usually greater than the cost of capital on equity.

16.4 D THE CAPM, WACC, AND NPV—A SEAMLESS FIT

What we have studied so far does not yet explain how the three concepts in the heading fit together without a hassle; this section explains how. A major conclusion is that the 0.06 and 0.1195 on the debt and the equity, respectively, are calculated taking into account what investors demand. You should therefore expect the theoretical values to match as well; if the theoretical values differed greatly, you would certainly be in trouble. Why do demand and supply coincide here? They do so precisely when the CAPM model holds, and, provided all the calculations are carried out correctly, the values do in fact coincide.

The WACC, the NPV, and the CAPM can be combined, and the three perform quite well together. The CAPM is used to compute the expected rates of return on equity as well as on debt; the weighted sum of these values in turn gives the WACC, which is the denominator in the NPV formula. They go hand in hand.

Let 0.1 be the market beta for a debt carrying low risk, with the debt worth four hundred dollars at present.
The equity, with a beta of two and a half, is worth two hundred and fifty dollars. The rate of return in a risk-free environment is 0.04 and the equity premium is 0.03. Let us first compute the expected (and appropriate) rates of return for both the equity and the debt, using the CAPM. Next, the claim weights are calculated in accordance with the capital structure, and the cost of capital follows. If you are not comfortable with this method, an alternative method yields exactly the same cost of capital as the standard one. For a project of the firm's type, this cost of capital can be used directly; it can also be used whenever the owner or manager wants to discount cash flows. For instance, consider a project that brings revenue of about eight hundred dollars with probability 0.48 and six hundred dollars with probability 0.52. The expected cash flow is then discounted to obtain the desired value.

A question that often arises is whether the WACC is the weighted average of the quoted interest rate and the expected rate of return on equity. The answer is definitely NO. The rate of interest quoted by the bank is the promised rate of return on the debt; the expected rate of return on debt used in the WACC is lower than this quoted rate, the difference being the default premium. The question remains: how can the expected rate of return on a debt be computed?
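The CAPM-to-WACC-to-NPV chain sketched in the example above can be worked through directly. This follows the figures given in the text (debt beta 0.1 worth $400, equity beta 2.5 worth $250, risk-free rate 4%, equity premium 3%, and the $800/$600 project); the formulas are standard CAPM and weighted-average arithmetic.

```python
# Sketch of the CAPM -> WACC -> NPV chain from the example above.

r_f, premium = 0.04, 0.03
beta_debt, beta_equity = 0.1, 2.5
debt, equity = 400.0, 250.0

# CAPM expected returns: E[r] = r_f + beta * equity premium
r_debt = r_f + beta_debt * premium      # 0.043
r_equity = r_f + beta_equity * premium  # 0.115

# Capital-structure weights and the WACC
total = debt + equity
wacc = (debt / total) * r_debt + (equity / total) * r_equity

# Project paying $800 with probability 0.48 and $600 with probability 0.52
expected_cash_flow = 0.48 * 800 + 0.52 * 600   # 696.0
present_value = expected_cash_flow / (1 + wacc)

print(f"wacc={wacc:.4f}  E[CF]={expected_cash_flow}  PV={present_value:.2f}")
```

The WACC comes out to roughly 7.07%, and discounting the expected cash flow of $696 gives a present value of about $650, in line with the $400 + $250 value of the claims.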
The process is the same as for an equity or any other financial claim: the CAPM framework is often used to cope with the complexity. The task is not always easy; several aspects need to be considered, such as the risk involved and the default premium. In our scenario the expected rate of return works out close to the rate in a risk-free environment. In a perfect market it can be shown that the WACC, the CAPM, and the debt-ratio-adjusted value are compatible with one another. The WACC equation obtained in this chapter is the exact equation we obtained in 9.2, one of the few equations that connects the various betas; the three equations share the same base. Claims and firms with higher beta values are considered riskier, so their expected rates of return must be higher than usual.

16.4 E THE EFFECT OF DEBT ON EARNINGS PER SHARE AND PRICE/EARNINGS RATIOS

The term EPS stands for Earnings per Share. What would be the impact of debt on the EPS? The question, as posed, misses the point: EPS depends on the number of shares rather than on any other characteristic of the firm, and there are scenarios in which the number of shares varies while the capital structure remains the same. If the EPS is about seven dollars per share, a firm with $7,000,000 of equity value might have one hundred thousand shares (seventy dollars per share). This was discussed in detail in chapter 14.3 D, but no explanation was given there; the M&M propositions were not yet known, yet they were being followed. The price-to-earnings (P/E) ratio may vary as well.
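The EPS and P/E arithmetic above can be sketched with the figures quoted in the text; treat the numbers as illustrative only.

```python
# Sketch of the EPS / P/E arithmetic above, using the figures quoted
# in the text ($7,000,000 of equity value, 100,000 shares, EPS of $7).

equity_value = 7_000_000
shares = 100_000
eps = 7.0

price_per_share = equity_value / shares  # 70.0 dollars per share
earnings = eps * shares                  # implied total earnings: 700,000
pe_ratio = price_per_share / eps         # 10.0

print(price_per_share, earnings, pe_ratio)
```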
IMC (International Mathematical Contest)

Composition and Scope of the Test

4th Grade
• 1. Composition of the test:
 - Problems 1~20: 20 multiple choice questions (5 choices)
 - Problems 21~25: The answer consists only of integers from 0 to 9999 (0 and 9999 inclusive)
• 2. Eligibility requirements:
 - Any students in the 6th grade or lower can apply.
• 3. Test time is 60 minutes. Calculators are prohibited.
• 4. Scoring Method
A correct answer is 6 points, an incorrect answer is -3 points, and no answer is calculated as 0 points; the perfect score is 150 points.

Scope of the Test (same as common core)
1. Operations and Algebraic Thinking - Use the four operations with whole numbers to solve problems.
2. Number and Operations in Base Ten - Generalize place value understanding for multi-digit whole numbers.
3. Number and Operations - Fractions
4. Measurement and Data - Solve problems involving measurement and conversion of measurements from a larger unit to a smaller unit.
5. Geometry - Draw and identify lines and angles, and classify shapes by properties of their lines and angles.

6th Grade
• 1. Composition of the test:
 - Problems 1~20: 20 multiple choice questions (5 choices)
 - Problems 21~25: The answer consists only of integers from 0 to 9999 (0 and 9999 inclusive)
• 2. Eligibility requirements:
 - Any students in the 6th grade or lower can apply.
• 3. Test time is 60 minutes. Calculators are prohibited.
• 4. Scoring Method
A correct answer is 6 points, an incorrect answer is -3 points, and no answer is calculated as 0 points; the perfect score is 150 points.

Scope of the Test (same as common core)
1. Ratios and Proportional Relationships - Understand ratio concepts and use ratio reasoning to solve problems.
2. The Number System - Apply and extend previous understandings of multiplication and division to divide fractions by fractions.
 - Compute fluently with multi-digit numbers and find common factors and multiples.
3. Expressions and Equations
 - Apply and extend previous understandings of arithmetic to algebraic expressions.
 - Reason about and solve one-variable equations and inequalities.
 - Represent and analyze quantitative relationships between dependent and independent variables.
4. Geometry - Solve real-world and mathematical problems involving area, surface area, and volume.
5. Statistics and Probability
 - Develop understanding of statistical variability.
 - Summarize and describe distributions.

8th Grade
• 1. Composition of the test:
 - Problems 1~20: 20 multiple choice questions (5 choices)
 - Problems 21~25: The answer consists only of integers from 0 to 9999 (0 and 9999 inclusive)
• 2. Test time is 90 minutes. Calculators are prohibited.
• 3. Eligibility requirements:
 - Any students in the 8th grade or lower can apply.
• 4. Scoring Method
A correct answer is 6 points, an incorrect answer is -3 points, and no answer is calculated as 0 points; the perfect score is 150 points.

Scope of the Test (same as common core)
1. The Number System - Know that there are numbers that are not rational, and approximate them with rational numbers.
2. Expressions and Equations
 - Work with radicals and integer exponents.
 - Understand the connections between proportional relationships, lines, and linear equations.
3. Functions
 - Define, evaluate, and compare functions.
 - Use functions to model relationships between quantities.
4. Geometry
 - Understand congruence and similarity using physical models, transparencies, or geometry software.
 - Understand and apply the Pythagorean Theorem.
 - Solve real-world and mathematical problems involving volume of cylinders, cones, and spheres.
5. Statistics and Probability - Investigate patterns of association in bivariate data.

10th Grade
• 1.
Composition of the test:
 - Problems 1~20: 20 multiple choice questions (5 choices)
 - Problems 21~25: The answer consists only of integers from 0 to 9999 (0 and 9999 inclusive)
• 2. Test time is 90 minutes. Calculators are prohibited.
• 3. Eligibility requirements:
 - Any students in the 10th grade or lower can apply.
• 4. Scoring Method
A correct answer is 6 points, an incorrect answer is -3 points, and no answer is calculated as 0 points; the perfect score is 150 points.

Scope of the Test
 - Number Theory (Integers)
 - Graph Theory, Equations, Inequality
 - Trigonometry (Definition of Trigonometric Functions, Law of Sines, Law of Cosines)
 - Counting (Combinations, Permutations, Combinations with repetition) & Probability
 - Plane Geometry (Pythagorean Theorem, similarity, application of Pythagorean Theorem and similarity)
 - Space Geometry
 - Sequences, Series

12th Grade
• 1. Composition of the test:
 - Problems 1~20: 20 multiple choice questions (5 choices)
 - Problems 21~25: The answer consists only of integers from 0 to 9999 (0 and 9999 inclusive)
• 2. Test time is 90 minutes. Calculators are prohibited.
• 3. Eligibility requirements:
 - Any students in the 12th grade or lower can apply.
• 4. Scoring Method
A correct answer is 6 points, an incorrect answer is -3 points, and no answer is calculated as 0 points; the perfect score is 150 points.

Scope of the Test
 - Number Theory (Integers)
 - Graph Theory, Equations, Inequality
 - Trigonometry (Definition of Trigonometric Functions, Law of Sines, Law of Cosines, Various Formulas (Sum & Difference Formulas, Double Angle Formula...etc.))
 - Counting (Combinations, Permutations, Combinations with repetition) & Probability
 - Plane Geometry (Pythagorean Theorem, similarity, application of Pythagorean Theorem and similarity)
 - Space Geometry & Vectors
 - Sequences, Series
 - Complex Numbers

College/University Grade
• 1.
Eligibility requirements:
 - Problems 1~20: 20 multiple choice questions (5 choices)
 - Problems 21~25: The answer consists only of integers from 0 to 9999 (0 and 9999 inclusive)
• 2. Composition of the test:
 - Problems 1~10: 10 multiple choice questions (5 choices)
 - Problems 11~20: 5 short-answer questions (Short answer type, where the correct answer is an integer from 0 to 9999)
 - Problems 21~25: The answer consists only of integers from 0 to 9999 (0 and 9999 inclusive)
• 3. Test time is 90 minutes. Calculators are allowed.
• 4. Scoring Method
A correct answer is 6 points, an incorrect answer is -3 points, and no answer is calculated as 0 points; the perfect score is 150 points.

Scope of the Test
 - All of Calculus
 - Differential Equations
 - Polar Curves

Team Round Test
1. It consists of two tests: one for middle school students and one for high school students.
2. The total duration of the test is 120 minutes, and the use of a calculator is prohibited.
3. Each question is worth 10 points, for a total of 150 points.
4. The answers should be written directly on the test with a black ballpoint pen.
5. Minimum of 1 to 4 players per team.

Scope of the Test

Middle School Students
1. The Number System - Know that there are numbers that are not rational, and approximate them with rational numbers.
2. Expressions and Equations
 - Work with radicals and integer exponents.
 - Understand the connections between proportional relationships, lines, and linear equations.
3. Functions
 - Define, evaluate, and compare functions.
 - Use functions to model relationships between quantities.
4. Geometry
 - Understand congruence and similarity using physical models, transparencies, or geometry software.
 - Understand and apply the Pythagorean Theorem.
 - Solve real-world and mathematical problems involving volume of cylinders, cones, and spheres.
5. Statistics and Probability - Investigate patterns of association in bivariate data.
High School Students
 - Number Theory (Integers)
 - Graph Theory, Equations, Inequality
 - Trigonometry (Definition of Trigonometric Functions, Law of Sines, Law of Cosines, Various Formulas (Sum & Difference Formulas, Double Angle Formula...etc.))
 - Counting (Combinations, Permutations, Combinations with repetition) & Probability
 - Plane Geometry (Pythagorean Theorem, similarity, application of Pythagorean Theorem and similarity)
 - Space Geometry & Vectors
 - Sequences, Series
 - Complex Numbers

IMC Math Modeling Contest (IMMC)
1. A test in which participants read a given research paper, pose a problem from everyday life related to the paper, solve the problem with mathematics, and write up a solution.
2. It lasts a total of 8 hours; the solution is written as a Word file and submitted.
3. The length should be between 1,000 and 3,000 words.
4. You can use books, the Internet, a calculator, and coding, but if you use ChatGPT to copy answers, the test will be canceled and several universities in collaboration with us will be notified. Using ChatGPT is judged as cheating.
5. The IMC Math Modeling Contest (IMMC) is designed to test students' creative math skills.
6. Minimum of 2 to a maximum of 4 students per team.
7. Team members do not have to be from the same school or country.
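The scoring rule repeated across the individual-round guides above (+6 per correct answer, -3 per incorrect answer, 0 for blanks, 25 problems, maximum 150 points) can be sketched as a small function; the function name is made up for illustration.

```python
# Sketch of the IMC individual-round scoring rule described above:
# +6 per correct answer, -3 per incorrect answer, 0 for blanks,
# over 25 problems (maximum 150 points).

def imc_score(correct: int, incorrect: int, blank: int) -> int:
    assert correct + incorrect + blank == 25, "the test has 25 problems"
    return 6 * correct - 3 * incorrect

print(imc_score(25, 0, 0))   # perfect score: 150
print(imc_score(10, 5, 10))  # 60 - 15 = 45
```

Note that because wrong answers cost 3 points, leaving a problem blank scores strictly better than guessing incorrectly.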
WeBWorK Standalone Renderer

Consider the series $\displaystyle \sum_{n=1}^{\infty} a_n$ where …

In this problem you must attempt to use the Root Test to decide whether the series converges. Compute

$L = \lim_{n\to\infty} \sqrt[n]{|a_n|}$

Enter the numerical value of the limit L if it converges, INF if it diverges to infinity, MINF if it diverges to negative infinity, or DIV if it diverges but not to infinity or negative infinity.

$L =$

Which of the following statements is true?
A. The Root Test says that the series converges absolutely.
B. The Root Test says that the series diverges.
C. The Root Test says that the series converges conditionally.
D. The Root Test is inconclusive, but the series converges absolutely by another test or tests.
E. The Root Test is inconclusive, but the series diverges by another test or tests.
F. The Root Test is inconclusive, but the series converges conditionally by another test or tests.

Enter the letter for your choice here:

In order to get credit for this problem all answers must be correct.
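The specific $a_n$ in the rendered problem above was not preserved, so the Root Test computation can only be sketched for an assumed example, here $a_n = \left(\frac{2n}{3n+1}\right)^n$, whose $n$-th root tends to $2/3 < 1$ (absolute convergence, i.e. choice A for this example).

```python
# Numerical sketch of the Root Test for the assumed series
# a_n = (2n / (3n + 1))**n.  For this choice of a_n, the n-th root
# |a_n|**(1/n) simplifies algebraically to 2n / (3n + 1).

def nth_root_of_term(n: int) -> float:
    return 2 * n / (3 * n + 1)

L = nth_root_of_term(10**6)
print(L)  # approaches 2/3, so L < 1 and the Root Test gives absolute convergence
```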
GenPact Interview Questions
Last Updated: Jan 03, 2024

As a budding Software Engineer of today, one would always want to work for companies that believe in their products and aim at creating a huge impact in the world. One such company is Genpact. Genpact is a multinational professional services company that helps companies transform. The name "Genpact" stands for Generating Impact. They drive digitally-led innovation and digitally-enabled intelligent operations for their clients, guided by their objective of making the world a better place for people.

Genpact drives real-world transformation at scale, guided by their experience reinventing and running thousands of processes for hundreds of clients, many of whom are Global Fortune 500 businesses. They use design to think, digital to dream, and data and analytics to solve. They connect every dot, reimagine every process, and change the way firms function, from New York to New Delhi and more than thirty nations in between.

Genpact is a fantastic place to start your career as a new hire. It offers a great working environment as well as a welcoming ambience that is good for both personal and corporate growth. Because of Genpact's fast-paced development environment, today's young Software Engineers can learn a lot in a short amount of time. Genpact has legacy code bases for its products, as well as a few products built using the cutting-edge technologies of today. Therefore, working at such a company would definitely prove to be beneficial for any Software Engineer looking to work on exciting projects that have a great impact on the world.

This article not only explains how to reply to common Genpact interview questions but also sets out a path for both experienced and inexperienced Software Engineers to land their ideal job at Genpact.
The entire interview process is explained in this article, along with answers to the most frequently asked questions regarding the Genpact Recruitment Process.

GenPact Recruitment Process

1. Interview Process

Engineering teams, as at any other digital business, are vital to the success of Genpact's mission. As a result, Genpact is always looking to grow its technical team, hiring a mix of seasoned specialists and recent grads for a variety of internal developer positions. At Genpact, the interview process is usually broken down into three stages:
• One round of online evaluation (the Coding Round is usually conducted on portals like HackerRank).
• A single round of Technical interviews.
• One Behavioral or Human Resources Round.

2. Interview Rounds

The Genpact Interview Process for Freshers is virtually identical to the Genpact Selection Process for Experienced candidates, with the exception of the difficulty of questions posed in the successive rounds and the number of rounds (experienced folks may be interviewed directly and may be allowed to skip the first round, as decided by the recruiter). The recruitment process at Genpact is usually broken down into three stages:

1. The Online Evaluation Round or the Coding Round: The first round of the Genpact process is the online evaluation round, which is usually held on online platforms like HackerRank or HackerEarth. In this round, the candidates are asked to write solution code for two or three questions based on Data Structures and Algorithms. Around ten to twenty Multiple Choice Questions are also asked in this round, mostly based on Computer Science fundamentals.
To ace this round, one must have a solid grasp of programming concepts and of subjects like Database Management Systems, Operating Systems, SQL, Computer Networks, Object-Oriented Programming, and a few programming languages such as Java, C++, C, and JavaScript. This round is usually eliminative in nature, and only the candidates who score well in both sections move on to the next round of interviews in the Genpact Recruitment Process. The Genpact technical multiple choice questions and programming questions are of an easy to medium complexity level, and anyone familiar with the fundamentals of computer science should be able to breeze through this round with ease.

2. The Technical Interview Round: The second round of the Genpact Hiring Process is a Technical Round in which the applicants are first asked a couple of questions on Data Structures and Algorithms. They might also be asked to code their solutions for the interviewers, and the code quality, logic, etc. are thoroughly assessed. Then follows a series of questions used to test the technical knowledge of the candidate. Candidates might be asked to write SQL queries performing various operations on relational databases. Questions about the candidate's experience, as well as questions about cloud-based technologies, may also be asked. Knowing how Application Programming Interfaces (APIs) function and how distributed systems are designed can be a huge plus in this interview, making the candidate stand out in the eyes of the interviewers. Questions about any aspect of the candidate's education are likely to be asked during this round. To pass this Genpact walk-in interview round, you must have a solid understanding of computer science basics.
Candidates should be familiar with data structures, algorithms, database management systems, operating systems, networking, object-oriented programming ideas, and their preferred programming language, such as C++, Java, or Python. A few Unit Testing and Functional Testing questions may also be asked in this round if the candidate is an experienced Software Engineer. The panel will also ask you questions regarding your resume, so be sure you know what you have written.

3. The Human Resources (HR) or Behavioral Round: The Human Resources round is the final phase of the Genpact Recruitment Process, and it tries to determine whether or not the candidate is a cultural match for Genpact. Candidates can learn more about Genpact and Genpact tools on the following website: https://www.genpact.com/. Questions based on puzzles can also be asked in this round to check how smart the candidate is in general and how well he or she reacts to awkward and tough situations. The following are some of the questions that may be asked during the HR round:
• What are your strengths and weaknesses?
• Tell me a little bit about Genpact.
• What value do you bring to Genpact, or what is your vision for generating an impact in the world while working at Genpact?
• Will you be able to relocate to other parts of India?
• Explain what you know about yourself.
• Why are you interested in working at Genpact?
• How much do you expect to be paid? This is a challenging question to respond to; even the most seasoned employees get asked it. You might wish to learn what the company's typical raise is before responding.

If an applicant meets all of the above criteria and has demonstrated great technical skills in the previous rounds, he or she will almost certainly be recruited at Genpact, India.

3. Eligibility Criteria
The fresher qualifying standards of most other organizations are comparable to GenPact's. Candidates should check the following eligibility criteria (both graduation and general criteria) before applying for the GenPact Recruitment Process:

Graduation Criteria: The following are the graduation criteria for the GenPact recruitment of Software Engineers:
• Branch of Study: one of
 1. Bachelor of Engineering (B.E.) / Bachelor of Technology (B.Tech) in CSE/ECE/IT/EEE/Telecom.
 2. M.Sc (Master of Science) in Computer Science & Information Technology or related fields.
 3. M.Tech (Master of Technology) / M.E. (Master of Engineering) in Computer Science & Information Technology or related fields.
 4. B.C.A. (Bachelor of Computer Application) or M.C.A. (Master of Computer Application).
• Mode of Study: Full-time courses recognised by the Central or State governments of India (not part-time or correspondence courses).
• Minimum percentage required in graduation: Sixty Percent (60%).
• Backlogs: No backlogs may be active during the GenPact Recruitment Process.

Academic Criteria: The mandatory academic criteria for the GenPact Selection Process:
• Minimum percentage required in Senior Secondary or Diploma: Seventy Percent (70%).

General Requirements (or Required Skills): To secure a job at GenPact, a candidate must also have a set of general abilities. A handful of them are listed below:
• Exceptional interpersonal and communication abilities.
• A technical bent of mind.
• A professional attitude toward one's work.
• Excellent organizational and detail-oriented skills.
• The ability to understand and perform tasks independently.
• Flexibility for day and night shifts.

GenPact Technical Interview Questions: Freshers and Experienced

1.
What is meant by Proactive Updates, Retroactive Updates, and Simultaneous Updates in Database Management Systems?

The meanings of the given terms are as follows:
• Proactive Updates: These changes are made to the database before they become effective in the real world.
• Retroactive Updates: These updates are applied to the database after they have become effective in the real world.
• Simultaneous Updates: These updates are applied to the database at the same moment as they become effective in the real world.

2. What is meant by Intension and Extension in Database Management Systems?

• Intension: Intension, often known as the Database Schema, refers to the description of a database. It is defined during database design and remains largely unmodified.
• Extension: The set of tuples present in the database at any given time is referred to as the extension. This changes as tuples are created, updated, and deleted. The data in the database at a particular point in time is therefore called the database's extension or, more generally, a snapshot of the database.

3. Define Index Hunting. What is it used for in Database Management Systems?

Index hunting is the process of enhancing a collection of indexes. It is done because indexes improve query performance and reduce the time required to process a query. Index hunting improves query performance in the following ways:
• The most optimal queries are suggested using the query optimizer.
• Measurements such as index usage, query distribution, and performance are used to check the effect.
• The problem queries are optimized against the databases.

4. What do you understand about transactions in SQL? What are the properties of transactions in SQL?

A transaction is a logical unit of work performed on a database.
Transactions are logically ordered units or sequences of work that can be completed manually by a human or automatically by a database application. The propagation of one or more changes to the database is referred to as a transaction: whether you create, update, or delete a record in a table, you are completing a transaction on that table. To preserve data integrity and handle database errors, it is critical to keep track of these transactions. In practice, you group several SQL queries together and run them all at the same time as part of a transaction.

Transactions in SQL have the four standard properties listed below, frequently abbreviated as ACID:
• Atomicity: Atomicity ensures the successful completion of all operations within the work unit. Otherwise, the transaction is aborted at the point of failure, and all preceding operations are reversed.
• Consistency: Consistency ensures that the database changes state correctly when a transaction is successfully committed.
• Isolation: Isolation allows transactions to execute independently of one another, transparent to each other.
• Durability: Durability ensures that the outcome of a committed transaction persists in the event of a system failure.

5. Define Subqueries in SQL. Given a table of employees, write down a SQL query using subqueries to find all the employees with salaries greater than five thousand.

A subquery in SQL is a query that is defined inside another query to retrieve data from the database. The outer query is referred to as the main query, while the inner query is referred to as the subquery. Subqueries are always processed first, and the subquery's result is then passed on to the main query.
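The atomicity property described in the transactions answer above can be sketched concretely. This is a minimal demonstration using Python's built-in sqlite3 module; the table and column names are made up for illustration.

```python
# Sketch of atomic rollback: either every statement in the transaction
# commits, or the whole unit is undone.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 50 "
                     "WHERE name = 'alice'")
        raise RuntimeError("simulated failure before the matching credit")
except RuntimeError:
    pass

# The debit above was rolled back, so Alice's balance is unchanged.
balance = conn.execute("SELECT balance FROM accounts "
                       "WHERE name = 'alice'").fetchone()[0]
print(balance)  # 100
```

The `with conn:` block is sqlite3's transaction context manager: a normal exit commits, while an exception triggers a rollback, which is exactly the all-or-nothing behavior atomicity requires.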
It can be nested inside any query, including SELECT, INSERT, UPDATE, and DELETE statements. Any comparison operator, such as >, <, or =, can be used in a subquery. The syntax of a subquery in SQL is given as follows: SELECT columnName [, columnName ] FROM tableOne [, tableTwo ] WHERE columnName OPERATOR (SELECT columnName [, columnName ] FROM tableOne [, tableTwo ]); A SQL query using subqueries to find all the employees with salaries greater than five thousand in the employees table is given below: SQL> SELECT * FROM EMPLOYEES WHERE EMPLOYEE_ID IN (SELECT EMPLOYEE_ID FROM EMPLOYEES WHERE SALARY > 5000); 6. State a few benefits of using C++ as a language. A few benefits of using C++ as a language are as follows: • C++ not only preserves all parts of the C language but also simplifies memory management. • C++ is a highly portable language, which implies that code written in it may operate on any platform. • Classes, objects, inheritance, polymorphism, and abstraction are all concepts of C++, which is an object-oriented programming language. • Inheritance is a notion in C++. Through inheritance, redundant code can be removed and existing classes can be reused. • Data hiding aids the programmer in creating secure applications that are not vulnerable to intruders. • Message passing is a communication method that allows objects to communicate with one another. • C++ has a large library of functions. 7. State the pros and cons of using Object Oriented Programming Languages. The pros of using Object-Oriented Programming Languages are as follows: • Object-Oriented Programming Languages follow a bottom-up approach which helps programmers in simulating the real world and its entities. • They make code re-usability possible. • Using abstraction, the user is not exposed to superfluous data. • Object-Oriented Programming Languages require designers to go through a lengthy and thorough design phase, which yields a better design with fewer faults.
• Object-Oriented Programming Languages break down a large problem into smaller pieces. • Programmers are able to achieve their objectives more quickly via Object-Oriented Programming Languages. • The usage of Object-Oriented Programming Languages reduces the complexity of the code written by programmers. • Code may be easily redesigned and extended without affecting other features. The cons of using Object-Oriented Programming Languages are as follows: • It is necessary to do a lot of planning in order to use these languages. • There is a tendency for classes to be overly generalized in Object-Oriented Programming Languages. • These are not suitable for small problems as they lead to over-complication. • In certain instances, programs written in Object-Oriented Programming Languages can consume a lot of memory, which is obviously unwanted. 8. State your understanding of Pure Object Oriented Programming Languages. Is Java one? If not, then why? A programming language is referred to as a pure object-oriented language if everything within the program is treated as an object. A pure object-oriented programming language does not support primitive types. A pure object-oriented programming language must also satisfy the following properties: • Encapsulation: Encapsulation can be defined as the wrapping up of data members and member functions into a single entity. In encapsulation, data and the functions that alter it are bound together in Object-Oriented Programming. Let us take a real-world example of encapsulation into consideration: a company has various departments, including accounting, finance, and sales, to name a few. All financial transactions are handled by the finance department, which also keeps track of all financial data. The sales department, in turn, is in charge of all sales-related activities and keeps track of all sales.
For whatever reason, an official from the finance department may occasionally request all of the sales data for a specific month. He is not allowed to access the data of the sales department directly in this case. He must first speak with a sales officer and request that he deliver the needed information. This is what encapsulation is: the data of the sales department, as well as the personnel who can manipulate it, are grouped under the single name "sales department". Data abstraction or hiding is also a result of encapsulation. Because encapsulation hides data, it is a good idea to use it. The data of any of the sections, such as sales, finance, or accounting, is hidden from any other component in the above example. • Inheritance: Inheritance refers to a class's capacity to derive features and traits from another class. It is one of the most significant characteristics of Object-Oriented Programming. □ Sub Class or Derived Class: A Sub Class or Derived Class is a class that inherits properties from another class. □ Base Class or Super Class: A Base Class or Super Class is a class whose properties are inherited by subclasses. Inheritance supports the concept of "reusability", which means that if we want to create a new class and there is already one that contains some of the code we need, we can derive our new class from the old one. We are reusing the old class's fields and functions in this way. For example, Dog, Cat, and Cow are Derived Classes of an Animal Base Class. • Polymorphism: Polymorphism refers to the fact that something can exist in multiple forms. Polymorphism, in simple terms, is the ability of a message to be displayed in multiple forms. A person might have a variety of roles at the same time: he is a father, a spouse, and an employee. As a result, the same person behaves differently in different settings. This is called polymorphism. In different situations, an operation may behave differently.
The behaviour is determined by the data types utilised in the operation. C++ supports operator overloading and function overloading, both of which are forms of polymorphism. □ Operator Overloading: Operator overloading is the process of making an operator behave differently in different situations. □ Function Overloading: When a single function name is used to accomplish many tasks, this is known as function overloading. Polymorphism is frequently implemented via inheritance. An example of polymorphism is given below: consider the following scenario: we need to construct a function to add several integers; sometimes there are two integers, and other times there are three integers. We can write sum methods with the same name but different parameters, and the appropriate method will be invoked based on the arguments passed. • Abstraction: One of the most fundamental characteristics of object-oriented programming in C++ is data abstraction. Abstraction refers to revealing only the most important information while concealing the details. Data abstraction refers to exposing only the most important aspects of the data to the outside world while concealing the implementation specifics. Consider the case of a man at the wheel of a car. The man only knows that pressing the accelerator will increase the car's speed and that applying the brakes will stop it, but he has no idea how the speed is increased by pressing the accelerator, nor does he understand the car's inner mechanism or how the accelerator, brakes, and other controls are implemented in the car. This is the definition of abstraction. There are two ways in which abstraction can be implemented: □ Abstraction using Classes: We may use classes to implement abstraction in C++. Using the available access specifiers, a class assists us in grouping data members and member functions. A class has the ability to control which data members are visible to the outside world and which are not.
□ Abstraction in Header Files: Header files are another sort of abstraction available in C++. Consider the pow() method in the math.h header file, for example. Whenever we need to calculate the power of a number, we simply call the function pow() declared in the math.h header file and supply the numbers as arguments, without knowing the underlying algorithm through which the function calculates the power. • All predefined types are objects. • All user-defined types are objects. • All operations on objects must be performed through methods that the objects expose. Because predefined data types are not treated as objects in Java, it is not a pure object-oriented programming language. 9. State the various levels of data abstraction in Database Management Systems. In Database Management Systems, there are three basic levels of data abstraction. They are as follows: • Physical Level: The physical level of abstraction specifies how data is stored and is the lowest level of abstraction. • Logical or Conceptual Level: After the Physical level, there is the Logical level of abstraction. This level decides what data is saved in the database and how the data items relate to one another. • View Level: The highest level of abstraction, the View Level describes only a portion of the entire database. 10. State some benefits and drawbacks of Kernel Level Threads. Some of the benefits of kernel-level threads are as follows: • The kernel can manage and schedule all threads. • The scheduler may decide to allot more CPU time to a process with a large number of threads. • Kernel-level threads are good for applications that frequently block. Some of the drawbacks of Kernel Level threads are as follows: • Implementing kernel threads is harder than implementing user-level threads. • A kernel-level thread is usually slower than a user-level thread. 11. List a few types of Operating Systems which you know about. Also list some examples of Operating Systems.
A few types of Operating Systems which I know about are as follows: • Batch Operating System: A batch operating system is one that automatically sequences jobs without the need for human interaction. • Time-Sharing Operating System: Many people can share computer resources with a time-sharing operating system. There is maximum resource utilization in this type of operating system. • Distributed Operating Systems: A distributed operating system (DOS) is software that manages a number of computers and makes them appear to be one. • Network Operating System: In Network Operating Systems, computers with various operating systems can connect to a common network. It is usually used for security purposes. • Real-Time Operating Systems: Real-Time Operating Systems (RTOS) are employed in situations where a great number of events, most of which are external to the computer system, must be accepted and processed quickly or under tight deadlines. Industrial control, telephone switching equipment, flight control, and real-time simulations are examples of such uses. Some examples of Operating Systems are as follows: • Windows (Graphical User Interface based Operating System for Personal Computers) • GNU/Linux (for Personal Computers, File and Print Servers, Workstations, ISPs, Three-tier Clients and Servers) • iOS (Apple's OS for iPhone, iPad, and iPod Touch) • macOS (for Apple's Personal Computers and Workstations like MacBook, iMac, etc.) • Android (Google's Operating System for smartphones, tablets, smartwatches, etc.) 12. What are the different types of Real Time Operating Systems? The term "Real-Time Operating System" refers to a system that is subject to real-time constraints, i.e., the response must be guaranteed within a certain time limit or the system must satisfy a specific deadline. For instance, a flight control system, real-time monitoring, and so on.
Real-time systems with timing constraints include the following types: • Hard Real-Time Operating System — A system of this type must never miss a deadline. The implications of missing a deadline might be disastrous. If tardiness increases, the utility of a hard real-time system's result declines abruptly, and it may even become negative. Tardiness refers to how late a real-time system completes a task in comparison to its deadline. As an example, consider a flight controller system. • Soft Real-Time Operating System — This type of system can occasionally miss its deadline with a low likelihood. There are no dire implications of missing the deadline. With increasing delay, the utility of a soft real-time system's result gradually reduces. Switches on phones are examples of Soft Real-Time Operating Systems. 13. What do you understand about Threads in Operating Systems? A thread is a single sequential flow of task execution in a process, and it is also known as a thread of execution or a thread of control. Thread execution is possible within any operating system's process. Aside from that, several threads can exist within a process. A distinct program counter and a stack of activation records and control blocks are used by each thread of the same process. Threading is frequently referred to as a lightweight mechanism. A process can be broken down into numerous threads. Many tabs in a browser, for example, can be considered as threads. MS Word employs many threads: it prepares the text in one thread, receives input in another thread, and so on. Within a process, a thread is a single sequence stream. Threads have many of the same qualities as processes, hence threads are referred to as lightweight processes. Threads are executed one after the other, giving the appearance that they are running in parallel. Each thread has a unique state. Each thread is unique and has the following of its own: • A program counter. • A set of registers. • A stack area.
Threads are not self-contained since they share code, data, and other Operating System resources. There are two types of threads: • User Level Thread (ULT) — This type of thread is created in a user-level library rather than using system calls. Thread switching does not necessitate a call to the Operating System or a kernel interrupt. The kernel is unaware of user-level threads and treats them as single-threaded processes. • Kernel Level Thread (KLT) — Here, the kernel is in charge of thread management. Instead of each process having its own thread table, the kernel maintains a master thread table that keeps track of all the threads in the system. In addition, the kernel keeps track of the processes using the standard process table. The kernel of the operating system includes system calls for creating and managing threads. 14. What are the advantages of threads over processes? The advantages of threads over processes are as follows: • Faster context switching: Thread context switching is faster than process context switching. A process context switch necessitates greater CPU overhead. • Responsiveness: If the process is separated into numerous threads, the output of one thread can be returned promptly as it completes its execution. • Resource sharing: Within a process, resources such as code, data, and files can be shared across all threads. It is worth noting that the stack and registers cannot be shared between threads; each thread has its own registers and stack. • Effective multiprocessor system utilization: If a single process has numerous threads, we can schedule these threads on multiple processors. The process will run more quickly as a result of this. • Increased System Throughput: When a process is separated into several threads and each thread function is treated as a single task, the number of jobs done per unit of time increases, boosting the system's throughput.
• Communication: Because threads share a common address space, communication between them is easier. Between two processes, we must use a special communication approach to communicate. 15. What do you understand about Multithreading in Operating Systems? Multithreading is the ability of a central processing unit (CPU) (or a single core in a multi-core processor) to run many threads of execution at the same time, with the operating system supporting it. Multithreading is not the same as multiprocessing. Threads in a multithreaded application share resources of a single core or of several cores, such as computational units, CPU caches, and the Translation Lookaside Buffer or TLB (a translation lookaside buffer is a memory cache that speeds up access to user memory. It is a component of the Memory Management Unit or MMU on the chip. The TLB, also known as an address translation cache, records recent virtual memory to physical memory translations). Multithreading tries to increase the use of a single core by exploiting thread-level parallelism as well as instruction-level parallelism, whereas multiprocessing systems have several full processing units in one or more cores. Because the two approaches are complementary, they are used together in practically all modern system architectures, which include multiple CPUs and multithreading cores. For example, one single process may have two threads of execution that run on one processor. 16. State some advantages and some disadvantages of User Level Threads. Some advantages of User Level Threads are as follows: • User threads can be easily implemented over kernel threads. • User-level threading can be used on operating systems that do not support kernel-level threading. • It is faster and more efficient: context switch times are shorter than for kernel-level threads.
• It does not require any changes to the operating system, and representing threads at the user level is simple. • The thread's registers, program counter, stack, and control block are stored in the address space of the user-level process. • It is easy to create, switch, and synchronize threads without kernel intervention. Some disadvantages of User Level Threads are as follows: • User Level Threads lack coordination between the threads and the kernel. • If one thread causes a page fault, the whole process is blocked. 17. Explain the concept and two types of Data Independence. Data independence is a concept according to which a change in data at one level of the DBMS three-tier architecture should not influence data at another level. There are two types of data independence: • Physical Data Independence: According to Physical Data Independence, any change in the physical location of tables and indexes should have no bearing on the conceptual level or the external view of data. This data independence is simple to achieve and is supported by the majority of Database Management Systems. • Conceptual Data Independence: According to Conceptual Data Independence, data at the conceptual and external levels must be independent. This means that any changes to the conceptual schema should have no impact on the external schema. Adding or removing attributes from a table, for example, should have no effect on the user's view of the table. However, unlike physical data independence, this form of independence is more difficult to achieve since changes in the conceptual schema are mirrored in the user's view. 18. What are Database objects in Database Management Systems? A database object is any object defined in a database that is used to store or reference data. Everything we create using the CREATE command is called a database object. It can be used to store and manipulate data.
Some examples of database objects are as follows: • Table - A table is a basic storage unit composed of rows and columns. Its syntax is as follows: CREATE TABLE [schema.]table (column datatype [DEFAULT expr][, ...]); • Views - Views are a logical representation of subsets of data from one or more tables. A view is a logical table that is based on another view or a table. A view is similar to a window through which data from tables can be viewed or updated, but it does not contain any data of its own. Base tables are the tables that a view is built on. In the data dictionary, the view is saved as a SELECT statement. Its syntax is as follows: CREATE [OR REPLACE] [FORCE|NOFORCE] VIEW view [(alias[, alias]...)] AS subquery [WITH CHECK OPTION [CONSTRAINT constraint]] [WITH READ ONLY [CONSTRAINT constraint]]; • Sequence - Sequences generate primary key values. A sequence is a database object that has been created by a user and can be shared by several users to generate unique integers. The creation of a primary key value, which must be unique for each row, is a common use for sequences. An internal Oracle routine generates the sequence and increments (or decrements) it. Its syntax is as follows: CREATE SEQUENCE sequence [INCREMENT BY n] [START WITH n] [{MAXVALUE n | NOMAXVALUE}] [{MINVALUE n | NOMINVALUE}] [{CYCLE | NOCYCLE}] [{CACHE n | NOCACHE}]; • Indexes - Indexes are used to improve the performance of some queries on the database. This database object is used to create an index in the database. It is a schema object that can speed up row fetching by using a pointer. Indexes can be created explicitly or automatically. If you don't have an index on a column, a full scan of the table will occur. Indexes provide direct and fast access to table rows. Their goal is to reduce disk Input and Output needs by using an indexed path to quickly locate data. The index is used and maintained automatically by the database server.
Once an index is created, no direct action is required by the user. Indexes are logically and physically independent of the table they index. This means they can be created or deleted at any time and do not affect the base tables or other indexes. Its syntax is as follows: CREATE INDEX index ON table (column[, column]...); • Synonyms - This database object is used to create an alternative name for another database object. It creates a synonym to make object access easier (another name for an object). You can use synonyms to make referring to a table owned by another user easier and to shorten long object names. You must prefix the table name with the name of the user who created it, followed by a period, to refer to a table owned by another user. Creating a synonym for a table, view, sequence, procedure, or other object avoids the requirement to qualify the object name with the schema and gives you an alternate name. This approach is very handy when dealing with long object names, such as views. In terms of syntax, the meanings of the various parts are as follows: □ PUBLIC: creates a synonym accessible to all users □ SYNONYM: the name of the synonym to be created □ OBJECT: identifies the object for which the synonym is created Its syntax is as follows: CREATE [PUBLIC] SYNONYM synonym FOR object; 19. State some benefits of using views in a Database Management System. Some benefits of using views in a Database Management System are as follows: • Views can represent a subset of the data in a table. As a result, a view can limit the extent to which the underlying tables are exposed to the outside world: a user may be allowed to query the view but not the whole of the base table. • Views can be used as aggregated tables, in which the database engine aggregates data (sum, average, and so on) and presents the calculated results alongside the data. • Views allow us to combine and simplify numerous tables into a single virtual table.
• Views take up extremely minimal storage space; the database simply maintains the definition of a view, not a copy of all the data it displays. • Views can hide data complexity. • Views can give additional security depending on the SQL engine utilized. 20. What do you understand about the concept of Data Warehousing? Data warehousing is the process of collecting, extracting, processing, and importing data from numerous sources and storing it in a single database. A data warehouse can be thought of as a central repository for data analytics that receives data from transactional systems and other relational databases. A data warehouse is a collection of historical data from an organization that aids in decision making. The data from a data warehouse is further used for data visualization, reporting, business intelligence, and many more activities. 21. State your understanding of the Entity Relationship (or the ER) model. Also, define the following terms: Entities, Entity Types, and Entity Sets in Database Management Systems. An Entity-Relationship model is a diagrammatic approach to database design in which real-world objects are represented as entities and the relationships between them are indicated. For example, an ER diagram for a student studying in college would show entities such as Student and College and the relationship between them. The definitions of the given terms are as follows: • Entity: A real-world object with attributes that indicate the object's qualities is defined as an entity. A student, an employee, or a teacher, for example, represents an entity. • Entity Type: A group of entities with the same attributes is referred to as an entity type. An entity type is represented by one or more linked tables in a database. Attributes can be thought of as traits that distinguish the entity from others. A student, for example, is an entity type with attributes such as student id, student name, and so on.
• Entity Set: An entity set is a collection of all the entities of a given entity type in a database. For instance, the group of all students, the group of all employees, the group of all teachers, and so on each represents an entity set. 22. What do you understand about the ACID properties in Database Management Systems? ACID is an acronym for Atomicity, Consistency, Isolation, and Durability. A transaction is a logical unit of work that reads and, in some instances, updates the contents of a database. Transactions employ read and write operations to access data. To maintain database consistency, certain properties are followed before and after a transaction. They are referred to as the ACID properties. Let us now take a closer look at each of these characteristics: • Atomicity: When a transaction is atomic, it means that it either happens in its entirety or it does not happen at all. There is no middle ground, which means that a transaction cannot take effect partially. Each transaction is viewed as a single unit that is either completed or not. It entails the two operations listed below. □ Abort: If a transaction aborts, any database updates it made are lost. □ Commit: When a transaction commits, the changes contained within it become visible. Atomicity is often known as the "all or nothing" rule. Consider a transaction T that is made up of two operations, T1 and T2: a $100 transfer from account X to account Y. If the transaction fails after T1 completes but before T2 (for example, after write(X) but before write(Y)), the amount is deducted from X but not added to Y. As a result, the database is in an inconsistent state. Therefore, to ensure that the database state is valid, the transaction must be completed in its entirety. • Consistency: To ensure that the database is consistent, integrity constraints must be maintained both before and after the transaction. It relates to the correctness of a database.
As seen in the example above, the total amount before and after the transaction must be maintained. The total before T is 500 + 200 = 700. The total after T is 400 + 300 = 700. As a result, the database is consistent. There is inconsistency when T1 succeeds but T2 fails; as a result, T is incomplete. • Isolation: This property ensures that several transactions can run concurrently without generating database state inconsistencies. Transactions are carried out in a non-disruptive manner. Changes made in one transaction are not visible to other transactions until the changes of that transaction are written to memory or committed. This feature ensures that concurrently running transactions produce the same state as if they were run sequentially in some order. To better understand Isolation, consider the following scenario. Let us say X = 50,000 and Y = 500, and there are two transactions T and T″, where T transfers 50 units from X to Y. Assume T has been running until it writes the new value of X, after which T″ starts. As a result of this interleaving, T″ reads the new value of X but the old value of Y, and the sum computed by T″ (X + Y = 49,950 + 500 = 50,450) is inconsistent with the sum at the end of transaction T (X + Y = 49,950 + 550 = 50,500). This appears as a database inconsistency due to the loss of 50 units. As a result, transactions must be carried out in isolation, and modifications should be visible only after they have been written to main memory. • Durability: This property ensures that database updates and modifications are stored on and written to disk after the transaction has concluded, and that they survive even if the system fails. These modifications are recorded permanently in non-volatile memory. As a result, the transaction's effects are never lost. In summary, according to the ACID properties, each transaction is a series of actions that works as a single unit, provides consistent outcomes, is isolated from other operations, and the modifications it makes are durably stored. 23.
Define the terms: Aggregation and Atomicity in Database Management Systems. The definitions of the given terms are as follows: • Aggregation: Aggregation is an Entity Relationship model feature that allows one relationship set to interact with another relationship set. • Atomicity: The atomicity attribute specifies that a database alteration must either follow all of the rules or not take place at all. As a result, if one portion of the transaction fails, the transaction as a whole fails. GenPact Interview Preparation 1. Tips to Prepare for GenPact Interview Readers of this article should strongly consider the following interview tips while preparing for the Genpact hiring process: • Put yourself to the test to see how quickly you can solve a set of coding problems. This will help you improve your problem-solving skills as well as your logical thinking. • Recognize your strengths and weaknesses, as well as the experiences and talents that make you qualified for the job. Consider coming up with a one-of-a-kind example to help you stand out. The interviewer wants to know what motivates you, why you are interested in the company, and how your previous work experience has prepared you for this position. • Prepare to demonstrate leadership, teamwork, professional and academic success, communication skills, and the ability to overcome obstacles by sharing your experiences. • Be aware of the tools and technologies which are being used at Genpact nowadays by reading up on their blog or website, and get to know more about the company in general. • The better prepared you are for an interview, the more likely you are to get hired. Learn everything there is to know about interviews, including how to prepare for them, the stages, rounds, and questions. For human resource development (HRD) and management interviews, answers to frequently asked questions should be prepared ahead of time.
You can also learn about, among other things, firm performance, organizational structure, vision, and work-life balance. • Maintaining a positive and welcoming attitude is always beneficial. Introduce yourself confidently to start the conversation on a favourable note. Practice introducing yourself before the interview in front of a mirror to be confident during the entire hiring process. • Look over the company's interview experience articles. This will give you a good idea of what to expect and how the interview will go. • Consider conducting a mock interview. This will provide you with an idea of how the interview will go. You may use the InterviewBit platform for it. You will be partnered with your peers, and both of you will be permitted to interview, which will be advantageous to you. Frequently Asked Questions 1. How to answer the following behavioral question: What do you know about GenPact? You can begin with a brief overview of GenPact. For example, Genpact is a multinational professional services firm that assists businesses in their transformation. Genpact is an acronym for "Generating Impact". They help their clients achieve their goal of making the world a better place for people by driving digitally-led innovation and digitally-enabled intelligent operations. After that, you can discuss one or two key Genpact initiatives that you particularly enjoyed and how they impacted your life, in order to persuade the interviewer that you are also interested in working for the company. You can also discuss a couple of the company's core values as well as some of the company's most recent projects. 2. How to answer the following behavioral question: Why do you want to join GenPact? GenPact attracts and retains top people by providing comprehensive and rapid growth possibilities through a digital learning environment that is available at any time, from any location, and on any device. GenPact is a fantastic place to start your career as a new hire. 
It provides a great working environment as well as a welcoming atmosphere with a positive vibe that promotes personal and corporate progress. These are a few of the reasons why any budding coder of today should join GenPact. 3. How should you answer the following behavioral question: Explain all your strengths and weaknesses? This is definitely one of the most frequently asked questions in the Human Resource (HR) interview rounds. You should not sound overconfident when answering this question; on the other hand, overstating your flaws comes across as unreliable. Honesty is always a good idea. Here are some strengths that can be discussed in the interview: • I am a goal-oriented team player and eager to learn new things every day. • My communication skills are exceptional. • My ability to handle pressure and stay calm during difficult times sets me apart. • I have strong leadership qualities that will benefit me in the long run. Now let's look at some of the weaknesses worth mentioning in the interview: • When I am new to a certain technology or situation, I have a lot of doubts about myself. • I am a self-critical person. • I am afraid of standing on a stage and delivering speeches (i.e., afraid of public speaking). Admitting your weaknesses is essential, but it is also important to explain how you have worked to fix them. This will give the interviewer the impression that you are always looking for ways to improve yourself. 4. Is the Genpact interview easy? It does not matter whether the GenPact interview is challenging or easy. The truth is that the more you prepare for the interview, the easier it will be to pass. Acquire sufficient understanding of, among other things, interview stages, rounds, and questions. The candidate's CV (prior experience, past work) and the job requirements are used to shape the interview. Programming languages, logical thinking, and the products, software, and projects on which the candidate has recently or previously worked are among the topics covered. 5.
Is Genpact good for freshers? Yes, GenPact is a good company for freshers as it provides a fast-paced development environment which will help them to learn a variety of things in a very short duration of time. The codebases at GenPact are of both types: legacy code bases as well as code bases written using the cutting-edge technologies of today. Hence, the learning curve at GenPact is pretty steep. 6. What should you say in response to the following question: Why are you qualified for this position? When asked questions like these, emphasize your skills and abilities that are relevant to the position. An instance of a typical response to the given question could be the following: "For a variety of reasons, but most significantly because of my will to succeed, I am confident that I am qualified for this position. I have all of the skills required to succeed in this profession. I am constantly attempting to educate myself on the most recent technologies. My abilities and skills are a fantastic match for the requirements of this post. Above all, my communication and leadership skills make me an excellent fit for that position." 7. What is the salary of a fresher at Genpact? The average compensation for a Genpact Fresher is Rs. 2,01,795 per year. At Genpact, fresher wages can range from Rs. 1,22,171 to Rs. 2,48,684 per year.
How Do You Put the Sides of a Triangle in Order According to Size if You Know Two Angles of the Triangle? Solving equations can be tough, especially if you've forgotten or have trouble understanding the tools at your disposal. One of those tools is the subtraction property of equality, and it lets you subtract the same number from both sides of an equation. Watch the video to see it in action!
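As a quick illustration of the subtraction property of equality described above (our own worked example, not taken from the video): subtracting the same number from both sides of an equation preserves the equality, which lets you isolate the variable in a one-step equation.

```latex
\begin{align*}
x + 5 &= 12 \\
(x + 5) - 5 &= 12 - 5 && \text{subtract } 5 \text{ from both sides} \\
x &= 7
\end{align*}
```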
Barthel group Barthel research group, Duke Physics condensed matter theory and numerics Welcome Publications Research Team Talks & Teaching CV Research highlights: Dynamic response of strongly-correlated quantum systems. Dynamic response. - The response of a quantum system to a perturbation allows us to probe its low-energy physics, in particular, the nature and spectrum of excitations. Typical examples of response functions are spectral functions in electronic systems and dynamic structure factors in spin systems. They are the natural observables in scattering experiments. We have worked on tensor network state techniques for the computation of precise response functions and apply them to investigate quantum magnets and ultracold atom systems. Method development. - In [1], we showed how to compute dynamic structure factors of strongly correlated systems at finite temperatures using matrix-product-state (MPS) purifications and linear prediction. Due to the increase of entanglement with time, the precise simulations are usually limited to finite time windows. Linear prediction helps to extrapolate to longer times and substantially improve spectral resolution. Furthermore, the maximum reachable times can be increased by at least a factor of two by rearranging time evolution steps to reduce entanglement growth in the purifications [2,3]. We also showed how infinite boundary conditions, i.e., a simulation in the thermodynamic limit using infinite MPS, can be used to reduce the number of time evolution runs to just two [4,5]. Lastly, as an alternative to purifications, one can employ a stochastic approach called METTS [White 2009], which has advantages for 2D systems [6,7]. Generally, the big advantage of these techniques is that they work directly in the real-time/real-frequency domain and hence do not require the (usually uncontrolled) analytical continuation employed in corresponding quantum Monte Carlo simulations [1]. Applications. 
- We have applied these techniques to several systems. For example, we worked with experimentalists of the Helmholtz-Zentrum Berlin and Oak Ridge National Lab to study thermal spin structure factors of quasi-1D spin-1/2 magnets as obtained by neutron scattering [8]. We have also determined the universal scaling function for bosonic spectral functions in the z=2 quantum critical regime [2]. In [5], we investigated bilinear-biquadratic spin-1 chains, which describe quantum magnets like CsNiCl3, NENP, or LiVGe2O6. The Haldane phase, which features symmetry-protected topological order, is usually described by the O(3) nonlinear sigma model. However, we found qualitative discrepancies with this approximate field-theoretical description. The evolution of multisoliton continua in the critical phase of the spin-1 chains can be understood qualitatively through the renormalization group analysis of the corresponding level-one SU(3) Wess-Zumino-Witten (WZW) model. Numerically, we found previously unnoticed stable excitations above the contracting multisoliton continua. [1] "Spectral functions in one-dimensional quantum systems at finite temperature using the density matrix renormalization group" T. Barthel, U. Schollwöck, and S. R. White arXiv:0901.2342, pdf, Phys. Rev. B 79, 245101 (2009) [2] "Scaling of the thermal spectral function for quantum critical bosons in one dimension" T. Barthel, U. Schollwöck, and S. Sachdev arXiv:1212.3570, pdf [3] "Precise evaluation of thermal response functions by optimized density matrix renormalization group schemes" T. Barthel arXiv:1301.2246, pdf, New J. Phys. 15, 073010 (2013) [4] "Infinite boundary conditions for response functions and limit cycles in iDMRG, demonstrated for bilinear-biquadratic spin-1 chains" M. Binder and T. Barthel arXiv:1804.09163, pdf, Phys. Rev. B 98, 235114 (2018) [5] "Low-energy physics of isotropic spin-1 chains in the critical and Haldane phases" M. Binder and T. Barthel arXiv:2005.03643, pdf, Phys. Rev.
B 102, 014447 (2020) [6] "Minimally entangled typical thermal states versus matrix product purifications for the simulation of equilibrium states and time evolution" M. Binder and T. Barthel arXiv:1411.3033, pdf, Phys. Rev. B 92, 125119 (2015) [7] "Symmetric minimally entangled typical thermal states for canonical and grand-canonical ensembles" M. Binder and T. Barthel arXiv:1701.03872, pdf, Phys. Rev. B 95, 195148 (2017) [8] "Multispinon continua at zero and finite temperature in a near-ideal Heisenberg chain" B. Lake, D. A. Tennant, J.-S. Caux, T. Barthel, U. Schollwöck, S. E. Nagler, and C. D. Frost arXiv:1307.4071, pdf, Phys. Rev. Lett. 111, 137205 (2013) Tensor network state methods for strongly-correlated systems in D≥2 dimensions. Strong correlations. - Due to their complexity, several systems of high technological relevance like high-temperature superconductors, frustrated magnets, and systems with topological order are still unsatisfactorily understood. This is due to the fact that the Hilbert space dimension grows exponentially in the system size and that perturbative, mean-field, as well as effective single-particle methods (like DFT) fail for strongly correlated systems. Hope comes from the analysis of entanglement properties, which shows that nature typically only explores a small corner of the exponentially big Hilbert space, and we may hope to capture it with an appropriately chosen reduced set of effective degrees of freedom. Tensor network states. - We are working on tensor network state (TNS) methods, which have become a dominant numerical tool for quantum many-body physics. TNS are designed such that the number of effective degrees of freedom can be tuned (and hence accuracy and computation costs) but does not need to grow exponentially with the system size. In particular, the many-body quantum state is approximated by a network of partially contracted tensors, with bond dimensions of these tensors controlling the accuracy. 
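As a toy illustration of how bond dimensions control accuracy (an illustrative NumPy sketch of ours, not the group's code; function names are hypothetical): a state vector of n qubits can be factored into a matrix-product state (MPS) by successive singular value decompositions, keeping at most chi singular values per bond.

```python
import numpy as np

def to_mps(psi, n, chi):
    """Factor an n-qubit state vector into MPS tensors by repeated
    truncated SVDs, keeping at most chi singular values per bond."""
    tensors = []
    rest = psi.reshape(1, -1)           # (bond, remaining physical dims)
    for _ in range(n - 1):
        bond = rest.shape[0]
        m = rest.reshape(bond * 2, -1)  # split off one physical index
        u, s, vh = np.linalg.svd(m, full_matrices=False)
        k = min(chi, len(s))            # truncate the bond to dimension k
        tensors.append(u[:, :k].reshape(bond, 2, k))
        rest = s[:k, None] * vh[:k]     # absorb singular values to the right
    tensors.append(rest.reshape(rest.shape[0], 2, 1))
    return tensors

def from_mps(tensors):
    """Contract the MPS tensors back into a full state vector."""
    psi = tensors[0]
    for t in tensors[1:]:
        psi = np.tensordot(psi, t, axes=([-1], [0]))
    return psi.reshape(-1)

# A GHZ state has Schmidt rank 2 at every cut, so chi=2 reproduces it exactly.
n = 6
ghz = np.zeros(2**n)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
approx = from_mps(to_mps(ghz, n, chi=2))
print(np.allclose(approx, ghz))
```

For generic (highly entangled) states the required bond dimension grows exponentially with the cut size; the point of TNS methods is that physically relevant states typically need far smaller chi.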
Development of TNS techniques. - Some of our contributions concern the efficient TNS-based evaluation of dynamic response functions. We developed a generic and efficient algorithm that makes it possible to apply TNS to fermionic systems in D≥2 dimensions with only marginal computational overhead by avoiding a global Jordan-Wigner transformation; see [1,2] and references therein. This gives TNS a major advantage for the investigation of fermionic systems in D≥2 dimensions and frustrated magnets because it avoids the negative-sign problem that hampers quantum Monte Carlo for such systems. TNS have been applied in countless studies and produced important new insights. However, due to the scaling of computation costs in the bond dimension m (e.g., O(m^12) for 2D PEPS), the overwhelming majority of studies address one-dimensional (1D) and quasi-1D systems, and practicable m are rather small for systems in D≥2 dimensions. We are pursuing different routes to substantially reduce TNS computation costs while maintaining expressiveness: (a) In [3], we found that this can be achieved by imposing constraints on the canonical polyadic (CP) rank of the tensors. We have also adapted the approach for machine learning tasks, optimizing tree tensor network classifiers with rank-constrained tensors for image classification [4]. The CP rank constraints make it possible to work with networks of high vertex degree. (b) In [5], we investigated how simulations based on the multi-scale entanglement renormalization ansatz (MERA) [Vidal 2007] can be made more efficient through tensor Trotterization (choosing the tensors as circuits of two-qubit gates) and/or a stochastic evaluation of energy gradients (variational Monte Carlo). (c) With the goal of realistically capturing microscopic material details, we are working on a new TNS impurity solver for dynamical mean-field theory (DMFT). 
DMFT can be interpreted as an extension of mean-field theory, where the mean field is now a dynamical single-particle Green's function. A translation-invariant system is split into a small impurity (a cluster), and the rest of the lattice is interpreted as the impurity's environment. The full-lattice problem is reduced to a self-consistent impurity problem, where interactions are restricted to the impurity, and the environment is modeled by a non-interacting bath. [1] "Contraction of fermionic operator circuits and the simulation of strongly correlated fermions" T. Barthel, C. Pineda, and J. Eisert arXiv:0907.3689, pdf, Phys. Rev. A 80, 042333 (2009), also in Virtual Journal of Nanoscale Science and Technology 20, Issue 20 (2009), and in Virtual Journal of Quantum Information 9, Issue 11 (2009) [2] "Unitary circuits for strongly correlated fermions" C. Pineda, T. Barthel, and J. Eisert arXiv:0905.0669, pdf, Phys. Rev. A 81, 050303(R) (2010) [3] "Tensor network states with low-rank tensors" H. Chen and T. Barthel arXiv:2205.15296, pdf [4] "Machine learning with tree tensor networks, CP rank constraints, and tensor dropout" H. Chen and T. Barthel arXiv:2305.19440, pdf, IEEE Trans. Pattern Anal. Mach. Intell. 1 (2024) [5] "Scaling of contraction costs for entanglement renormalization algorithms including tensor Trotterization and variational Monte Carlo" T. Barthel and Q. Miao arXiv:2407.21006, pdf Investigating quantum matter with entanglement renormalization on quantum computers. VQA with Trotterized MERA. - So far, we have strong evidence that equilibrium properties of most strongly correlated systems of practical interest can be simulated efficiently using tensor network states (TNS). However, TNS simulations in D≥2 dimensions are quite expensive. On classical computers, the computational costly steps are tensor contractions which scale with a high power in the TNS bond dimension. 
Recently, we have developed a resource-efficient and noise-resilient variational quantum-classical algorithm (VQA) for the investigation of strongly-correlated quantum matter on quantum computers [1]. It is based on multi-scale entanglement renormalization tensor networks (MERA) and gradient-based optimization. Due to the narrow causal cones of MERA, the algorithm can be implemented on noisy intermediate-scale quantum (NISQ) devices with a relatively small number of qubits and still describe very large systems. The number of required qubits is system-size independent. Translation invariance can be used to make computation costs square-logarithmic in the system size and describe the thermodynamic limit. The basic idea is to leverage the power of quantum computers to implement the MERA tensor contractions. To this purpose, all tensors are Trotterized, i.e., realized as brickwall circuits of two-qubit gates [2]. We have established a polynomial quantum advantage for different critical spin models [3] and are working on various improvements and experimental demonstrations. Absence of barren plateaus for isometric TNS. - A common obstacle for VQA is the occurrence of so-called barren plateaus, where the average energy-gradient amplitude decreases exponentially in the system size. Barren plateaus are prevalent in various VQA like quantum neural networks. VQA with barren plateaus are not trainable as the inability to precisely estimate exponentially small gradients will result in random walks on a basically flat energy landscape. In the recent contributions [4,5], we proved that the variational optimizations of isometric TNS (MPS, TTNS, and MERA) for extensive Hamiltonians with finite-range interactions are free of barren plateaus. The variance of the energy gradient, evaluated by taking the Haar average over the TNS tensors, has a leading system-size independent term and decreases according to a power law in the bond dimension.
For a hierarchical TNS (TTNS or MERA), the variance of the gradient decays exponentially in the layer index of the considered tensor. These results also motivate certain improved initialization schemes. Lastly, we found a direct connection between gradient vanishing and cost-function concentration for quantum circuits [6]. [1] "Quantum-classical eigensolver using multiscale entanglement renormalization" Q. Miao and T. Barthel arXiv:2108.13401, pdf, Phys. Rev. Research 5, 033141 (2023) [2] "Scaling of contraction costs for entanglement renormalization algorithms including tensor Trotterization and variational Monte Carlo" T. Barthel and Q. Miao arXiv:2407.21006, pdf [3] "Convergence and quantum advantage of Trotterized MERA for strongly-correlated systems" Q. Miao and T. Barthel arXiv:2303.08910, pdf [4] "Absence of barren plateaus and scaling of gradients in the energy optimization of isometric tensor network states" T. Barthel and Q. Miao arXiv:2304.00161, pdf [5] "Isometric tensor network optimization for extensive Hamiltonians is free of barren plateaus" Q. Miao and T. Barthel arXiv:2304.14320, pdf, Phys. Rev. A 109, L050402 (2024) [6] "Equivalence of cost concentration and gradient vanishing for quantum circuits: An elementary proof in the Riemannian formulation" Q. Miao and T. Barthel arXiv:2402.07883, pdf, Quantum Sci. Technol. 9, 045039 (2024) Scaling and evolution of entanglement in many-body systems. Entanglement entropies quantify the complexity of quantum matter and its utility for quantum information processing. The computation costs in tensor network simulations are intimately related to entanglement properties of the studied systems. Also, one can use the entanglement structure to optimize the network geometry. In addition to such practical issues, the information-theoretic lens on quantum many-body systems leads to a deeper understanding of the physics. 
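To make the central quantity concrete (a generic NumPy sketch of ours, not code from the cited papers): the von Neumann entanglement entropy of a bipartition A|B is computed from the Schmidt (singular) values of the state vector reshaped into a matrix.

```python
import numpy as np

def entanglement_entropy(psi, dim_a):
    """Von Neumann entropy S = -sum_i p_i log p_i of subsystem A,
    where p_i are the squared Schmidt coefficients across the A|B cut."""
    m = psi.reshape(dim_a, -1)                 # rows: A basis, columns: B basis
    s = np.linalg.svd(m, compute_uv=False)     # Schmidt coefficients
    p = s**2
    p = p[p > 1e-12]                           # drop numerical zeros
    return float(-np.sum(p * np.log(p)))

# A Bell pair carries exactly log(2) of entanglement; a product state, zero.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
print(entanglement_entropy(bell, 2))  # ≈ 0.6931 = log 2
```

Scaling laws such as the area law then describe how this entropy grows with the size of the boundary between A and B.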
We are trying to gain a comprehensive understanding of the entanglement scaling in many-body systems: • In [1], we derived a conformal field theory (CFT) formula for subsystem Rényi entropies and the boundary entropy in ground states of critical 1D systems. • In [2], we established the log-area law for critical fermionic systems with a finite Fermi surface, where the proportionality constant is predicted by the Widom conjecture. We showed that there is no logarithmic correction for 2D critical fermions with a zero-dimensional Fermi surface, and confirmed the area law for critical 2D bosons. • For collective spin models in the thermodynamic limit, we derived the logarithmic scaling of the groundstate entanglement entropy at criticality and developed a perturbative method for 1/N expansions of entanglement entropies [3,4]. • In [5], we elucidated entanglement properties of excited states in Bethe-ansatz integrable systems. • Employing the eigenstate thermalization hypothesis (ETH), we found that the entanglement entropy of (almost) all energy eigenstates is described by a single crossover function [6,7,8]. These functions capture the full crossover from the groundstate entanglement regime at low energies and small subsystem size (area or log-area law) to the extensive volume-law regime at high energies or large subsystem size. For quantum-critical regimes, the crossover functions have universal scaling properties. • In the context of tensor network states (TNS), we proved, for example, that MERA states in D≥2 dimensions obey the entanglement area law and are actually a subclass of PEPS [9]. In [10], I determined the scaling of entanglement properties in thermofield double states for typical 1D systems at finite temperatures. Thermofield double states are employed in TNS simulations at nonzero temperatures.
Field-theoretical arguments and numerical examples show that the cost for the classical simulation at finite temperatures grows only polynomially with the inverse temperature and is system-size independent -- even for quantum critical systems. We are also interested in other information-theoretic aspects. For example, we proved that the time needed to realize measurements to a predefined accuracy scales in general exponentially in the system size [11]. [1] "Entanglement and boundary critical phenomena" H.-Q. Zhou, T. Barthel, J. O. Fjærestad, and U. Schollwöck arXiv:cond-mat/0511732, pdf, Phys. Rev. A 74, 050305(R) (2006), also in Virtual Journal of Nanoscale Science and Technology 14, Issue 23 (2006) [2] "Entanglement scaling in critical two-dimensional fermionic and bosonic systems" T. Barthel, M.-C. Chung, and U. Schollwöck arXiv:cond-mat/0602077, pdf, Phys. Rev. A 74, 022329 (2006) [3] "Entanglement entropy beyond the free case" T. Barthel, S. Dusuel, and J. Vidal arXiv:cond-mat/0606436, pdf, Phys. Rev. Lett. 97, 220402 (2006), also in Virtual Journal of Nanoscale Science and Technology 14, Issue 24 (2006) [4] "Entanglement entropy in collective models" J. Vidal, S. Dusuel, and T. Barthel arXiv:cond-mat/0610833, pdf, J. Stat. Mech. P01015 (2006) [5] "Bound states and entanglement in the excited states of quantum spin chains" J. Mölter, T. Barthel, U. Schollwöck, and V. Alba arXiv:1407.0066, pdf, J. Stat. Mech. P10029 (2014), Special Issue "Quantum Entanglement in Condensed Matter Physics" J. Stat. Mech. 2014 [6] "Eigenstate entanglement: Crossover from the ground state to volume laws" Q. Miao and T. Barthel arXiv:1905.07760, pdf, Phys. Rev. Lett. 127, 040603 (2021) [7] "Scaling functions for eigenstate entanglement crossovers in harmonic lattices" T. Barthel and Q. Miao arXiv:1912.10045, pdf, Phys. Rev. A 104, 022414 (2021) [8] "Eigenstate entanglement scaling for critical interacting spin chains" Q. Miao and T.
Barthel arXiv:2010.07265, pdf, Quantum 6, 642 (2022) [9] "Real-space renormalization yields finitely correlated states" T. Barthel, M. Kliesch, and J. Eisert arXiv:1003.2319, pdf, Phys. Rev. Lett. 105, 010502 (2010) [10] "Typical one-dimensional quantum systems at finite temperatures can be simulated efficiently on classical computers" T. Barthel arXiv:1708.09349, pdf [11] "Fundamental limitations for measurements in quantum many-body systems" T. Barthel and J. Lu arXiv:1802.04378, pdf, Phys. Rev. Lett. 121, 080406 (2018) Nonequilibrium dynamics and driven-dissipative systems. Nonequilibrium dynamics in closed systems. - Fundamental questions concerning the quantum dynamics of many-body systems are about the equilibration and thermalization after the system is driven out of equilibrium. In an early contribution, I derived general preconditions under which subsystems of large quasi-free systems converge to steady states [1]. These steady states are derived from generalized Gibbs ensembles and obey entanglement area laws. The criteria also lead to simple counter-examples, where systems fail to equilibrate. We have also studied out-of-equilibrium dynamics for non-integrable systems of ultracold atoms in optical lattices, e.g., concerning expansion dynamics and localization [2], quench dynamics and equilibration in systems with multiple atom species [3], as well as domain-wall melting in the presence of experimentally relevant hole and spin-flip defects [4]. We are currently working on new tensor network state methods to extend reachable times in the study of non-equilibrium dynamics, starting from low-entangled states. Decoherence, criticality, and phase transitions in driven-dissipative systems. - In several experimental frameworks, a high level of control on complex quantum systems has been accomplished. 
Prominent examples are circuits of superconducting qubits, coupled quantum dots, ultracold atoms in optical lattices or tweezers, which can also be excited into Rydberg states to achieve strong interactions, ions in electromagnetic traps, and polaritons in circuit-QED or semiconductor-microcavity systems. Due to practical constraints and our aim of manipulating these systems efficiently, they are inevitably open in the sense that they are coupled to the environment. Such environment couplings naturally lead to dissipation and decoherence, which pose challenges for modern quantum technology. On the other hand, driving and dissipation in open systems could be designed to stabilize useful states or to drive the system into (novel) phases of matter that may not be accessible in equilibrium. In [5], we analyzed for example the stabilization of Bose-Einstein condensates through driving and dissipation. So far, our work is focused on Markovian quantum systems. Their dynamics is governed by a Lindblad master equation which, in addition to the unitary Hamiltonian part, comprises Lindblad operators that capture the environment couplings: • We have used Trotter decompositions and Lieb-Robinson bound techniques to prove that Markovian dynamics can be simulated efficiently on unitary quantum computers and with system-size independent costs on classical computers [6,7]. We employ tensor network methods that rest on these findings to simulate large open quantum systems. • We showed, for example, that the internal interactions of a dissipative system can hinder decoherence. And, under certain circumstances, the interplay of the internal interactions and dissipation processes can lead to divergent decoherence times, i.e., a critical system with an algebraic coherence decay [8]. 
• In [9], we discussed how block-triangular structures in the Liouvillian super-operator, corresponding to (possibly hidden) dynamical constraints, can preclude driven-dissipative phase transitions, yielding no-go theorems. • Based on work by Prosen and Seligman, we laid out a formalism for the solution of quasi-free and of quadratic open fermion and boson systems [10]. • In [11], we used this formalism to elucidate fundamental differences concerning criticality in open and closed systems, and found that, without symmetry constraints beyond invariance under single-particle basis and particle-hole transformations, all gapped quadratic Liouvillians belong to the same phase. We are currently employing Keldysh field theory and renormalization group analyses to investigate driven-dissipative phase transitions in interacting systems with a focus on fundamental differences to closed systems [5]. [1] "Dephasing and the steady state in quantum many-particle systems" T. Barthel and U. Schollwöck arXiv:0711.4896, pdf, Phys. Rev. Lett. 100, 100601 (2008), also in Virtual Journal of Quantum Information 8, Issue 3 (2008) [2] "Quasiperiodic Bose-Hubbard model and localization in one-dimensional cold atomic gases" G. Roux, T. Barthel, I. P. McCulloch, C. Kollath, U. Schollwöck, and T. Giamarchi arXiv:0802.3774, pdf, Phys. Rev. A 78, 023628 (2008) [3] "Magnetism, coherent many-particle dynamics, and relaxation with ultracold bosons in optical superlattices" T. Barthel, C. Kasztelan, I. P. McCulloch, and U. Schollwöck arXiv:0809.5141, pdf, Phys. Rev. A 79, 053627 (2009) [4] "Domain-wall melting in ultracold-boson systems with hole and spin-flip defects" J. C. Halimeh, A. Wöllert, I. P. McCulloch, U. Schollwöck, and T. Barthel arXiv:1307.0513, pdf, Phys. Rev. A 89, 063603 (2014) [5] "Driven-dissipative Bose-Einstein condensation and the upper critical dimension" Y. Zhang and T. Barthel arXiv:2311.13561, pdf, Phys. Rev. A 109, L021301 (2024) [6] "Dissipative quantum Church-Turing theorem" M. Kliesch, T.
Barthel, C. Gogolin, M. Kastoryano, and J. Eisert arXiv:1105.3986, pdf, Phys. Rev. Lett. 107, 120501 (2011), see also D. Browne "Viewpoint: Quantum simulation hits the open road" Physics 4, 72 (2011) [7] "Quasi-locality and efficient simulation of Markovian quantum dynamics" T. Barthel and M. Kliesch arXiv:1111.4210, pdf, Phys. Rev. Lett. 108, 230504 (2012) [8] "Algebraic versus exponential decoherence in dissipative many-particle systems" Z. Cai and T. Barthel arXiv:1304.6890, pdf, Phys. Rev. Lett. 111, 150403 (2013), covered by Phys.org "Quantum particles find safety in numbers" phys.org, 10/16/2013 [9] "Superoperator structures and no-go theorems for dissipative quantum phase transitions" T. Barthel and Y. Zhang arXiv:2012.05505, pdf, Phys. Rev. A 105, 052224 (2022) [10] "Solving quasi-free and quadratic Lindblad master equations for open fermionic and bosonic systems" T. Barthel and Y. Zhang arXiv:2112.08344, pdf, J. Stat. Mech. 113101 (2022) [11] "Criticality and phase classification for quadratic open quantum many-body systems" Y. Zhang and T. Barthel arXiv:2204.05346, pdf, Phys. Rev. Lett. 129, 120401 (2022) Stochastic dynamics in network systems and epidemic outbreaks. Somewhat surprisingly, we found that tensor network approximations and ideas from quantum Monte Carlo can be adapted to study stochastic nonequilibrium dynamics in classical networks. This is an exciting branch of nonequilibrium statistical physics with a wide range of applications such as spin glasses, technical networks, epidemic dynamics, and social networks. A common technique for the investigation of stochastic dynamics is Markov-chain Monte Carlo (MCMC), where one starts from an initial state and propagates in time according to a stochastic model. However, in this approach, observables converge very slowly when increasing the number of samples. 
Hence, MCMC does not allow for an efficient study of temporal correlations and rare (but consequential) events like big epidemic outbreaks or large-scale outages in technical networks. • While tensor networks are used to encode entangled states in quantum many-body systems, we found that they can be employed to efficiently encode conditional probabilities for state trajectories in stochastic network dynamics [1,2], reducing the time complexity from exponential to polynomial. With simulations of Glauber-Ising dynamics on Boolean networks, we have established that the new tensor network technique has substantial advantages over MCMC methods. It gives access to precise temporal correlations, which are otherwise very difficult to assess in systems with bidirectional dynamics. There are still some limitations: The new tensor network technique is based on the dynamic cavity method which is an extension of Mézard's cavity method for equilibrium systems and, hence, restricted to locally tree-like networks. Also, computation costs increase strongly with increasing vertex degrees. • Recently, I have developed an alternative Monte Carlo algorithm that is not restricted to tree-like graphs and allows us to sample stochastic dynamics conditioned on specific (rare) events. New efficient simulation techniques are needed to properly understand the preconditions of rare catastrophic events and to improve mitigation strategies. For example, large epidemic outbreaks often emerge via a confluence of rare, stochastic events. Potentially dangerous spillover infections from animals into humans are frequent but rarely take off: The introduced infectious disease typically dies quickly or spreads to a small number of local cases and fizzles out. Only a small percentage of these local outbreaks manage to pass bottlenecks in the heterogeneous contact network, to expand, and become widespread epidemics. 
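For contrast, the MCMC baseline discussed above can be sketched in a few lines (an illustrative NumPy sketch of ours, not the group's code; function names are hypothetical): Glauber (heat-bath) dynamics updates one Ising spin at a time according to its local field.

```python
import numpy as np

def glauber_sweep(spins, neighbors, beta, rng):
    """One sweep of Glauber (heat-bath) dynamics on an Ising network:
    a randomly chosen spin is set to +1 with probability
    1/(1 + exp(-2*beta*h)), where h is its local field."""
    n = len(spins)
    for _ in range(n):
        i = rng.integers(n)
        h = sum(spins[j] for j in neighbors[i])   # local field from neighbors
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
        spins[i] = 1 if rng.random() < p_up else -1
    return spins

# Ring of N spins; at low temperature (large beta) the chain orders locally.
rng = np.random.default_rng(0)
N = 50
neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}
spins = rng.choice([-1, 1], size=N)
for _ in range(500):
    glauber_sweep(spins, neighbors, beta=2.0, rng=rng)
print(abs(spins.mean()))  # magnetization of one sampled trajectory
```

Estimating temporal correlations or rare-event probabilities this way requires averaging over very many such trajectories, which is exactly the slow convergence that the tensor network and conditioned-sampling approaches aim to avoid.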
The following link provides codes that employ the dynamic cavity method and a matrix-product representation for edge messages to simulate stochastic dynamics on networks. The matrix product edge messages (MPEM) are generated in an iteration over time, including controlled truncations of low-weight components for efficiency. We'd be happy to help in case of questions and to collaborate. Download: MPEM toolkit version 1.0 [1] "A matrix product algorithm for stochastic dynamics on networks, applied to non-equilibrium Glauber dynamics" T. Barthel, C. De Bacco, and S. Franz arXiv:1508.03295, pdf, Phys. Rev. E 97, 010104(R) (2018) [2] "The matrix product approximation for the dynamic cavity method" T. Barthel arXiv:1904.03312, pdf, J. Stat. Mech. 013217 (2020) have a nice day!
i have a thought about nomura's statement on "growing"

well nomura did state their lvl ups would be something like growing? well remember in the trailer? they had armor. and they have bits and pieces of armor in the pics/scans for BBS. well what if they "grow" into their pieces of armor?

Sep 4, 2007

Hmmm I actually like the sound of that and I hope that is what Nomura meant.

i do to. it sounds true and might be a possibility. you are right on that. i like kingdom hearts game and i play the game everyday when i get home from school. its a thought. i want to play this game soooooooooooooo bad. you dont even know!!!. i wish it would hurry up and come out. this or dissidia final fantasy which ever comes first. yeah i know and im gettin the game so for my birthday and christmas......... my character is kairi on kingdom hearts hello are u there talk to me and im so bored

Oct 17, 2007

welcome to the forums kairifan06. just a tip: don't double post.

Aug 24, 2007

I think the armor pieces are like a reward for every time VAT get to a certain level. Every time they reach a certain point, they get another piece of the armor.

Aug 23, 2007

This is EXACTLY what I said. I like your thinking, kid. Example: you start with one arm of armour and then you level up and get like a breast plate or something.

Oct 14, 2007

Maybe some abilities can be represented by armor/accessories as well.

May 5, 2007

Hm. Seems like a good idea.
I like the sound of it, but didn't Nomura say that the armor in the video was simply to hide VAT's faces?

Apr 15, 2007

it's possible. I guess they can acquire new armor when they defeat bosses or something, and the bonus lvl thingy from KH 2 comes in and they have the new armor plates.... these all seem like very likely ideas.

thanks i try to think logically.

Oct 30, 2007

maybe when you level up to a certain point you can get the armor, but you need to buy it, or maybe the bosses drop the pieces when you beat them, or they appear on you

Apr 14, 2007

Getting a new piece of armor with each level... The idea hadn't even crossed my mind since I heard Nomura saying that... Now I really hope Nomura uses a system similar to that. Then again, at the same time I don't, because I know at one point, I'll think the character looks so awesome I won't want it to change...

Sep 23, 2007

Maybe what he meant by growing may be skill, magic, speed or the keyblade's strength! and the armor was to hide their faces and make the secret ending more gripping!

May 5, 2007

I haven't the foggiest idea what he could mean by growing, but I do like this idea, though I think it might have to do more with the literal meaning of growing, but I have NO idea how. x.x;

Nov 21, 2007

Yea, I think the reward system is a good idea. On the subject of growing, he said you have to match up four obligations to the "situation", so maybe it has to do with choice making. Good choices help you grow into a better wielder. Bad choices go against you.
Eran Nevo Institute of Mathematics, Hebrew University, Givat Ram, Jerusalem 91904, Israel. E-mail: nevo[at]math.huji.ac.il TL: +972-2-6586871 Teaching 2023/24: • Introduction to Mathematics research (Fall 2023/24) [website] • Discrete Mathematics (Spring 2024) [website] □ Combinatorics - with connections to commutative algebra, topology, geometry and convexity. Projects (students are welcome to join!) □ Face numbers and flag-face numbers of simplicial complexes (for example: manifold triangulations) and polytopes (for example: cubical polytopes). Tools include for example: the Stanley-Reisner ring, toric varieties, framework rigidity. Computational complexity aspects of face numbers are also studied. □ Betti tables of monomial ideals (for example: edge ideals) and related invariants (for example: regularity). Topological tools are also available here via Hochster's formula. □ Embeddability of simplicial complexes in manifolds. Tools include for example: the van-Kampen obstruction and higher minor notions. Papers published or accepted Last update: 23. 05. 2024.
The dynamics of dynamic time warping in fMRI data: A method to capture inter-network stretching and shrinking via warp elasticity In neuroimaging research, understanding the intricate dynamics of brain networks over time is paramount for unraveling the complexities of brain function. One approach commonly used to explore the dynamic nature of brain networks is functional connectivity analysis. However, while functional connectivity offers valuable insights, it fails to consider the diverse timescales of coupling between different brain regions. This gap in understanding leaves a significant aspect of brain dynamics unexplored in neuroimaging research. We propose an innovative approach that delves into the dynamic coupling/connectivity timescales of brain regions relative to one another, focusing on how brain region couplings stretch or shrink over time, rather than relying solely on functional connectivity measures. Our method introduces a novel metric called “warping elasticity,” which utilizes dynamic time warping (DTW) to capture the temporal nuances of connectivity. Unlike traditional methods, our approach allows for (potentially nonlinear) dynamic compression and expansion of the time series, offering a more intricate understanding of how coupling between brain regions evolves. Through the adaptive windows employed by the DTW method, we can effectively capture transient couplings within varying connectivity timescales of brain network pairs. In extensive evaluations, our method exhibits high replicability across subjects and diverse datasets, showcasing robustness against noise. More importantly, it uncovers statistically significant distinctions between healthy control (HC) and schizophrenia (SZ) groups through the identification of warp elasticity states. 
These states are cluster centroids, representing the warp elasticity across subjects and time, offering a novel perspective on the dynamic nature of brain connectivity, distinct from conventional metrics focused solely on functional connectivity. For instance, controls spend more time in a warp elasticity state characterized by timescale stretching of the visual domain relative to other domains, suggesting disruptions in the visual cortex. Conversely, patients show increased time spent in a warp elasticity state with stretching timescales in higher cognitive areas relative to sensory regions, indicative of prolonged cognitive processing of sensory input. Overall, our approach presents a promising avenue for investigating the temporal dynamics of brain network interactions in functional magnetic resonance imaging (fMRI) data. By focusing on the elasticity of connectivity timescales, rather than adhering to functional connectivity metrics, we pave the way for a deeper understanding of neuropsychiatric disorders in neuroscience research. 1 Introduction There is a growing body of evidence supporting the notion that the brain functions as a highly intricate interconnected system (Achard et al., 2006; Calhoun et al., 2014; Hutchison et al., 2013; Iraji et al., 2020; Menon, 2011; O’Neill et al., 2018; Sporns, 2016; Van den Heuvel & Sporns, 2013; Yaesoubi et al., 2015). In the field of neuroimaging, researchers typically explore this interconnectedness through functional connectivity (FC) analysis. FC investigates and characterizes patterns of coordinated activity within the brain by assessing temporal similarities among different brain regions. This approach has been extensively applied using functional magnetic resonance imaging (fMRI) data (Biswal et al., 1995; Chang & Glover, 2010; Fox et al., 2005). Traditional approaches such as Pearson correlation or partial correlation analyze static connectivity. 
Functional network connectivity (FNC) studies temporal similarity among overlapping whole-brain FC networks (Jafri et al., 2008). Recent advancements enabled the investigation of time-varying changes in FC over time (Allen et al., 2014; Chang & Glover, 2010; Hutchison et al., 2013; Sakoğlu et al., 2010), termed dynamic functional connectivity (Hutchison et al., 2013), time-varying functional connectivity (TVFC) (Lurie et al., 2020), or time-resolved functional connectivity (trFC) (Calhoun et al., 2014; Fukushima et al., 2018; Hutchison et al., 2013; Iraji et al., 2020; Lurie et al., 2020). The term “time-resolved functional network connectivity” (trFNC) is typically used in the context of studying trFC within intrinsic networks derived from fMRI data. This typically involves techniques such as independent component analysis to estimate these intrinsic networks (Calhoun et al., 2001; Du et al., 2020). Several studies have studied trFC/trFNC utilizing methods such as sliding window Pearson correlation (SWPC) (Allen et al., 2014; Faghiri et al., 2023; Leonardi & Van De Ville, 2015; Sakoğlu et al., 2010), phase synchrony (PS), (Glerean et al., 2012; Honari et al., 2021; Pedersen et al., 2018), wavelet coherence (Chang & Glover, 2010; Yaesoubi et al., 2015), and coactivation patterns (Eickhoff et al., 2011; Liu et al., 2018). SWPC pairs the sample Pearson correlation estimator with a sliding window to estimate temporally resolved Pearson correlation. This approach offers simplicity and computational efficiency. Wavelet cross-coherence is an effective method for estimating coherence, which is essentially the frequency analogue to correlation, along with phase lags (Chang & Glover, 2010; Torrence & Compo, 1998), which may occur in fMRI (Friston, 2009). Despite the advantages of wavelet coherence, SWPC remains popular due to its established nature and ease of use, providing valuable insights into dynamic brain function. 
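Since SWPC is the baseline that the later discussion contrasts against, here is a minimal Python sketch of a sliding-window Pearson correlation (the function name, window length, and toy signals are illustrative choices of ours, not code from the cited studies):

```python
import numpy as np

def swpc(x, y, win):
    """Sliding-window Pearson correlation between two 1-D series.

    Returns one correlation per window position (length len(x) - win + 1).
    """
    out = np.empty(len(x) - win + 1)
    for t in range(len(out)):
        # Pearson correlation over the samples inside the current window
        out[t] = np.corrcoef(x[t:t + win], y[t:t + win])[0, 1]
    return out

rng = np.random.default_rng(0)
shared = rng.standard_normal(300)            # common driving signal
a = shared + 0.3 * rng.standard_normal(300)  # two noisy observations of it
b = shared + 0.3 * rng.standard_normal(300)
r = swpc(a, b, win=45)
print(r.shape)  # (256,)
```

Note the fixed window: every estimate assumes the two series are coupled sample-for-sample inside the window, which is exactly the assumption an elastic alignment later relaxes.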
While functional connectivity analysis provides useful information about brain function and its connectivity, traditional methods such as SWPC have notable limitations in their study of brain interconnectedness. SWPC does not account for the varying time lag between blood-oxygenation-level-dependent fMRI signals, disregarding potential variations due to factors such as brain region location and neural processes (Friston, 1994; Lindquist et al., 2009). Moreover, SWPC fails to consider FC changes induced by physiological noise and motion artifacts over time, leading to potentially inaccurate connectivity estimates (Rangaprakash et al., 2018). Additionally, SWPC’s reliance on fixed window types and sizes significantly influences connectivity results (Hutchison et al., 2013). Dynamic time warping (DTW) offers a solution to some of these issues by aligning time series with variable temporal shifts, improving static functional connectivity estimation (Linke et al., 2020; Meszlényi et al., 2017; Philips et al., 2021). Meszlényi et al. (2017) demonstrated DTW’s superior ability to capture connectivity amidst global noise and transient interactions compared with static correlation, also showing its reduced sensitivity to global signal regression, which shifts the distribution of Pearson correlation coefficients (Murphy et al., 2009). DTW is a dynamic programming algorithm that aligns pairs of time series to each other, obtaining a measure of similarity by allowing adaptive “elastic” transformations of the time series pairs to find similar patterns (Senin, 2008). This measure of similarity is obtained by computing the minimum distance cost of alignment between pairs of time series, within an allowable window of the signal pairs (Sakoe & Chiba, 1978). The distance cost of DTW is obtained by finding the minimal sum of the distance (e.g., Euclidean, city-block) costs of warping two signals to each other. Meszlényi et al.
(2017) used the distance cost from aligning pairs of time series as the functional connectivity measure in place of correlation coefficients (Linke et al., 2020). They focused on comparing the robustness of DTW distances and correlation coefficients to global signal regression, and the test–retest reliability of the two measures in three cohorts of children, adolescents, and adults with and without autism spectrum disorders. They showed that DTW outperforms classic correlation coefficients in these studies and that more group differences could be detected with DTW than with correlation. While the DTW study performed by Meszlényi et al. (2017) is helpful, it provides only a single measure of similarity between two brain networks and not the underlying nature of the complexities of their couplings. Also, methods that estimate trFC often overlook the crucial possibility of changes in coupling timescales between brain region pairs. FC methods such as SWPC are limited in their ability to capture variations in coupling timescales between brain regions. They fail to account for scenarios where one brain region may exhibit dynamic coupling with another at different temporal resolutions. For instance, consider a scenario where 10 seconds of one brain network or region is dynamically coupled to 15 seconds of another brain network or region. In such cases, windowed FC measures such as SWPC and instantaneous methods such as instantaneous phase synchrony would fail to capture these intricate dynamic coupling resolutions. In contrast, DTW has emerged as a promising approach for FC analysis, as it effectively allows for nonlinear mapping and dynamic elastic transformations between time series pairs. Building upon this capability, DTW can be extended to study the stretching and shrinking of coupling or connectivity timescales between brain region time series.
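To make the timescale limitation concrete, the following toy sketch (our own illustration, not taken from the cited studies) builds a second signal whose middle section is a temporally stretched copy of the first; a sample-for-sample correlation sees the identical head as perfectly coupled but treats the stretched middle as misalignment:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(150))   # slowly varying random signal

# y reproduces x, but its middle 50 points are stretched to 75 points
# (akin to 10 s of one region being coupled to 15 s of another)
query = np.concatenate([np.arange(0, 50),
                        np.linspace(50, 99, 75),
                        np.arange(100, 150)])
y = np.interp(query, np.arange(150), x)   # len(y) == 175

r_head = np.corrcoef(x[:50], y[:50])[0, 1]       # identical samples
r_mid = np.corrcoef(x[50:100], y[50:100])[0, 1]  # stretched vs. original
print(r_head, r_mid)
```

An elastic alignment such as DTW can map the 75 stretched samples back onto the 50 original ones, whereas any fixed sample-for-sample measure cannot.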
We introduce an innovative approach, utilizing the adaptive “elastic” and varying windows provided by DTW, to dissect the intricate interplay between brain regions. Central to our methodology is the concept of “warp deviation,” which we define as the differential in warping paths, produced from DTW, between time series pairs. This concept allows us to estimate the relative timings of brain networks. Building upon warp deviation, we further conceptualize “warp elasticity”—a novel metric representing the degree of temporal expansion (stretching) or contraction (shrinking) of brain regions couplings relative to each other. By computing the derivative of warp deviation, we can effectively quantify warp elasticity, thus illuminating dynamic fluctuations in brain connectivity timescales at specific moments. Our approach represents a significant advancement in neuroimaging studies by providing a nuanced understanding of asymmetric patterns, where stretching of the temporal coupling in one brain region (indicating high cognitive processing) corresponds to shrinking of the temporal coupling (potentially indicating slower processing) in another. Here, we hypothesize that temporal stretching in the connectivity timescale of one brain region, potentially indicative of high-level processing, is counterbalanced by temporal shrinking in the connectivity timescale of another, suggesting low-level processing of brain networks. This methodology not only promises to unravel the complex dynamics of brain connectivity but also provides crucial insights into the neurological underpinnings of these processes. We demonstrate the replicability of the warp elasticity across subject samples and datasets and demonstrate its robustness to noise. Additionally, we observed statistically significant differences between schizophrenia (SZ) and healthy control (HC) groups in the dynamics of the clusters obtained from the warp elasticity. 
These findings highlight the potential of DTW as a valuable tool for studying brain connectivity dynamics. In our study, we utilized independent component analysis (ICA) to extract intrinsic networks from fMRI data. These brain networks represent distinct functional patterns within the brain. Subsequently, we employed DTW to analyze these networks and computed the warp elasticity for each pair of brain networks.

2 Method

2.1 Dynamic time warping

DTW is an algorithm employed to find an optimal alignment between two time series signals by stretching them in time while minimizing the distance between corresponding points in the two series (Paliwal et al., 1982; Sakoe & Chiba, 1978). This involves first computing a distance matrix between the two time series signals, where each entry represents the distance between two points in the two series. We refer to this matrix as the index-by-index distance matrix, since the distance is computed for each index pair between the time series pair. The matrix is then traversed from the last index pair to find the optimal path that minimizes the cumulative distance between corresponding points in the two series. The term used to represent the total accumulated distance is the “distance cost.” The choice of distance metric for calculating the distance cost in DTW includes options such as Euclidean distance, city-block distance, symmetric Kullback-Leibler divergence, and squared Euclidean distance. In our study, we opted for the Euclidean distance due to its simplicity and computational efficiency. Furthermore, our decision was guided by previous studies in the field, which also utilized the Euclidean distance metric; this choice ensures consistency with prior research in fMRI studies (Linke et al., 2020; Meszlényi et al., 2017; Philips et al., 2021). The following steps outline the procedure for executing the DTW algorithm. Let $X$ and $Y$ be discrete time series of length $N$ and $M$, respectively.

1. Compute the pairwise Euclidean distance matrix between all pairs of points in $X$ and $Y$.
2. Initialize the cumulative distance matrix $D$ with all entries set to infinity, except for $D[0,0]=0$.
3. For each index $i$ in the range $[1,N]$ and each index $j$ in the range $[1,M]$, compute the cumulative distance $D[i,j]$ as follows: $D[i,j]=d(X[i],Y[j])+\min(D[i-1,j],\,D[i,j-1],\,D[i-1,j-1])$, where $d(X[i],Y[j])$ is the distance between the $i$-th point in $X$ and the $j$-th point in $Y$.
4. The optimal warping path is found by backtracking through $D$ from $D[N,M]$ to $D[0,0]$, following the path with the lowest cumulative distance.
5. The warping path can be used to align the two time series signals by warping one of them according to the indices in the path.

2.2 Warping path

Prior to applying the DTW algorithm, each time series is assumed to be mapped to the other by consecutive indexes. For two time series of equal length, TS-A and TS-B, index 1 of TS-A is mapped to index 1 of TS-B, index 2 of TS-A to index 2 of TS-B, and so on in parallel until the end, as shown in Figure 1(A). In addition to the distance cost, the DTW algorithm provides another output: the optimal warping path. The optimal warping path is determined by following the path of the lowest accumulated distance within the index-by-index distance matrix. It is commonly referred to as the “warping path” of DTW and is shown with the green line within the heatmap matrix in Figure 2. This warping path is essentially a sequence of indexes from the time series pair, depicting how each time series is stretched or relatively shrunk; it effectively captures the extent of stretching and relative shrinking needed to align the time series pair. It is important to note that we use the term “relative shrinking” because DTW does not compress time series, it only stretches them.
However, when one time series is stretched, it appears as a shrink relative to the time series that remains unstretched. It is important to clarify that the term “stretching” in this context refers to the amount of adjustment needed by the DTW algorithm to align the time series accurately. This should not be confused with the genuine stretching or shrinking occurring within the time series data, which is what we aim to estimate and understand through our method.

The DTW algorithm aligns the time series to maximize the similarity between the signals. This means that the algorithm can map pairs of signals to various indices of each other if doing so optimizes the similarity between the time series pair, as shown in Figure 1(B). However, the DTW algorithm also incorporates a warping window constraint. This constraint sets a limit on the extent of stretching and mapping length that can occur during the DTW alignment. For instance, without any window constraint, index 10 of time series TS-A could be mapped to index 30 of time series TS-B. Yet, when a window constraint of 10 is applied, the highest index in TS-B to which index 10 of TS-A can be mapped is 20. This constraint helps control the degree of distortion between the aligned time series while still allowing for meaningful alignments. Importantly, the warping window does not dictate a predetermined level of stretching or mapping length; it represents the upper limit on the extent of stretching and mapping that is permissible within the alignment process. The effectiveness of DTW hinges on the careful choice of the window constraint. It is crucial to select a window size that strikes a balance: it should be sufficiently large to capture complex nonlinear dynamics accurately, yet not so large that it introduces less meaningful alignments of time series pairs. In this study, we followed an approach similar to that of Meszlényi et al. (2017) in setting the window for our DTW analysis.
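The recursion of Section 2.1 together with the warping-window constraint described above can be sketched in Python as follows (a minimal 1-D implementation with a Sakoe-Chiba-style band; in practice one would use an optimized DTW library, and all names here are ours):

```python
import numpy as np

def dtw(x, y, window):
    """DTW with a band constraint; returns (distance cost, warping path)."""
    n, m = len(x), len(y)
    w = max(window, abs(n - m))              # band must contain the diagonal
    D = np.full((n + 1, m + 1), np.inf)      # cumulative distance matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            cost = abs(x[i - 1] - y[j - 1])  # Euclidean distance in 1-D
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # backtrack from (n, m) to (1, 1) along the cheapest predecessors
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]

t = np.linspace(0, 2 * np.pi, 60)
cost, path = dtw(np.sin(t), np.sin(t + 0.5), window=10)
print(path[0], path[-1])  # (0, 0) (59, 59)
```

With `window=10`, index 10 of the first series can only be matched to indices 0 through 20 of the second, mirroring the example in the text.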
In fMRI, a common practice is to apply a bandpass filter with a lower limit of 0.01 Hz. This threshold corresponds to a full signal cycle of 100 seconds. Anticorrelations observed in fMRI data suggest inverse relationships between brain region activities. These anticorrelations are rather weak, transient, and less stable (Chang & Glover, 2010; Shehzad et al., 2009). Several studies have also suggested that physiological noise and global signal regression could introduce anticorrelations (Chang et al., 2009; Chang & Glover, 2010; Murphy et al., 2009; Saad et al., 2012; Shmueli et al., 2007; Wise et al., 2004). For such anticorrelations to occur, the signals might be out of phase by half a cycle, equating to a 50-second lag. Thus, analyzing the data over a 100-second window is logical, as it encompasses a full cycle of the signal, allowing for the detection of both synchronous and out-of-phase relationships (Meszlényi et al., 2017). Rather than using an approximation of 100 seconds for the DTW window, we use the activity signal spectrum to guide the selection of the window size $N$, given the sampling frequency $F_s$ (1/0.72 s ≈ 1.39 Hz for HCP and 1/2 s = 0.5 Hz for fBIRN). Because of the bandpass filter applied to the data, the high-pass cutoff $f_{3dB}$ should be higher than 0.01 Hz, which results in a minimum $N$ value of ~123 for HCP and ~45 for fBIRN. Note that 123 and 45 are numbers of time points rather than seconds.

2.3 Warp deviation

Warp deviation quantifies the disparity between the warping paths generated by DTW for a pair of signals. In simpler terms, it measures the temporal distortion of one signal relative to another. The warp deviation is calculated as the temporal difference between the warping paths $i_x$ and $i_y$ of two time series $x$ and $y$, respectively, at a specific time $t$.
The formula for the warp deviation of the time series $x$ is given below:

$WD_x(t) = i_x(t) - i_y(t)$

The warp deviation of the time series $y$ is simply the negation of $WD_x(t)$. When $WD_x(t)$ has positive values, it indicates a delayed response of time series $x$ to $y$: if, at a given time $t$, $x$ delays $y$, the warping path of $x$, $i_x(t)$, will have time indexes higher than $i_y(t)$. This observation is significant because such delays could occur due to the stretching of time series $x$, causing it to lag time series $y$ for subsequent responses. On the contrary, negative $WD_x(t)$ values indicate that time series $x$ progresses faster than $y$ in its temporal alignment. This could also suggest a cumulative shrinkage occurring in time series $x$ with respect to $y$. In Figure 2, the initial mapping path is depicted by the black line, while the DTW warping path is represented by the green line. To obtain the warp deviation, we calculate the difference between the TS-A side and the TS-B side of the DTW warping path (green line in Fig. 2), taking TS-A as reference for the upper triangle and negating it when using TS-B as reference for the lower triangle. To visually represent the warp deviation, we utilize a colormap within the heatmap matrix depicted in Figure 2. In this matrix, the upper triangle corresponds to the warp deviation linked to TS-B, whereas the lower triangle represents the warp deviation related to TS-A. Positive warp deviation values (red stripes) show the time by which one time series delays the other, while negative warp deviation values (blue stripes) show the time by which one time series progresses faster than the other.
This approach is logical because the warp deviation reflects the delay or lead of one time series relative to another, resulting from stretching or shrinking. As cumulative stretching leads to a delay and cumulative shrinking leads to a lead, taking the derivative of this delay or lead allows us to estimate the stretching and shrinking. The warp elasticity of time series $x$ is expressed as follows:

$WE_x(t) = \frac{d}{dt} WD_x(t)$

Since the unit of $WD_x(t)$ is time, its temporal derivative is dimensionless. This signifies that the warp elasticity represents stretching and shrinking factors. For instance, a factor of 0.5 indicates a 50% stretching of one time series relative to another, while −0.5 indicates a 50% shrinkage. A factor of 0 denotes no shrinkage or stretching of one time series relative to the other. We conducted a simulation study to investigate how well warp elasticity estimates the stretching and shrinking temporal adaptation of two time series, mimicking the temporal adaptation of the timescale of functional connectivity among brain regions. Conceptually, one could consider that there might be adaptation in the communication scale (which for fMRI would be a combination of neural and hemodynamic interactions) driven by higher-level processing networks, whereas lower-level processing networks, such as sensory networks, would have a less adaptive nature. For our toy example, we generated two signals. The first signal, which we refer to as TS-A, consisted of a random signal with a length of 210 time points, sampled at a rate of 1 second. To analyze the adaptation of processing speed, we divided this TS-A signal into seven equal segments, each containing 30 time points, collectively covering the 210-timepoint duration. To create the second signal, denoted TS-B, we resampled each of these seven segments to specific lengths: 20, 40, 20, 30, 40, 20, and 40 time points, respectively.
The sampling factor that yields the new lengths is computed by dividing the new length of time points by the initial timepoint length. Thus, the sampling factors for these seven segments are 0.67, 1.33, 0.67, 1, 1.33, 0.67, and 1.33, respectively. These sampling factors represent the stretching and shrinking of TS-B. Subsequently, we concatenated these resampled segments to form the TS-B time series. Random noise was introduced to time series TS-A and TS-B, followed by band-limiting both TS-A and TS-B to a frequency range of 0.01 to 0.15 Hz, aligning with the parameters utilized in our fMRI analysis. In Figure 1(C), we show TS-A and TS-B while highlighting the corresponding segments of resampling occurring in TS-B using black bars. In Figure 1(D), the warp deviation of TS-A is depicted. It is observed that when the sampling factor is less than 1, indicating shrinking in TS-B and relative stretching in TS-A, there is a consistent rise in the warp deviation of TS-A. Conversely, when the sampling factor is greater than 1, denoting a relative shrinkage in TS-A, there is a continual decrease in the warp deviation. Moreover, when the sampling factor is 1, indicating no change in size, a constant value is maintained within that segment. As anticipated, the stretching and shrinking patterns in the segments are reflected in the rate of change of the warp deviation. In Figure 1(E), the warp elasticity of TS-A is illustrated alongside the ground truth elasticity. In our ground truth simulation, we generated 1000 random time series pairs with the same stretch and shrink structure as described earlier. For each pair, we computed the warp elasticity measure independently. Subsequently, we averaged the warp elasticity values across all these random samples. Finally, we compared these average warp elasticity values with the ground truth, allowing us to validate the accuracy of our methodology across various random scenarios.
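The segment-wise resampling used to build TS-B can be sketched as follows (we use linear interpolation as the resampling method, which is an assumption on our part; the noise and band-limiting steps described in the text are omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(7)
ts_a = rng.standard_normal(210)              # TS-A: 7 segments of 30 points
new_lens = [20, 40, 20, 30, 40, 20, 40]      # target segment lengths in TS-B

segments = []
for k, m in enumerate(new_lens):
    seg = ts_a[30 * k:30 * (k + 1)]
    # linearly resample the 30-point segment to m points
    segments.append(np.interp(np.linspace(0, 29, m), np.arange(30), seg))
ts_b = np.concatenate(segments)
print(len(ts_b))  # 210 = 20 + 40 + 20 + 30 + 40 + 20 + 40
```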
The ground truth elasticity is calculated by determining how many time points a segment in one time series has been stretched or shrunk and dividing that by the length of the segment in that time series. For instance, in the initial segment of our example, where TS-B was sampled with a factor of 0.67, TS-B has 20 time points while TS-A has 30 time points. Considering TS-A, there is a relative stretch of 10 time points compared with TS-B. Hence, the actual stretch factor for TS-A in that segment is +0.33 [10/30]. Positive values denote stretching, and negative values indicate shrinking. In Figure 1(E), the ground truth elasticity factor of TS-A is represented by a black dashed line, and the estimated warp elasticity is shown in red. As depicted in Figure 1(E), the correlation coefficient between the ground truth elasticity and the warp elasticity measure is 0.88, indicating a strong correlation between the two. This simulation example effectively demonstrates how our method, the warp elasticity, can reveal time-varying stretching and shrinking behaviors in signals as they adapt to one another. This sheds light on the intricate dynamics of signal alignment, showcasing the potential of our approach to understanding the nuanced interactions between signals. Since the warp elasticity estimates are functions of sampling time, we account for changes in warp elasticity relative to sampling time by expressing it in seconds. For instance, consider a 30-second segment with a stretching factor of 0.5. This means that for every second, the time series is stretched by 0.5 seconds, leading to a total stretching of 15 seconds over the 30-second segment, regardless of the sampling rate. Thus, expressing warp elasticity values in seconds ensures consistency, facilitating comparisons across datasets with different sampling rates.
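The ground-truth elasticity factors for the toy example, together with their expression in seconds (using the toy signals' 1-second sampling), follow directly from the segment lengths; this small check reproduces the +0.33 value worked out above:

```python
import numpy as np

len_a = np.full(7, 30)                          # TS-A segment lengths
len_b = np.array([20, 40, 20, 30, 40, 20, 40])  # resampled lengths in TS-B

# points gained (+) or lost (-) by TS-A relative to TS-B,
# divided by the TS-A segment length
elasticity_a = (len_a - len_b) / len_a
print(np.round(elasticity_a, 2))  # first segment: +0.33 = 10/30

# with 1-s sampling, the total stretch in seconds per 30-s segment
stretch_seconds = elasticity_a * 30
print(stretch_seconds)  # first segment: 10 s of stretch
```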
This methodology becomes particularly relevant in our study, where we analyze datasets with different temporal resolutions. For example, the fBIRN dataset has a TR of 2 seconds, whereas the HCP dataset has a TR of 0.72 seconds. By standardizing warp elasticity in relation to sampling time, we make the results from these different datasets comparable and consistent. This approach is a key aspect of our subsequent analyses, as it also lets us view the estimations in terms of seconds instead of unitless constants.

2.5 fMRI data

This study received approval from an ethics board, and all participants provided consent by signing a form approved by the institutional review board (IRB). Our first dataset included resting-state fMRI data collected from 827 subjects via the Human Connectome Project (HCP) database (Van Essen et al., 2012, 2013). Specifically, we analyzed second-session scans acquired using a Siemens Skyra 3T scanner with a multiband accelerated, gradient-echo echo-planar imaging (EPI) sequence. The scanning parameters were a repetition time (TR) of 0.72 seconds, 72 slices, an echo time (TE) of 58 ms, and a flip angle of 90°. A voxel size of 2 × 2 × 2 mm was used to acquire 1200 time points of fMRI data for each subject. The HCP dataset served two primary objectives in our study. First, it established the foundational basis for evaluating the noise sensitivity of our metric: the HCP dataset offers notably high quality and meticulous preprocessing standards, coupled with an extensive participant pool for assessing the robustness of our metric. Moreover, the dataset's widespread adoption within the neuroscience community facilitates cross-study comparisons, accentuating its suitability for our research.
Second, we used data from SZ patients and controls collected via the Function Biomedical Informatics Research Network (fBIRN) study. To be included, participants needed to meet specific criteria: their head motion during scans had to be equal to or less than 3° and 3 mm, and their functional data had to demonstrate successful normalization by aligning individual brain masks with a group mask. The scans were collected at a repetition time (TR) of 2 seconds. These conditions yielded a total of 160 HCs with an average age of 37.04 ± 10.86 years (range 19 to 59 years; 45 female, 115 male) and 151 patients diagnosed with SZ with an average age of 38.77 ± 11.63 years (range 18 to 62 years; 36 female, 115 male). The HCs and SZs were matched in terms of age, gender distribution, and mean framewise displacement during scans (age: p = 0.1758; gender: p = 0.3912; mean framewise displacement: p = 0.9657). Notably, the HCs had no history of past or current psychiatric disorders based on the Structured Clinical Interview for Diagnostic and Statistical Manual of Mental Disorders assessment, nor did they have first-degree relatives diagnosed with Axis-I psychotic disorders. The SZ patients, while diagnosed with schizophrenia, were clinically stable at the time of their scans. We employed the fBIRN dataset for group comparisons, aiming to assess the capability of our method in identifying significant differences between the SZ and HC groups.

2.6 fMRI processing

Functional magnetic resonance imaging (fMRI) data require extensive preprocessing to correct for various sources of noise and artifacts before analysis.
The preprocessing steps commonly applied in fMRI studies include slice timing correction, realignment, spatial normalization, and spatial smoothing (Esteban et al., 2019; Penny et al., 2011; Turner et al., 1998). Following preprocessing, we implemented the NeuroMark pipeline, a fully automated spatially constrained ICA, on the preprocessed fMRI data (Du et al., 2020). Using the neuromark_fMRI_1.0 template, we generated 53 intrinsic connectivity networks (ICNs) for each subject. These ICNs are grouped into brain domains including Subcortical (SC), Auditory (Aud), Sensorimotor (SM), Visual (Vis), Cognitive Control (CC), Default Mode (DM), and Cerebellum (Cb). To further enhance the quality of these ICNs, we applied detrending and despiking to remove drifts, abrupt fluctuations, and large artifacts that may not have been removed by the initial preprocessing steps, improving the accuracy and reliability of subsequent analyses. To optimize data quality, the ICN time series were bandpass filtered in the range of 0.01 to 0.15 Hz, a standard frequency range in fMRI research that is relevant for identifying brain domain BOLD signals (Braun et al., 2012; Yaesoubi et al., 2015). An infinite impulse response (IIR) filter was designed using the butter function in MATLAB and applied via the filtfilt function to ensure zero phase shift, preserving phase information that can otherwise be nonlinearly altered by IIR filters (Caballero-Gaudes & Reynolds, 2017; Oppenheim, 1999; Proakis, 2007). Finally, z-scoring was performed on the ICNs.

2.7 Analysis

In our research, we conducted a series of rigorous analyses to establish the credibility of warp elasticity as a robust metric. Our primary objectives encompass demonstrating the metric's replicability and robustness, while also showcasing the novel insights it offers for interpreting fMRI data.
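For illustration, the zero-phase bandpass step described in Section 2.6 can be sketched in Python with SciPy's butter and filtfilt, the analogues of the MATLAB functions named above (a minimal sketch; the filter order of 5 and the function name are our assumptions, not stated in the text):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_zscore(ts, tr, low=0.01, high=0.15, order=5):
    """Zero-phase Butterworth bandpass followed by z-scoring.
    filtfilt applies the filter forward and backward, giving zero
    phase shift, as described in the text."""
    nyq = 0.5 / tr                       # Nyquist frequency from the TR
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    filtered = filtfilt(b, a, ts)
    return (filtered - filtered.mean()) / filtered.std()

# Usage on a synthetic ICN time series at the fBIRN TR of 2 seconds.
rng = np.random.default_rng(0)
icn = bandpass_zscore(rng.standard_normal(1000), tr=2.0)
print(abs(icn.mean()) < 1e-9, abs(icn.std() - 1.0) < 1e-9)  # True True
```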
Through these analyses, we affirm warp elasticity as a reliable and viable metric for neuroimaging analysis. Furthermore, we showcase how warp elasticity reveals promising patterns in fMRI, thus contributing to an enhanced comprehension of neural dynamics. In our study, we utilize k-means clustering to group the data derived from warp elasticity estimations. This technique helps us identify recurring temporal patterns in the warp elasticity of component pairs across subjects, following the methodology described by Allen et al. (2014). Such identified patterns are referred to as "functional connectivity states" in FC analysis; in this study, we refer to them as "warp elasticity (WE) states." By analyzing these states, we can track the transitions between them and quantify the duration subjects spend in each state. This is particularly insightful as it may highlight differences between healthy individuals and patients with brain disorders (Du et al., 2018; Faghiri et al., 2023; Fu et al., 2018; Kazemivash & Calhoun, 2022; Kazemivash et al., 2023; Kim et al., 2017; Sang et al., 2023; Sun et al., 2021). Clustering is also useful for checking the consistency of trFNC measures and comparing different trFNC methods, ensuring reliable and robust results in neuroscience research (Abrol & Calhoun, 2022; Abrol et al., 2017; Faghiri et al., 2022; Motlaghian et al., 2023; Romanello et al., 2022; Wiafe et al., 2023). The WE states represent global recurring stretching and shrinking patterns between brain network pairs. In essence, each state encapsulates a recurring stretching or shrinking pattern exhibited by brain region pairs throughout the fMRI scan. These patterns occur at different times within the scan, offering insight into the recurring global stretching and shrinking dynamics across the brain.
2.7.1 Comparison with existing trFNC measures

Our study is centered on investigating the stretching and shrinking parameter, which is not addressed by existing trFNC methods such as SWPC or PS. However, we offer a simulation to compare warp elasticity with these trFNC measures across pairs of time series exhibiting stretching and shrinking. This aims to emphasize the differences between warp elasticity and trFNC measures, specifically SWPC and PS. SWPC is calculated using various window sizes (15, 45, 75, 105, and 135 seconds) to comprehensively assess its capability to detect or account for stretching and shrinking across different coupling timescales. The SWPC of a time series pair, denoted as $x(t)$ and $y(t)$, for a selected window $\Delta$, is expressed as follows:

$$\mathrm{SWPC}(t) = \frac{\sum_{\tau = t - \Delta/2}^{t + \Delta/2} \left(x(\tau) - \bar{x}_t\right)\left(y(\tau) - \bar{y}_t\right)}{\sqrt{\sum_{\tau = t - \Delta/2}^{t + \Delta/2} \left(x(\tau) - \bar{x}_t\right)^2} \sqrt{\sum_{\tau = t - \Delta/2}^{t + \Delta/2} \left(y(\tau) - \bar{y}_t\right)^2}},$$

where $\bar{x}_t$ and $\bar{y}_t$ are the sample means of $x$ and $y$ within the window centered at $t$. PS is computed by determining the difference in the estimated instantaneous phase of a signal pair using the Hilbert transform. To ensure comparability with SWPC, a function is applied to constrain the phase difference results between −1 and 1. The cosine function is useful for this purpose as it preserves anticorrelations, avoids phase unwrapping, and considers phase ambiguity (Honari et al., 2021). Given a time series pair, denoted as $x(t)$ and $y(t)$, along with their corresponding estimated instantaneous phases $\varphi_x(t)$ and $\varphi_y(t)$ obtained via the Hilbert transform, the phase synchrony between $x(t)$ and $y(t)$ is expressed as follows:

$$\mathrm{PS}(t) = \cos\left(\varphi_x(t) - \varphi_y(t)\right).$$

Given that PS is sensitive to the selection of an appropriate filter bandwidth, as per the Bedrosian theorem (Bedrosian, 1962; Xu & Yan, 2006), we compute PS for several bandwidths: 0.01–0.04 Hz, 0.03–0.07 Hz, 0.05–0.09 Hz, 0.07–0.11 Hz, and 0.09–0.13 Hz, using a Butterworth filter. Similar to the simulation conducted in the warp elasticity section and as illustrated in Figure 1, we generate 1000 pairs of time series, each comprising 400 timesteps, with predetermined relationships.
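A minimal Python sketch of the two comparison measures follows (our own illustration: a centered rectangular window for SWPC and a Hilbert-based PS; windowing details and function names are assumptions):

```python
import numpy as np
from scipy.signal import hilbert

def swpc(x, y, win):
    """Sliding-window Pearson correlation with an odd rectangular
    window of `win` samples centered on each time point."""
    half = win // 2
    out = np.full(len(x), np.nan)
    for t in range(half, len(x) - half):
        xs = x[t - half:t + half + 1]
        ys = y[t - half:t + half + 1]
        out[t] = np.corrcoef(xs, ys)[0, 1]
    return out

def phase_synchrony(x, y):
    """Cosine of the instantaneous-phase difference estimated with the
    Hilbert transform; bounded between -1 and 1 as described above."""
    return np.cos(np.angle(hilbert(x)) - np.angle(hilbert(y)))

# Sanity check: a signal is perfectly correlated and synchronized with itself.
t = np.arange(400)
x = np.sin(2 * np.pi * 0.05 * t)
print(np.allclose(phase_synchrony(x, x), 1.0))   # True
print(np.nanmin(swpc(x, x, win=31)) > 0.999)     # True
```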
In the initial half of each pair, the time series are Gaussian random noise, indicating no relationship across all samples. For the latter half of the signal pairs, we generate 50-second segments of random signals for TS-A and subsequently resample them at varying sampling rates to create TS-B, thereby inducing time-varying stretching and shrinking effects in their couplings. The resampling rates applied to the second half to create TS-B are 0.6, 1.4, 0.6, and 1.4, as illustrated in Figure 3(A). In this simulation, for the initial half of the time series pairs, we anticipate that both SWPC and PS will estimate the ground truth relationship of zero, since these signals are generated independently from a univariate Gaussian distribution. Consequently, we expect warp elasticity to exhibit no stretching or shrinking effects, resulting in a value of 0 seconds. For the second half, we anticipate that both SWPC and PS will exhibit high correlation and high synchrony, respectively, close to a value of 1 for both measures, because the segments are perfectly synced and correlated, with the only variation being due to the different temporal resolutions induced by the resampling rates. Warp elasticity is anticipated to quantify the amount of stretching and shrinking in seconds, aligning with the resampling factors as computed in the warp elasticity section. The ground truth for SWPC and PS is depicted in Figure 3(B), while the ground truth for warp elasticity is illustrated in Figure 3(C). The difference in ground truths underscores the distinction between these measures: trFNC evaluates the time-resolved connectivity between a pair of signals, whereas warp elasticity measures the temporal distortions, specifically stretching and shrinking, of the couplings between a pair of signals.
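The segment-wise resampling used to induce stretching and shrinking can be sketched as follows (our own sketch of the simulation described above; segment handling and names are assumptions):

```python
import numpy as np
from scipy.signal import resample

def stretch_shrink(ts_a, seg_len, factors):
    """Build TS-B by resampling consecutive `seg_len`-sample segments of
    TS-A by the given factors and concatenating the results. A factor
    below 1 shrinks the segment; a factor above 1 stretches it."""
    segments = []
    for i, f in enumerate(factors):
        seg = ts_a[i * seg_len:(i + 1) * seg_len]
        segments.append(resample(seg, int(round(seg_len * f))))
    return np.concatenate(segments)

# Four 50-sample segments resampled at 0.6, 1.4, 0.6, 1.4, as in the text.
rng = np.random.default_rng(2)
ts_a = rng.standard_normal(200)
ts_b = stretch_shrink(ts_a, 50, [0.6, 1.4, 0.6, 1.4])
print(len(ts_b))  # 30 + 70 + 30 + 70 = 200
```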
2.7.2 Noise sensitivity

We examined the impact of noise on warp elasticity by introducing varying levels of noise to the fMRI signals. This approach enabled us to establish distinct signal-to-noise ratios (SNRs) for our data during the evaluation process. Treating the HCP fMRI data as the pure signal, we systematically added noise of different intensities to achieve specific SNR values. We are interested in the sensitivity of warp elasticity within the range of SNRs associated with fMRI. Several studies have reported the SNR range of fMRI data (Hahn & Rowe, 2012; Hughes & Beer, 2012; Särkkä et al., 2012; Welvaert & Rosseel, 2013). Welvaert and Rosseel (2013) provided a concise overview of multiple research papers that discussed the SNR in fMRI and revealed that the temporal SNR (tSNR) of fMRI lies within the range of 1.07 to 84.54, with a mean of 12.98. tSNR is the term used for the SNR of fMRI time series. The typical way of estimating tSNR involves dividing the mean value of the fMRI time series by the variance of the noise time series (Nan & Nowak, 1999; Triantafyllou et al., 2005; Welvaert & Rosseel, 2013). For this reason, our noise sensitivity test does not include the z-scoring step in our fMRI postprocessing. The tSNR is thus computed as

$$\mathrm{tSNR} = \frac{\bar{S}}{\sigma_N},$$

where $\bar{S}$ is the mean of the fMRI time series and $\sigma_N$ is the variance of the noise time series. To test how sensitive our metric is to different levels of noise, we introduce noise signals at different variance levels to the fMRI time series, aligning these variances with specific tSNR values. We do this by generating 100 different noise signals at various tSNR levels for each brain network time series independently. The noise signals were randomly sampled from a Gaussian distribution with a mean of zero and a standard deviation corresponding to the targeted tSNR value.
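The noise-injection step can be sketched as follows (a sketch under the tSNR definition above, assuming $\sigma_N$ denotes the noise standard deviation, the common tSNR convention; the function name is ours):

```python
import numpy as np

def add_noise_at_tsnr(signal, tsnr, rng):
    """Add zero-mean Gaussian noise scaled so that the ratio of the
    signal mean to the noise standard deviation equals the target tSNR."""
    sigma = abs(signal.mean()) / tsnr
    return signal + rng.normal(0.0, sigma, size=signal.shape)

rng = np.random.default_rng(0)
clean = 100.0 + rng.standard_normal(1200)   # raw fMRI-like series, mean ~100
noisy = add_noise_at_tsnr(clean, tsnr=10.0, rng=rng)
empirical = clean.mean() / (noisy - clean).std()
print(9.0 < empirical < 11.0)  # True: recovered tSNR is close to the target
```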
It is important to note that the noise signals added to each brain network were individually created, ensuring that the variability in noise across networks was captured in our analysis. The tSNR levels ranged from 100 to 0.01, with a specific focus on values within the range of 84.54 to 1.07. For each tSNR value, we followed Equation (2) to create 100 sets of noise samples. These noise samples were then added to the original fMRI data, resulting in 100 distinct noisy fMRI signals for each tSNR and each brain network independently. This entire process was repeated for all pairs of brain networks. We calculated correlation coefficients between the warp elasticity of the noisy fMRI signals and the warp elasticity of the actual fMRI signals at each tSNR value. Our goal is to examine the correlation between each noisy fMRI time series and the original, noise-free fMRI time series. We anticipate observing a high correlation, even for low tSNR values, demonstrating that our warp elasticity method is less sensitive to noise. We also conducted a noise sensitivity assessment on simulated data, mirroring the tSNR range utilized in the fMRI noise sensitivity analysis. We created 100 random pairs of time series, each with its corresponding ground truth. Subsequently, we generated 100 random pairs of noise signals from a Gaussian distribution with a mean of zero and a standard deviation aligned with the targeted tSNR value. These noise signals were added to the signal pairs representing the ground truth stretching/shrinking. To evaluate the effect of noise on our estimates, we compared the WE estimations for signals with varying tSNRs against their respective ground truths using correlation analysis.

2.7.3 Bootstrapping

In this analysis, our primary objective is to demonstrate the robustness of our proposed metric, warp elasticity.
To achieve this, we leverage the HCP dataset, whose larger participant pool offers a higher subject count than the fBIRN dataset. Our approach involves randomly selecting four sets of 200 subjects from the HCP dataset. To assess the robustness of warp elasticity across different subject compositions, we apply our methodology to the four selected subject sets separately to obtain the estimated measures for each set. We scale the warp elasticity values by a factor of 0.72, the TR of the HCP dataset. Subsequently, we employ the k-means clustering algorithm on the warp elasticity measure, using the city-block distance function, to cluster the time-resolved vectors resulting from our estimations. This analysis allows us to explore the consistency of connectivity patterns across different subject subsets. We selected a cluster number of 3 for this analysis based on the elbow criterion, by taking the ratio of the within-cluster sum of squared distances (WSS) to the between-cluster sum of squared distances (BSS) (see Supplementary Material for elbow plot).

2.7.4 Group comparison

We employed the fBIRN dataset to assess the effectiveness of warp elasticity in discerning group disparities, specifically between individuals with schizophrenia and a healthy control group. Our approach involved applying warp elasticity across the entire fBIRN dataset. We scale the warp elasticity values by a factor of 2, the TR of the fBIRN dataset. Subsequently, we employed k-means clustering with a city-block distance metric to obtain recurring warp elasticity states (Allen et al., 2014). We chose 3 clusters based on the elbow criterion, by taking the ratio of WSS to BSS (see Supplementary Material for elbow plot).
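The elbow criterion described above can be sketched as a WSS/BSS ratio evaluated per candidate cluster count (our own formulation for illustration; the authors' exact definition may differ):

```python
import numpy as np

def wss_bss_ratio(X, labels, centroids):
    """Ratio of within-cluster sum of squared distances (WSS) to
    between-cluster sum of squared distances (BSS); lower means
    tighter, better-separated clusters."""
    grand = X.mean(axis=0)
    wss = sum(((X[labels == k] - c) ** 2).sum()
              for k, c in enumerate(centroids))
    bss = sum((labels == k).sum() * ((c - grand) ** 2).sum()
              for k, c in enumerate(centroids))
    return wss / bss

# Two tight, well-separated clusters give a very small ratio.
X = np.array([[0.0], [0.1], [10.0], [10.1]])
labels = np.array([0, 0, 1, 1])
centroids = np.array([[0.05], [10.05]])
print(round(wss_bss_ratio(X, labels, centroids), 6))  # 0.0001
```

In practice, the ratio would be computed for a range of cluster counts and the "elbow" where it stops improving sharply would be chosen.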
We performed k-means clustering on the full fBIRN dataset, including the HC and SZ groups, and subsequently separated the clusters by group membership (HC and SZ), allowing a direct comparison between the two groups. After clustering, we visualized the obtained clusters. In addition, to quantify the changes over time, we computed group-specific mean dwell times, percentage occupancy (fraction rate) for each state, and a transition matrix. "Mean dwell time" is defined as the average duration that a subject remains within a specific cluster after entering it; "fraction rate" refers to the average portion of time a subject spends in a particular cluster; the "transition matrix" quantifies the probability of a subject transitioning from one cluster state to another (Allen et al., 2014; Iraji et al., 2020). In this study, we refer to the metrics of mean dwell time, fraction rate, and the transition matrix collectively as WE state dynamics. In our analysis comparing individuals with SZ and HC, we used the two-sample t-test to examine differences in the WE state dynamics. The t-test is appropriate for comparing two independent groups and assumes normally distributed data with equal variances. To address the risk of Type I errors due to multiple comparisons, we applied the False Discovery Rate (FDR) correction, specifically the Benjamini-Hochberg procedure, which adjusts p-values to account for the proportion of expected false positives. Our significance threshold was set at an FDR-adjusted p-value of less than 0.05, following common conventions. This threshold was chosen to ensure that our findings are both statistically robust and meaningful, reflecting true differences in the WE state dynamics between the SZ and HC groups.
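The three WE state dynamics metrics defined above can be sketched for a single subject's state sequence as follows (our own sketch of the standard definitions cited in the text, not the authors' code):

```python
import numpy as np

def we_state_dynamics(states, n_states):
    """Mean dwell time, fraction rate, and transition matrix for one
    subject's cluster-state sequence."""
    states = np.asarray(states)
    # Fraction rate: average portion of time spent in each state.
    frac = np.array([(states == k).mean() for k in range(n_states)])
    # Mean dwell time: average length of consecutive runs of each state.
    runs = [[] for _ in range(n_states)]
    start = 0
    for i in range(1, len(states) + 1):
        if i == len(states) or states[i] != states[start]:
            runs[states[start]].append(i - start)
            start = i
    dwell = np.array([np.mean(r) if r else 0.0 for r in runs])
    # Transition matrix: P(next state = j | current state = i).
    trans = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        trans[a, b] += 1
    row = trans.sum(axis=1, keepdims=True)
    trans = np.divide(trans, row, out=np.zeros_like(trans), where=row > 0)
    return dwell, frac, trans

dwell, frac, trans = we_state_dynamics([0, 0, 1, 1, 1, 0, 2, 2], 3)
print(dwell.tolist())          # [1.5, 3.0, 2.0]
print(round(trans[1, 1], 3))   # 0.667
```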
3 Results

3.1 Comparison with existing trFNC measures

Figure 3 demonstrates the performance of two trFNC measures, sliding window Pearson correlation and phase synchrony, in the context of varying coupling timescales. In Figure 3(B), we present the estimations of SWPC and PS alongside their respective ground truths. The estimation with each window size for SWPC and each bandwidth for PS is compared with the ground truths using Pearson correlations. These correlation values are depicted in the estimation section of Figure 3(B). The initial half of the SWPC and PS estimations across all 1000 samples effectively captures the ground truth of zero synchrony and correlation, as expected. The second half demonstrates poor performance in estimating the ground truth using both SWPC and PS methods. The reason for this poor performance is the varying temporal coupling timescales. Specifically, in SWPC, smaller window sizes exhibit spikes at the points where stretching or shrinking begins to occur between TS-A and TS-B, notably for the 15-second window size around time points 200, 300, and 400 seconds. Larger window sizes also capture these transient couplings, but due to their longer window lengths, these transients are estimated slightly early. Similarly, this effect is observed in the estimation of PS for all bandwidths. It is worth noting that, as shown in Figure 3(A), time points 200, 300, and 400 represent points where stretching or shrinking initiates or ends between TS-A and TS-B. Consequently, points around those instances show no significant stretching or shrinking between TS-A and TS-B. It is evident that in the second half of TS-A and TS-B, negative phase synchronization and negative correlations occur for PS and SWPC, respectively. This phenomenon arises because of the stretching and shrinking effects present in the data.
Warp elasticity effectively estimates no stretching or shrinking for the initial half of the signal pairs TS-A and TS-B across all 1000 samples. In the second half of TS-A and TS-B, warp elasticity accurately estimates the coupled stretching and shrinking timescale, demonstrating a correlation of 0.91 with the ground truth, as depicted in Figure 3(C).

3.2 Warp elasticity noise sensitivity

We first examined warp elasticity sensitivity to noise using the HCP dataset. We generated 100 noisy fMRI signals at tSNR levels ranging from 100 to 0.01. Correlation coefficients between the warp elasticity of the noisy and actual fMRI signals were calculated, revealing the impact of noise on warp elasticity. The results are illustrated in Figure 4, which displays the correlations between the noisy fMRI signals and the actual fMRI signals using various visual representations. In Figure 4(A), the histograms depict the distribution of correlation coefficients across all component pairs and samples for each specific tSNR value. Each tSNR value is represented by a histogram, with the red histograms highlighting the range most relevant to fMRI (1.07 to 84.54), as previously mentioned. Figure 4(A) also showcases box plots and scatter plots, offering an alternative visualization of the information presented in the raincloud plot. These provide clearer insight into the presence of outliers within the correlation coefficient distributions compared with the histograms. The relevant fMRI tSNR histograms within Figure 4(A) (depicted in red) reveal an interesting observation: the distribution of correlation coefficients is primarily clustered at 1, spanning from tSNR values of 100 down to approximately 7. This indicates a strong positive correlation between the variables. For tSNR values ranging from 3.7 to 1, the correlation coefficients vary from 1 to 0.
Most of the coefficients are close to 1, signifying high positive correlations, with only a minor tail extending toward correlation coefficients close to 0. Also, certain samples or network pairs display relatively low correlation values, as evidenced by the box and scatter plots in Figure 4(A). This visual cue emphasizes that while our method demonstrates reduced sensitivity to noise, a subset of samples/network pairs still exhibits correlations that diverge from the typical trend. The magnitude of this variation, however, is very small and can be safely disregarded; nevertheless, we believe it is important to present these findings to provide a comprehensive perspective on the outcomes of our study. Figure 4(B) provides an overview of the average correlation between noisy signals and the actual fMRI data, encompassing various tSNR values. The red dashed lines emphasize the relevant tSNR range for fMRI (1.07 to 84.54). The plot demonstrates strong associations (correlation coefficients ranging from 1 to ~0.66) between noisy fMRI and actual fMRI signals for the relevant fMRI tSNR range. For an evaluation of the global performance of warp elasticity, averaging the correlation coefficients across all samples and network pairs provides the most accurate assessment of its noise sensitivity. This demonstrates that the proposed warp elasticity method exhibits remarkable resilience in the presence of noise, particularly in fMRI data. Furthermore, we examined the noise sensitivity of warp elasticity against ground truth elasticity from simulated data. Figure 4(C) illustrates the average correlation between the warp elasticity of simulated data with varying tSNR values and the ground truth stretching and shrinking factors. Warp elasticity exhibits correlation coefficients exceeding ~0.71 for tSNR values ranging from 100 to 3.72. However, for tSNR values below ~1.93, the reliability of warp elasticity diminishes in the simulation noise sensitivity analysis.
3.3 Bootstrapping

Figure 5 illustrates the robustness of the warp elasticity method through a bootstrap replicability test. We conducted this test by randomly selecting four sets of 200 subjects from the HCP dataset and independently applying the warp elasticity method to each set. Subsequently, we utilized k-means clustering with three clusters to group the estimated warp elasticity and displayed the cluster centroids in Figure 5(A). The rows in Figure 5(A) correspond to the four bootstrap samples, while the columns represent the three clusters for each bootstrap sample. The results show a high level of similarity between the clusters of the bootstrap samples. The correlation matrix of all pairs of bootstrap samples for each cluster is shown in Figure 5(B). In Figure 5(B), it is evident that sample 4 exhibits the lowest correlation coefficients compared with all other samples in all clusters. The minimum coefficient, 0.65, is observed between samples 4 and 2 in cluster 2. However, samples 1, 2, and 3 showcase correlation coefficients above 0.85.

3.4 Group comparison

We conducted a group comparison analysis using the fBIRN dataset to assess the effectiveness of our method in capturing differences between the SZ and HC groups. We employed the warp elasticity method on the entire dataset comprising 311 subjects. These measures were subsequently summarized using k-means clustering, resulting in WE states. We isolated the WE states associated with the SZ and HC groups from the full dataset. Figure 6 displays the cluster centroids of these WE states for both SZ and HC. The warp elasticity values represent the estimated stretching and shrinking between brain networks. Positive values indicate stretching, while negative values indicate shrinking. In Figure 6, the pairwise warp elasticity between brain networks forms a nonsymmetric matrix. However, it is crucial to interpret these results following the conventional approach, reading them row by column.
This is a result of the phenomenon of stretching and shrinking discussed earlier; when one network stretches against another, the latter shrinks by the same magnitude. Hence, the upper and lower triangles of the matrix are essentially mirror images of one another. For instance, in cluster 2, the visual networks (Vis) in the rows display positive values of about +0.2 in the lower triangle with respect to the subcortical (SC), auditory (Aud), and sensorimotor (SM) networks in the columns. This indicates that the Vis networks stretch approximately 20% with respect to the SC, Aud, and SM networks. Conversely, if the SC, Aud, and SM networks are to be considered, they should be read from the rows, and Vis from the columns. In this case, negative values of about −0.2 in the upper triangle signify approximately 20% shrinking of the SC, Aud, and SM networks with respect to Vis. These observations are consistent with the analogy of stretching and shrinking, indicating that stretching in one network corresponds to relative shrinkage of the same amount in the other network. To analyze the group differences between the HC and SZ groups using warp elasticity, we assessed the mean dwell times, fraction rates, and transition matrices for these two groups based on the identified WE states. Figure 7(A) displays the group (average) mean dwell times for both SZ and HC. Notably, an asterisk (*) accompanying the group mean dwell times for cluster 2 highlights a statistically significant difference between the SZ and HC groups using a false discovery rate (FDR) corrected two-sample t-test. The green error bars in the figure represent the standard error associated with the group mean dwell times. Similarly, in Figure 7(B), the group fraction rates for SZ and HC are illustrated, with clusters 1 and 2 exhibiting significant differences. Figure 7(C) provides insight into the transition matrices by showing the $-\log_{10}$ transformation of these matrices for SZ and HC in the first two matrices.
In the third matrix, the sign of the difference in transition matrices between SZ and HC is multiplied by the $-\log_{10}$ transformation of the false discovery rate (FDR) corrected p-values of the transition matrices of SZ and HC. The analysis highlights statistically significant differences in the transitions from WE state 2 to state 2 (noted as 2→2) and from state 2 to state 1 (2→1), as indicated by the corresponding cells (2,2) and (2,1) in the matrices of Figure 7(C). These transition cells reveal significant differences between subjects with SZ and HC. Specifically, the differences are noted in the probability of subjects remaining in WE state 2 (the 2→2 transition) and the probability of subjects transitioning from state 2 to state 1 (the 2→1 transition). This suggests a distinct pattern in the WE state transitions for individuals with schizophrenia compared with healthy individuals.

4 Discussion

The findings presented regarding the warp elasticity method underscore the reliability of DTW as a robust measure for estimating the dynamics of connectivity timescales by accommodating the temporal stretching and shrinking of brain region couplings. In our study, we adhered to standard preprocessing procedures for fMRI data, which involved utilizing ICA to extract intrinsic brain networks. Additionally, we implemented rigorous quality control steps, including detrending, despiking, filtering, and z-scoring. Subsequently, we applied DTW to pairs of brain networks and quantified the warp elasticity. Our investigation focused on discerning consistent patterns of stretching and shrinking among brain network couplings, as captured through cluster centroids, while investigating the ability of warp elasticity to discern group differences, specifically between the SZ and HC groups. This approach provided valuable insights into how different brain regions adapt to each other over time.
Importantly, the limitation associated with SWPC, which employs fixed windows for its computations, is effectively mitigated by warp elasticity. In our illustrative example in Figure 1(C) and (E), the segments (windows) of the ground truth align with one another even when these segments undergo shifts. This illustrates that warp elasticity effectively captures the stretching and shrinking of time series couplings within adaptive windows, highlighting the method's capability to accommodate variations in window length, encompassing longer, shorter, and shifted windows, independently for each signal in a pair across the entire run. This stands in contrast to the conventional static windows used in SWPC, emphasizing the adaptive and dynamic nature of our approach. Figure 3(B) highlights the poor performance of SWPC and PS in capturing the dynamics of coupled stretching and shrinking timescales, as indicated by the correlations between their estimations and the respective ground truths across various window sizes for SWPC and bandwidths for PS. Conversely, the notably high correlation of 0.91 between the ground truth of coupled stretching and shrinking timescale dynamics and the estimations provided by warp elasticity underscores its reliability in accurately estimating these dynamics. Another noteworthy observation is the presence of transient anticorrelations and negative phase synchronization in the SWPC and PS estimations depicted in Figure 3(B). Various factors, such as hemodynamic delay, global signal regression, and physiological noise, have been reported to explain the occurrence of negative correlations between brain region pairs. In this study, we demonstrate that one effect of the stretching and shrinking dynamics is the emergence of transient anticorrelations in SWPC and transient negative synchrony in PS.
Traditional measures often fail to account for these transient phenomena, posing a risk of false estimations of trFC and trFNC. However, warp elasticity proves capable of capturing the stretching and shrinking of coupling timescales, thereby avoiding false anticorrelations and negative synchronizations that may arise due to the stretching and shrinking effects of coupled timescales. The stretching of brain network coupling relative to others may imply a prolonged or delayed neural and hemodynamic response within a specific brain network, indicating that it takes longer for this network to reach its peak activity or return to baseline compared with another network. Conversely, shrinking may suggest a faster response or a more rapid decay in neural and hemodynamic activity within one network compared with another. It is crucial to highlight that the HRF of brain regions can vary significantly, not only among different brain regions but also among individual subjects (Aguirre et al., 1998). Failing to consider this HRF variability can lead to inaccurate FC estimations (Rangaprakash et al., 2018). Our approach effectively addresses this inherent variability and additionally captures dynamic changes that might be related to neural or hemodynamic factors, via stretching and shrinking of timescales within dynamic windows. Another interpretation is that warp elasticity may enable different brain regions or networks to process information at varying temporal scales relative to one another. This variation could stem from neural effects, or it could also be associated with dynamically changing hemodynamic effects, which are complex and typically not thoroughly studied. To ensure the reliability of our measure in the presence of noise, we rigorously evaluated the sensitivity of the warp elasticity method to varying noise levels within the reported range of tSNRs in fMRI data.
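To make the tSNR manipulation concrete, here is a toy Python sketch of injecting noise at a target temporal SNR and checking how the estimate degrades. The tSNR definition used here (baseline mean divided by noise standard deviation) and all signal parameters are simplifying assumptions, not the paper's simulation settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise_at_tsnr(signal, tsnr, baseline=100.0, rng=rng):
    """Add Gaussian noise so that the temporal SNR (baseline mean over the
    temporal standard deviation of the noise) hits a target value. The
    baseline level and this tSNR definition are simplifying assumptions."""
    sigma = baseline / tsnr
    return signal + baseline + rng.normal(0.0, sigma, size=signal.shape)

# Clean "fMRI-like" slow oscillation and noisy copies at two tSNR levels.
t = np.linspace(0, 60, 500)
clean = np.sin(2 * np.pi * 0.05 * t)

results = {}
for tsnr in (100.0, 5.0):
    noisy = add_noise_at_tsnr(clean, tsnr)
    results[tsnr] = np.corrcoef(clean, noisy)[0, 1]
    print(f"tSNR={tsnr:5.1f}  corr with clean = {results[tsnr]:.2f}")
```

Repeating this over many noise realizations and correlating the warp elasticity of the noisy series against the noiseless one yields sensitivity curves of the kind summarized in Figure 4(B).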
The results of our assessment demonstrate the robust performance of our method in the presence of noise. Across 100 samples of noisy fMRI signals at different tSNR values, we computed the average correlation of warp elasticity between these noisy fMRI time series and the actual noiseless fMRI time series. Remarkably, these correlations yielded coefficients that consistently fell within the range of 1 to 0.66 for the relevant fMRI tSNR values typically reported in fMRI studies, as illustrated in Figure 4(B). In addition, we investigated the relationships between simulated data with varying levels of tSNR values and their corresponding ground truths. We found that the reliable range of tSNR for accurately estimating the ground truth was approximately between 100 and 3.7 in Figure 4(C). Within this range, the correlation between the estimated values and the ground truth ranged from 0.87 to 0.71. These findings align with the outcomes reported by Meszlényi et al. (2017), who conducted a similar analysis comparing the noise sensitivity of DTW distances with correlation coefficients and observed that DTW distances were less sensitive to noise compared with correlation coefficients. These compelling outcomes highlight the warp elasticity measure as a considerably less sensitive metric to noise, underlining its reliability and efficacy for fMRI analysis. In our evaluation of warp elasticity replicability, we utilized 200 randomly selected subjects from the HCP dataset and performed analyses across four bootstrap samples, focusing on cluster centroids derived from clustered warp elasticity. The results of our replicability assessment are presented in Figure 5. In Figure 5(B), we display the correlations between all four bootstrap samples across clusters. Notably, there exists a high correlation between samples 1, 2, and 3, with a minimum correlation coefficient of 0.85 observed between samples 3 and 2 in cluster 2. 
This high correlation signifies that samples 1, 2, and 3 exhibit a robust and consistent warp elasticity, indicating a high level of replicability among these random bootstrap samples. On the other hand, sample 4 demonstrates a comparatively lower level of correlation when compared with the rest of the samples. It exhibits a minimum coefficient of 0.65 between samples 4 and 2 in cluster 2, indicating a slightly less correlated warp elasticity for sample 4. However, given the nature of bootstrap analysis, such variability is expected. Nevertheless, these results remain convincing, underscoring the highly replicable nature of warp elasticity across randomly selected subjects. Our analysis extended to examining how warp elasticity could capture group differences between individuals with SZ and HC. We clustered the results of warp elasticity for both SZ and HC subjects into three clusters, as illustrated in Figure 6. One intriguing discovery from our group analysis was the striking resemblance between the WE states observed in the group difference analysis and those identified in the bootstrap analysis. Specifically, in the group analysis illustrated in Figure 6, there is a notable similarity between cluster 1 of the group analysis (Fig. 6) and the bootstrap analysis (Fig. 5). Likewise, cluster 2 of the group analysis (Fig. 6) exhibits striking similarities with the bootstrap analysis (Fig. 5). This alignment between WE states across different analyses further underscores the robustness and consistency of the observed patterns of warp elasticity. Another striking observation revolves around the consistent range of stretching and shrinking observed in both the bootstrap and group analyses. Notably, the warp elasticity estimates in the group analysis fall within the range of −0.2 to +0.2 seconds, while in the bootstrap analysis, the range extends from −0.28 to +0.28 seconds. 
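The replicability comparison described above amounts to correlating cluster centroids across bootstrap samples. The toy data below (three similar samples and one more variable one) is hypothetical and only mimics the qualitative pattern of Figure 5(B):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cluster centroids (flattened warp-elasticity maps) from four
# bootstrap samples: three close to a shared pattern, one noisier,
# loosely mirroring the "sample 4" behavior in Figure 5(B).
base = rng.normal(size=50)
centroids = {
    1: base + 0.05 * rng.normal(size=50),
    2: base + 0.05 * rng.normal(size=50),
    3: base + 0.05 * rng.normal(size=50),
    4: base + 0.80 * rng.normal(size=50),   # the more variable sample
}

# Pairwise Pearson correlations between sample centroids.
samples = sorted(centroids)
R = np.corrcoef([centroids[s] for s in samples])
for i in range(len(samples)):
    for j in range(i + 1, len(samples)):
        print(f"sample {samples[i]} vs {samples[j]}: r = {R[i, j]:.2f}")
```

High off-diagonal correlations among the first three samples, and a lower correlation for the fourth, reproduce the qualitative replicability pattern discussed in the text.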
Despite the consistency in the range of the warp elasticity between the HCP and fBIRN datasets, there exists a slight difference of ±0.08 seconds. This disparity is speculated to be influenced by several factors, including variations in the age groups of the datasets, the number of subjects included, and potential differences in the time frame of the scans. The group analysis was conducted using the fBIRN dataset, comprising 311 subjects (including both healthy control and SZ individuals), 159 time points, and a mean age of 38.77 ± 11.63 years. Conversely, the bootstrap analysis was based on the HCP dataset, which included 827 subjects, 1200 time points, and a mean age of 28 ± 6.5 years. This subtle difference in the range of warp elasticity between the two analyses may be attributed to the age discrepancy. Specifically, the younger brains in the HCP dataset may demonstrate more pronounced stretching/shrinking compared with the relatively older brains in the fBIRN dataset. Additionally, the variation in time points might contribute, as individuals in the HCP dataset might exhibit more extensive stretching/shrinking due to an extended scanning period compared with those in the fBIRN dataset. Moreover, the difference in the number of subjects could potentially contribute to more generalized estimations in the HCP dataset, as it provides a broader range of subjects compared with the fBIRN dataset, which has a smaller subject pool. It is important to highlight that despite the slight difference in the range of warp elasticity values, the overall patterns between the group and bootstrap analyses remain consistent. In Figure 6, the WE states appear similar between patients with schizophrenia and controls. However, significant differences emerge in the dynamics of these WE states, particularly in terms of their mean dwell times, transitions, and occupancy.
This could be attributed to the possibility that stretching and shrinking patterns constitute a fundamental neural dynamic, unchanged between these two groups. Coupled with the similarity of the group-analysis WE states to those of the bootstrap analysis, these WE states invite further speculation about what they could potentially mean. Cluster 1 in Figures 5 and 6 shows stretching of the CC and DM networks with respect to all other networks (SM, Aud, Vis, Cb, and SC). The stretching observed in the CC and DM networks during resting-state scans suggests that there might be an increased level of cognitive engagement or activity at certain times. These networks play a role in higher-order cognitive processes such as self-reflection, mental planning, and executive control. The stretching could reflect fluctuations in the degree to which individuals engage in these cognitive activities while at rest. Furthermore, stretching in the Vis networks relative to SM, Aud, Cb, and SC, particularly in cluster 1 of sample 4 of the bootstrap analysis (Fig. 5), is also evident, although it is not as pronounced as in the CC and DM regions. This stretching may indicate that the Vis networks are working in coordination with the DM and CC networks during periods of mind wandering and mental imagery. The literature suggests evidence of mental imagery being involved with Vis networks (Ganis et al., 2004; Le Bihan et al., 1993; Pearson, 2019). It is noteworthy that, as shown in Figure 7, the fraction rate of this state is the highest compared with other states, indicating that it is the most prevalent state. This aligns with our expectations regarding the heightened activity of DM and CC regions during rest. Cluster 2 exhibits stretching in the Vis networks with respect to all other networks in both the group analysis and the bootstrap analysis.
This observation might suggest a WE state characterized by heightened mental imagery or visual thought processing during resting-state scans. While the resting state is typically associated with the default mode network, the emergence of this state with unconventional results is not entirely unexpected. This is because we know that imagination and mind wandering are common during resting state. We speculate that the Vis networks could potentially play a more significant role in mind wandering and imagination than previously thought, given their stretching relative to all other brain networks (Ganis et al., 2004; Pearson, 2019). It is important to note that this interpretation is speculative and based on prior knowledge. Cluster 3 presents an intriguing inverse pattern compared with Cluster 2, where the Vis networks appear to shrink relative to all other networks in the group analysis. Notably, this pattern is also primarily observed in sample 4 of the bootstrap analysis within cluster 3. This raises a significant question, as per our previous speculations. It would imply a state where Vis networks are exhibiting lower processing activity in comparison with all other networks. The implications of this state are uncertain and warrant further investigation. Another noteworthy aspect of this state that aligns with the bootstrap results is the stretching of Cb networks with respect to all other networks. This finding hints at the potential involvement of cerebellar regions in higher cognitive processes, which is consistent with prior studies (Adamaszek et al., 2017; Beuriat et al., 2022; E et al., 2014; Koziol et al., 2014; Van Overwalle et al., 2020). These studies have indicated that cerebellar regions play a role in coordinating cognitive tasks such as timing and sequencing. 
We detected a statistical difference between the SZ and HC groups in cluster 2’s mean dwell times, based on FDR-corrected p-values obtained from a two-sample t-test between SZ and HC for the clusters highlighted in Figure 7(A). Notably, HC subjects spend significantly more time within cluster 2 (a state that suggests mental imagery and visual thought processing) once they enter it compared with SZ subjects. Similar significant differences were found for cluster 2 in the fraction rate, as depicted in Figure 7(B): HC subjects also spent a significantly larger fraction of time in cluster 2 than the SZ group. Furthermore, we identified significant differences between individuals with SZ and HC in cluster 1 based on the fraction rates of the two groups. In this case, the fraction rate for the SZ group was significantly higher than that of the HC group, meaning that SZ subjects spend more time than HC subjects at rest in a state suggesting higher-order cognitive processing, such as DM and CC processing. Lastly, the examination of transition matrices reveals significant differences in the (2→1) and (2→2) transitions, as illustrated in Figure 7(C). These findings align with the results obtained from mean dwell times and fraction rates and provide further insights. The (2→2) transition indicates a significantly higher probability of HC remaining in WE state 2, which is associated with visual thought processing, compared with individuals with SZ. Specifically, HCs are more likely to stay in this visual processing state than SZ subjects. Conversely, the (2→1) transition highlights a significant difference in the likelihood of subjects with SZ transitioning from WE state 2, linked to visual thought processing, to state 1, which is characterized by mind wandering and higher levels of cognitive processing. This transition is more probable for SZ subjects than for HCs, as demonstrated in Figure 7(C).
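The summary statistics analyzed in this section — fraction rate (occupancy), mean dwell time, and transition probabilities — can each be computed from a per-window state sequence. A minimal numpy sketch on a toy sequence (the state labels and sequence are illustrative, not our data):

```python
import numpy as np

def state_dynamics(states, n_states):
    """Fraction rate, mean dwell time, and row-normalized transition
    matrix from a per-window state sequence."""
    states = np.asarray(states)
    # Fraction rate: share of windows spent in each state.
    frac = np.array([(states == s).mean() for s in range(n_states)])
    # Mean dwell time: average length of consecutive runs of each state.
    dwell = np.zeros(n_states)
    for s in range(n_states):
        runs, run = [], 0
        for v in states:
            if v == s:
                run += 1
            elif run:
                runs.append(run)
                run = 0
        if run:
            runs.append(run)
        dwell[s] = np.mean(runs) if runs else 0.0
    # Transition matrix: T[i, j] = P(next state j | current state i).
    T = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        T[a, b] += 1
    T /= np.maximum(T.sum(axis=1, keepdims=True), 1)
    return frac, dwell, T

seq = [0, 0, 1, 1, 1, 0, 1, 1, 0, 0]
frac, dwell, T = state_dynamics(seq, 2)
print(frac)   # each state occupies half the windows
print(dwell)  # state 0 ≈ 1.67 windows per visit, state 1 = 2.5
print(T)      # T[1, 1] is the probability of remaining in state 1
```

Group comparisons such as those in Figure 7 then apply two-sample t-tests across subjects to each of these quantities, with FDR correction over the tested cells.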
These findings suggest distinct differences in cognitive processing patterns and state transitions between HC and SZ groups. The observed disparities in stretching and shrinking dynamics of brain region coupling between individuals with SZ and HC are intriguing, yet the exact implications remain uncertain. It is evident, however, that these differences exist, potentially stemming from the intricate interplay of neural and hemodynamic factors. This complexity underscores the need for further exploration, particularly through the application of advanced dynamic programming algorithms such as DTW. In our study, we utilized warp elasticity to estimate the stretching and shrinking of signal coupling within specific adaptive windows, as illustrated in Figure 1. While this approach provided valuable insights, it is important to note the limitations. The visualization of stretching and shrinking windows in the actual fMRI time series pair was not feasible due to the intricate and multifaceted nature of these phenomena. Moreover, the precise significance of these stretching and shrinking patterns within interconnected brain networks remains elusive. To offer some intuitive understanding, consider cluster 2 in Figure 6 as an example. It suggests that the Vis networks may stretch approximately 6 seconds in a 30-second timeframe (which is 0.2 times 30 seconds) in relation to CC and DM networks. Despite these visualization and interpretative challenges, our research results are both intriguing and hold great potential. The intriguing results we have obtained serve as a catalyst for further investigations aimed at unraveling the nuanced meanings underlying connectivity alterations. 
Subsequent research may benefit from incorporating further clinical metrics or analysis of specific subgroups to delve deeper into how chronic mental health conditions, disabilities, and the usage of psychotropic medications may affect the observed patterns of the dynamics of the timescale of functional connectivity within the schizophrenia cohort. We also suggest that further studies perform warp elasticity on EEG data, as this may reveal more granular stretching and shrinking patterns given the high temporal resolution it provides. Warp elasticity could enhance our grasp of the underlying influences on the findings and their relevance to clinical endpoints. This study not only highlights the existence of these dynamics but also emphasizes the imperative to delve deeper into their implications, driving us toward a more comprehensive understanding of the intricate workings of the human brain.

5 Conclusion

In conclusion, our study delves into the complexities of brain network dynamics using the innovative warp elasticity method, revealing intricate patterns of stretching and shrinking of brain region coupling or connectivity within specific adaptive windows. By employing advanced techniques such as DTW, we highlighted the adaptability of brain regions over time, shedding light on the temporal nuances often overlooked in conventional analyses. Our research, conducted with rigorous methodology and thorough evaluations, underscores the robustness and reliability of warp elasticity in capturing these dynamic changes, even amidst the challenges posed by noise and individual subject variability. The observed differences in stretching and shrinking dynamics in the coupling of functional connectivity between individuals with schizophrenia and healthy controls, although intriguing, pose compelling questions about their underlying neural mechanisms.
While our study marks a significant step forward, further research is imperative to unravel the precise implications of these findings, offering a promising avenue for advancing our understanding of the intricacies of human brain connectivity.

Data and Code Availability

The function for estimating warp elasticity in the MATLAB language can be accessed through GitHub (https://github.com/Sirlord-Sen/DTW_warp_elasticity). The data were not collected by us and were provided in a deidentified manner. The IRB will not allow sharing of data or individual derivatives, as a data reuse agreement was not signed by the subjects during the original acquisition.

Author Contributions

Sir-Lord Wiafe: Conceptualization, Formal analysis, Methodology, Visualization, Writing–original draft. Ashkan Faghiri: Conceptualization, Methodology, Supervision, Validation, Writing–review & editing. Zening Fu: Resources, Writing–review & editing. Robyn Miller and Adrian Preda: Writing–review & editing. Vince D. Calhoun: Conceptualization, Funding acquisition, Validation, Methodology, Resources, Supervision, Writing–review & editing.

Declaration of Competing Interest

This work was supported by the National Institutes of Health (NIH) grant R01MH123610, the National Science Foundation (NSF) grant #2112455, and the National Center for Research Resources at the NIH [grant numbers: NIH 1 U24 RR021992 (Function Biomedical Informatics Research Network) and NIH 1 U24 RR025736-01 (Biomedical Informatics Research Network Coordinating Center; http://www.birncommunity.org)].

Supplementary Materials

References

- Discovery and replication of time-resolved functional network connectivity differences in adolescence and adulthood in over 50K fMRI datasets. In 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC).
- Replicability of time-varying connectivity patterns in large resting state fMRI samples.
- A resilient, low-frequency, small-world human brain functional network with highly connected association cortical hubs. Journal of Neuroscience.
- Consensus paper: Cerebellum and emotion. The Cerebellum.
- Tracking whole-brain connectivity dynamics in the resting state. Cerebral Cortex.
- The shifting role of the cerebellum in executive, emotional and social processing across the lifespan. Behavioral and Brain Functions.
- Functional connectivity in the motor cortex of resting human brain using echo-planar MRI. Magnetic Resonance in Medicine.
- Test–retest reliability of resting-state connectivity network characteristics using fMRI and graph theoretical measures.
- A method for making group inferences from functional MRI data using independent component analysis. Human Brain Mapping.
- The chronnectome: Time-varying connectivity networks as the next frontier in fMRI data discovery.
- Influence of heart rate on the BOLD signal: The cardiac response function.
- Dynamic functional connectivity impairments in early schizophrenia and clinical high-risk for psychosis.
- NeuroMark: An automated and adaptive ICA based pipeline to identify reproducible fMRI markers of brain disorders. NeuroImage: Clinical.
- A meta-analysis of cerebellar contributions to higher cognition from PET and fMRI studies. Human Brain Mapping.
- Co-activation patterns distinguish cortical modules, their connectivity and functional differentiation.
- fMRIPrep: A robust preprocessing pipeline for functional MRI. Nature Methods.
- Single sideband modulation as a tool to improve functional connectivity estimation. In 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI).
- Frequency modulation increases the specificity of time-resolved connectivity: A resting-state fMRI study.
- The human brain is intrinsically organized into dynamic, anticorrelated functional networks. Proceedings of the National Academy of Sciences.
- Characterizing dynamic amplitude of low-frequency fluctuation and its relationship with dynamic functional connectivity: An application to schizophrenia.
- Fluctuations between high- and low-modularity topology in time-resolved functional connectivity.
- Brain areas underlying visual mental imagery and visual perception: An fMRI study. Cognitive Brain Research.
- Functional magnetic resonance imaging phase synchronization as a measure of dynamic functional connectivity. Brain Connectivity.
- Physiologic noise regression, motion regression, and TOAST dynamic field correction in complex-valued fMRI time series.
- Dynamic functional connectivity: Promise, issues, and interpretations.
- Tools of the trade: Estimating time-varying connectivity patterns from fMRI data. Social Cognitive and Affective Neuroscience.
- A method for functional network connectivity among spatially independent resting-state components in schizophrenia.
- A 5D approach to study spatio-temporal dynamism of resting-state brain networks in schizophrenia. In 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC).
- A deep residual model for characterization of 5D spatiotemporal network dynamics reveals widespread spatiodynamic changes in schizophrenia. Frontiers in Neuroimaging.
- Abnormal intrinsic brain functional network dynamics in Parkinson’s disease.
- Consensus paper: The cerebellum’s role in movement and cognition.
- Activation of human primary visual cortex during visual recall: A magnetic resonance imaging study. Proceedings of the National Academy of Sciences.
- Modeling the hemodynamic response function in fMRI: Efficiency, bias and mis-modeling.
- Dynamic time warping outperforms Pearson correlation in detecting atypical functional connectivity in autism spectrum disorders.
- Questions and controversies in the study of time-varying functional connectivity in resting fMRI. Network Neuroscience.
- Resting state fMRI functional connectivity analysis using dynamic time warping. Frontiers in Neuroscience.
- A method for estimating and characterizing explicitly nonlinear dynamic functional network connectivity in resting-state fMRI data. Journal of Neuroscience Methods.
- The impact of global signal regression on resting state correlations: Are anti-correlated networks introduced?
- Generalized likelihood ratio detection for fMRI using complex data. IEEE Transactions on Medical Imaging.
- Dynamics of large-scale electrophysiological networks: A technical review.
- A modification over Sakoe and Chiba’s dynamic time warping algorithm for isolated word recognition. Signal Processing.
- On the relationship between instantaneous phase synchrony and correlation-based sliding windows for time-resolved fMRI connectivity analysis.
- Statistical parametric mapping: The analysis of functional brain images.
- Dynamic time warping identifies functionally distinct fMRI resting state cortical networks specific to VTA and SNc: A proof of concept. Cerebral Cortex.
- Digital signal processing: Principles, algorithms, and applications (4th ed.). Pearson Education India.
- Hemodynamic response function (HRF) variability confounds resting-state fMRI functional connectivity. Magnetic Resonance in Medicine.
- Functional connectivity dynamics reflect disability and multi-domain clinical impairment in patients with relapsing-remitting multiple sclerosis. NeuroImage: Clinical.
- Trouble at rest: How correlation patterns and group differences become distorted after global signal regression. Brain Connectivity.
- Dynamic programming algorithm optimization for spoken word recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing.
- A method for evaluating dynamic functional network connectivity and task-modulation: Application to schizophrenia.
- Progressive alteration of dynamic functional connectivity patterns in subcortical ischemic vascular cognitive impairment patients. Neurobiology of Aging.
- Dynamic retrospective filtering of physiological noise in BOLD fMRI: DRIFTER.
- Dynamic time warping algorithm review. Information and Computer Science Department, University of Hawaii at Manoa, Honolulu, USA.
- The resting brain: Unconstrained yet reliable. Cerebral Cortex.
- Low-frequency fluctuations in the cardiac rate as a source of variance in the resting-state fMRI BOLD signal.
- Brain state-dependent dynamic functional connectivity patterns in attention-deficit/hyperactivity disorder. Journal of Psychiatric Research.
- Comparison of physiological noise at 1.5 T, 3 T and 7 T and optimization of fMRI acquisition parameters.
- Functional magnetic resonance imaging of the human brain: Data acquisition and analysis. Experimental Brain Research.
- The WU-Minn Human Connectome Project: An overview.
- The Human Connectome Project: A data acquisition perspective.
- Consensus paper: Cerebellum and social cognition. The Cerebellum.
- Phase and amplitude, two sides of functional connectivity. In 2023 45th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC).
- Resting fluctuations in arterial carbon dioxide induce significant low frequency variations in BOLD signal.
- Dynamic coherence analysis of resting fMRI data to jointly capture state-based phase, frequency, and time-domain information.

© 2024 Massachusetts Institute of Technology. This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International (CC BY 4.0) license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.